## _*H2 ground state energy computation using Quantum Phase Estimation*_
This notebook demonstrates using Qiskit Aqua Chemistry to compute the ground state energy of the hydrogen (H2) molecule with the QPE (Quantum Phase Estimation) algorithm. Let's first look at how to carry out such a computation programmatically. Afterwards, we will illustrate how the same computation can be driven by JSON configuration dictionaries.
This notebook has been written to use the PYSCF chemistry driver. See the PYSCF chemistry driver readme if you need to install the external PySCF library that this driver requires.
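If the library is not already available, it can usually be installed with pip from within the notebook; this is a hint only, and the driver readme remains the authoritative reference:
```
# Optional: install the external PySCF library required by the PYSCF driver.
# Uncomment the line below if PySCF is not already installed in your environment.
# !pip install pyscf
```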
We first set up the H2 molecule, then create the fermionic operator and, in turn, the qubit operator using PySCF.
```
from collections import OrderedDict
from qiskit import LegacySimulators
from qiskit.transpiler import PassManager
from qiskit_aqua import AquaError
from qiskit_aqua import QuantumInstance
from qiskit_aqua.algorithms import ExactEigensolver
from qiskit_aqua.algorithms import QPE
from qiskit_aqua.components.iqfts import Standard
from qiskit_aqua_chemistry import FermionicOperator
from qiskit_aqua_chemistry import AquaChemistry
from qiskit_aqua_chemistry.drivers import ConfigurationManager
from qiskit_aqua_chemistry.aqua_extensions.components.initial_states import HartreeFock
import time
distance = 0.735
cfg_mgr = ConfigurationManager()
pyscf_cfg = OrderedDict([
('atom', 'H .0 .0 .0; H .0 .0 {}'.format(distance)),
('unit', 'Angstrom'),
('charge', 0),
('spin', 0),
('basis', 'sto3g')
])
section = {}
section['properties'] = pyscf_cfg
try:
driver = cfg_mgr.get_driver_instance('PYSCF')
except ModuleNotFoundError:
raise AquaError('PYSCF driver does not appear to be installed')
molecule = driver.run(section)
qubit_mapping = 'parity'
fer_op = FermionicOperator(h1=molecule.one_body_integrals, h2=molecule.two_body_integrals)
qubit_op = fer_op.mapping(map_type=qubit_mapping,threshold=1e-10).two_qubit_reduced_operator(2)
```
Using a classical exact eigenvalue solver, we can establish a reference (ground-truth) value for the ground state energy:
```
exact_eigensolver = ExactEigensolver(qubit_op, k=1)
result_ee = exact_eigensolver.run()
reference_energy = result_ee['energy']
print('The exact ground state energy is: {}'.format(result_ee['energy']))
```
Next we set up the QPE algorithm instance using the HartreeFock initial state and a standard inverse quantum Fourier transform, and execute it:
```
num_particles = molecule.num_alpha + molecule.num_beta
two_qubit_reduction = True
num_orbitals = qubit_op.num_qubits + (2 if two_qubit_reduction else 0)
num_time_slices = 50
n_ancillae = 9
state_in = HartreeFock(qubit_op.num_qubits, num_orbitals,
num_particles, qubit_mapping, two_qubit_reduction)
iqft = Standard(n_ancillae)
qpe = QPE(qubit_op, state_in, iqft, num_time_slices, n_ancillae,
paulis_grouping='random', expansion_mode='suzuki',
expansion_order=2, shallow_circuit_concat=True)
backend = LegacySimulators.get_backend('qasm_simulator')
quantum_instance = QuantumInstance(backend, shots=100, pass_manager=PassManager())
result_qpe = qpe.run(quantum_instance)
print('The ground state energy as computed by QPE is: {}'.format(result_qpe['energy']))
```
As can be seen, the QPE-computed energy is quite close to the reference value we computed earlier.
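For a quick numerical check, the two results obtained above can be compared directly (a minimal sketch using the `reference_energy` and `result_qpe` values already computed):
```
print('Difference between QPE and reference energy: {}'.format(
    abs(result_qpe['energy'] - reference_energy)))
```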
Next we demonstrate how the same computation can be carried out using JSON dictionaries to drive the qiskit_aqua_chemistry stack. Such a dictionary can of course also be manipulated programmatically. A sibling notebook, `h2_iqpe`, is also provided; it showcases how ground state energies over a range of inter-atomic distances can be computed and plotted.
```
molecule = 'H .0 .0 0; H .0 .0 {}'.format(distance)
# Input dictionary to configure Qiskit Aqua Chemistry for the chemistry problem.
aqua_chemistry_qpe_dict = {
'driver': {'name': 'PYSCF'},
'PYSCF': {
'atom': molecule,
'basis': 'sto3g'
},
'operator': {'name': 'hamiltonian', 'transformation': 'full', 'qubit_mapping': 'parity'},
'algorithm': {
'name': 'QPE',
'num_ancillae': 9,
'num_time_slices': 50,
'expansion_mode': 'suzuki',
'expansion_order': 2,
},
'initial_state': {'name': 'HartreeFock'},
'backend': {'shots': 100}
}
aqua_chemistry_ees_dict = {
'driver': {'name': 'PYSCF'},
'PYSCF': {'atom': molecule, 'basis': 'sto3g'},
'operator': {'name': 'hamiltonian', 'transformation': 'full', 'qubit_mapping': 'parity'},
'algorithm': {
'name': 'ExactEigensolver',
}
}
```
With the two algorithms configured, we can then run them and check the results, as follows.
```
result_qpe = AquaChemistry().run(aqua_chemistry_qpe_dict, backend=backend)
result_ees = AquaChemistry().run(aqua_chemistry_ees_dict)
print('The groundtruth total ground state energy is {}.'.format(
result_ees['energy'] - result_ees['nuclear_repulsion_energy']
))
print('The total ground state energy as computed by QPE is {}.'.format(
result_qpe['energy'] - result_qpe['nuclear_repulsion_energy']
))
print('In comparison, the Hartree-Fock ground state energy is {}.'.format(
result_ees['hf_energy'] - result_ees['nuclear_repulsion_energy']
))
```
---
For this problem set, we'll be working in the Jupyter notebook.
---
## Part A (2 points)
Write a function that returns a list of numbers, such that $x_i=i^2$, for $1\leq i \leq n$. Make sure it handles the case where $n<1$ by raising a `ValueError`.
```
def squares(n):
"""Compute the squares of numbers from 1 to n, such that the
ith element of the returned list equals i^2.
"""
### BEGIN SOLUTION
if n < 1:
raise ValueError("n must be greater than or equal to 1")
return [i ** 2 for i in range(1, n + 1)]
### END SOLUTION
```
Your function should return `[1, 4, 9, 16, 25, 36, 49, 64, 81, 100]` for $n=10$. Check that it does:
```
squares(10)
"""Check that squares returns the correct output for several inputs"""
assert squares(1) == [1]
assert squares(2) == [1, 4]
assert squares(10) == [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
assert squares(11) == [1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121]
"""Check that squares raises an error for invalid inputs"""
try:
squares(0)
except ValueError:
pass
else:
raise AssertionError("did not raise")
try:
squares(-4)
except ValueError:
pass
else:
raise AssertionError("did not raise")
```
---
## Part B (1 point)
Using your `squares` function, write a function that computes the sum of the squares of the numbers from 1 to $n$. Your function should call the `squares` function -- it should NOT reimplement its functionality.
```
def sum_of_squares(n):
"""Compute the sum of the squares of numbers from 1 to n."""
### BEGIN SOLUTION
return sum(squares(n))
### END SOLUTION
```
The sum of squares from 1 to 10 should be 385. Verify that this is the answer you get:
```
sum_of_squares(10)
"""Check that sum_of_squares returns the correct answer for various inputs."""
assert sum_of_squares(1) == 1
assert sum_of_squares(2) == 5
assert sum_of_squares(10) == 385
assert sum_of_squares(11) == 506
"""Check that sum_of_squares relies on squares."""
orig_squares = squares
del squares
try:
sum_of_squares(1)
except NameError:
pass
else:
raise AssertionError("sum_of_squares does not use squares")
finally:
squares = orig_squares
```
---
## Part C (1 point)
Using LaTeX math notation, write out the equation that is implemented by your `sum_of_squares` function.
$\sum_{i=1}^n i^2$
---
## Part D (2 points)
Find a use case for your `sum_of_squares` function and implement that use case in the cell below.
```
def pyramidal_number(n):
"""Returns the n^th pyramidal number"""
return sum_of_squares(n)
```
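As a small sanity check (a sketch; the expected values follow directly from the definition and from the standard closed form $\frac{n(n+1)(2n+1)}{6}$ for the sum of the first $n$ squares):
```
assert pyramidal_number(4) == 30                    # 1 + 4 + 9 + 16
assert pyramidal_number(10) == 10 * 11 * 21 // 6    # 385, via the closed form
```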
# Work placement salary prediction based on grades and education
### Multiple linear regression, with a comparison to Ridge and Lasso
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from scipy import stats
from sklearn import metrics
from sklearn.linear_model import Lasso, Ridge
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
# Import data from csv file
filename = r".\data\Placement_Data_Full_Class.csv"
df = pd.read_csv(filename)
# Initial EDA
print(df.head(10))
print(df.shape)
print(df.dtypes)
print(df.describe())
print(df.isna().sum())
```
#### Data cleaning and pre-processing
```
# Drop individuals not currently working
data = df.dropna(subset=['salary'])
# Drop secondary education and non-relevant information
data.drop(columns=['sl_no', 'ssc_b', 'hsc_b', 'hsc_s', 'status'], inplace=True)
# final EDA
print(data.head(10))
print(data.shape)
print(data.dtypes)
print(data.describe())
print(data.isna().sum())
# Reset index of final data
data.reset_index(inplace=True, drop=True)
# Get dummy variables for categorical data
data = pd.get_dummies(data, drop_first=True)
# Remove outliers
z_scores = stats.zscore(data)
abs_z_scores = np.abs(z_scores)
filtered_entries = (abs_z_scores < 5).all(axis=1)
data = data[filtered_entries]
# Split of data into train and test
X = data.drop(columns=['salary'])
y = data.salary
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# Visualisation of relevant numeric columns
sns.pairplot(data, vars=['degree_p', 'etest_p', 'mba_p', 'salary'])
plt.show()
# Salary box-plot
plt.boxplot(data.salary)
plt.show()
```
#### Linear regression
```
# Linear regression model
regressor = LinearRegression()
regressor.fit(X_train, y_train)
y_pred_reg = regressor.predict(X_test)
print('Linear Regressor:')
print('Mean Absolute Error:', metrics.mean_absolute_error(y_test, y_pred_reg))
print('Mean Squared Error:', metrics.mean_squared_error(y_test, y_pred_reg))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, y_pred_reg)))
print('Error relative to mean:', round(np.sqrt(metrics.mean_squared_error(y_test, y_pred_reg)) / y.mean() * 100, 2),
'%')
print('Score: ', regressor.score(X_test, y_test))
comparison = pd.DataFrame({'Actual': y_test, 'Predicted': y_pred_reg})
comparison.plot(kind='bar', figsize=(10, 8))
plt.title('Linear regression')
plt.xlabel('Person index')
plt.ylabel('Salary')
plt.grid(which='major', linestyle='-', linewidth='0.5', color='green')
plt.grid(which='minor', linestyle=':', linewidth='0.5', color='black')
plt.show()
coeff_df = pd.DataFrame(regressor.coef_, X.columns, columns=['Coefficient'])
# Cross validation
cv_results = cross_val_score(regressor, X, y, cv=5)
print(cv_results)
np.mean(cv_results)
# Linear regression with MinMaxScaler
steps = [('scaler', MinMaxScaler()),
('regressor', LinearRegression())]
pipeline = Pipeline(steps)
pipeline.fit(X_train, y_train)
y_pred_pip = pipeline.predict(X_test)
print('Linear Regressor with MinMaxScaler:')
print('Mean Absolute Error:', metrics.mean_absolute_error(y_test, y_pred_pip))
print('Mean Squared Error:', metrics.mean_squared_error(y_test, y_pred_pip))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, y_pred_pip)))
print('Error relative to mean:', round(np.sqrt(metrics.mean_squared_error(y_test, y_pred_pip)) / y.mean() * 100, 2),
'%')
print('Score: ', pipeline.score(X_test, y_test))
comparison = pd.DataFrame({'Actual': y_test, 'Predicted': y_pred_pip})
comparison.plot(kind='bar', figsize=(10, 8))
plt.title('Linear regression with MinMaxScaler')
plt.xlabel('Person index')
plt.ylabel('Salary')
plt.grid(which='major', linestyle='-', linewidth='0.5', color='green')
plt.grid(which='minor', linestyle=':', linewidth='0.5', color='black')
plt.show()
cv_results = cross_val_score(pipeline, X, y, cv=5)
print(cv_results)
np.mean(cv_results)
```
#### Regularisation
```
# Ridge
ridge = Ridge(alpha=0.1, normalize=True)
ridge.fit(X_train, y_train)
ridge_pred = ridge.predict(X_test)
ridge.score(X_test, y_test)
# Lasso
lasso = Lasso(alpha=0.1, normalize=True)
lasso.fit(X_train, y_train)
lasso_pred = lasso.predict(X_test)
lasso.score(X_test, y_test)
# Lasso for feature selection
names = X.columns
lasso = Lasso(alpha=0.1)
lasso_coef = lasso.fit(X, y).coef_
_ = plt.plot(range(len(names)), lasso_coef)
_ = plt.xticks(range(len(names)), names, rotation=90)
_ = plt.ylabel('Coefficients')
_ = plt.grid(linestyle='-', linewidth=0.5)
plt.show()
comparison = pd.DataFrame({'Feature': names, 'Lasso Coefficient': lasso_coef})
comparison.plot(kind='bar', figsize=(10, 8))
plt.title('Lasso for feature selection')
plt.xlabel('Feature')
plt.ylabel('Coefficients')
plt.xticks(range(len(names)), names, rotation=90)
plt.grid(linestyle='-', linewidth=0.5)
plt.show()
# Summary of selected features and discarded features
non_selected_feat = names[abs(lasso_coef) == 0]
selected_feat = names[abs(lasso_coef) != 0]
print('total features: {}'.format(len(names)))
print('selected features: {}'.format(len(selected_feat)))
print('features with coefficients shrank to zero: {} - {}'.format(len(non_selected_feat), non_selected_feat[0]))
```
## Dependencies
```
import warnings, json, re, math
from melanoma_utility_scripts import *
from kaggle_datasets import KaggleDatasets
from sklearn.model_selection import KFold, RandomizedSearchCV, GridSearchCV, cross_val_score, cross_validate
from xgboost import XGBClassifier
SEED = 42
seed_everything(SEED)
warnings.filterwarnings("ignore")
```
# Model parameters
```
config = {
"HEIGHT": 512,
"WIDTH": 512,
"CHANNELS": 3,
"N_FOLDS": 5,
"N_USED_FOLDS": 5,
"DATASET_PATH": 'melanoma-512x512'
}
with open('config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
config
```
# Load data
```
database_base_path = '/kaggle/input/siim-isic-melanoma-classification/'
train = pd.read_csv(f"/kaggle/input/{config['DATASET_PATH']}/train.csv")
test = pd.read_csv(database_base_path + 'test.csv')
print('Train samples: %d' % len(train))
display(train.head())
print(f'Test samples: {len(test)}')
display(test.head())
```
# Missing values
```
# age_approx (mean)
train['age_approx'].fillna(train['age_approx'].mean(), inplace=True)
test['age_approx'].fillna(train['age_approx'].mean(), inplace=True)
# anatom_site_general_challenge (NaN)
train['anatom_site_general_challenge'].fillna('NaN', inplace=True)
test['anatom_site_general_challenge'].fillna('NaN', inplace=True)
# sex (mode)
train['sex'].fillna(train['sex'].mode()[0], inplace=True)
test['sex'].fillna(train['sex'].mode()[0], inplace=True)
```
# Feature engineering
```
### One-hot encoding
train = pd.concat([train, pd.get_dummies(train['sex'], prefix='sex_ohe', drop_first=True)], axis=1)
test = pd.concat([test, pd.get_dummies(test['sex'], prefix='sex_ohe', drop_first=True)], axis=1)
train = pd.concat([train, pd.get_dummies(train['anatom_site_general_challenge'], prefix='anatom_ohe')], axis=1)
test = pd.concat([test, pd.get_dummies(test['anatom_site_general_challenge'], prefix='anatom_ohe')], axis=1)
### Mean encoding
# Sex
train['sex_mean'] = train['sex'].map(train.groupby(['sex']).target.mean())
test['sex_mean'] = test['sex'].map(train.groupby(['sex']).target.mean())
# Anatomical site
train['anatom_mean'] = train['anatom_site_general_challenge'].map(train.groupby(['anatom_site_general_challenge']).target.mean())
test['anatom_mean'] = test['anatom_site_general_challenge'].map(train.groupby(['anatom_site_general_challenge']).target.mean())
print('Train set')
display(train.head())
print('Test set')
display(test.head())
```
# Model
```
features = ['age_approx']
ohe_features = [col for col in train.columns if 'ohe' in col]
features += ohe_features
print(features)
params = {'n_estimators': 750,
'min_child_weight': 0.81,
'learning_rate': 0.025,
'max_depth': 2,
'subsample': 0.80,
'colsample_bytree': 0.42,
'gamma': 0.10,
'random_state': SEED,
'n_jobs': -1}
```
# Training
```
skf = KFold(n_splits=config['N_USED_FOLDS'], shuffle=True, random_state=SEED)
test['target'] = 0
model_list = []
for fold,(idxT, idxV) in enumerate(skf.split(np.arange(15))):
print(f'\nFOLD: {fold+1}')
print(f'TRAIN: {idxT} VALID: {idxV}')
    train[f'fold_{fold+1}'] = train.apply(lambda x: 'train' if x['tfrecord'] in idxT else 'validation', axis=1)
x_train = train[train['tfrecord'].isin(idxT)]
y_train = x_train['target']
x_valid = train[~train['tfrecord'].isin(idxT)]
y_valid = x_valid['target']
model = XGBClassifier(**params)
model.fit(x_train[features], y_train, eval_set=[(x_valid[features], y_valid)],
eval_metric='auc', early_stopping_rounds=10, verbose=0)
model_list.append(model)
    # Evaluation
preds = model.predict_proba(train[features])[:, 1]
train[f'pred_fold_{fold+1}'] = preds
# Inference
preds = model.predict_proba(test[features])[:, 1]
test[f'pred_fold_{fold+1}'] = preds
test['target'] += preds / config['N_USED_FOLDS']
```
# Feature importance
```
for n_fold, model in enumerate(model_list):
print(f'Fold: {n_fold + 1}')
feature_importance = model.get_booster().get_score(importance_type='weight')
keys = list(feature_importance.keys())
values = list(feature_importance.values())
importance = pd.DataFrame(data=values, index=keys,
columns=['score']).sort_values(by='score',
ascending=False)
plt.figure(figsize=(16, 8))
sns.barplot(x=importance.score.iloc[:20],
y=importance.index[:20],
orient='h',
palette='Reds_r')
plt.show()
```
# Model evaluation
```
display(evaluate_model(train, config['N_USED_FOLDS']).style.applymap(color_map))
display(evaluate_model_Subset(train, config['N_USED_FOLDS']).style.applymap(color_map))
```
# Adversarial Validation
As a check on train/test distribution shift, a classifier is trained below to distinguish training rows from test rows using the same features; an adversarial AUC close to 0.5 means the two sets are hard to tell apart.
```
### Adversarial set
adv_train = train.copy()
adv_test = test.copy()
adv_train['dataset'] = 1
adv_test['dataset'] = 0
x_adv = pd.concat([adv_train, adv_test], axis=0)
y_adv = x_adv['dataset']
### Adversarial model
model_adv = XGBClassifier(**params)
model_adv.fit(x_adv[features], y_adv, eval_metric='auc', verbose=0)
### Preds
preds = model_adv.predict_proba(x_adv[features])[:, 1]
### Plot feature importance and ROC AUC curve
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 10))
# Feature importance
feature_importance = model_adv.get_booster().get_score(importance_type='weight')
keys = list(feature_importance.keys())
values = list(feature_importance.values())
importance = pd.DataFrame(data=values, index=keys,
columns=['score']).sort_values(by='score',
ascending=False)
ax1.set_title('Feature Importances')
sns.barplot(x=importance.score.iloc[:20],
y=importance.index[:20],
orient='h',
palette='Reds_r',
ax=ax1)
# Plot ROC AUC curve
fpr_train, tpr_train, _ = roc_curve(y_adv, preds)
roc_auc_train = auc(fpr_train, tpr_train)
ax2.set_title('ROC AUC curve')
ax2.plot(fpr_train, tpr_train, color='blue', label='Adversarial AUC = %0.2f' % roc_auc_train)
ax2.legend(loc = 'lower right')
ax2.plot([0, 1], [0, 1],'r--')
ax2.set_xlim([0, 1])
ax2.set_ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
```
# Visualize predictions
```
train['pred'] = 0
for n_fold in range(config['N_USED_FOLDS']):
train['pred'] += train[f'pred_fold_{n_fold+1}'] / config['N_FOLDS']
print('Label/prediction distribution')
print(f"Train positive labels: {len(train[train['target'] > .5])}")
print(f"Train positive predictions: {len(train[train['pred'] > .5])}")
print(f"Train positive correct predictions: {len(train[(train['target'] > .5) & (train['pred'] > .5)])}")
print('Top 10 samples')
display(train[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis',
'target', 'pred'] + [c for c in train.columns if (c.startswith('pred_fold'))]].head(10))
print('Top 10 positive samples')
display(train[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis',
'target', 'pred'] + [c for c in train.columns if (c.startswith('pred_fold'))]].query('target == 1').head(10))
print('Top 10 predicted positive samples')
display(train[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis',
'target', 'pred'] + [c for c in train.columns if (c.startswith('pred_fold'))]].query('pred > .5').head(10))
```
# Visualize test predictions
```
print(f"Test predictions {len(test[test['target'] > .5])}|{len(test[test['target'] <= .5])}")
print('Top 10 samples')
display(test[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'target'] +
[c for c in test.columns if (c.startswith('pred_fold'))]].head(10))
print('Top 10 positive samples')
display(test[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'target'] +
[c for c in test.columns if (c.startswith('pred_fold'))]].query('target > .5').head(10))
```
# Test set predictions
```
submission = pd.read_csv(database_base_path + 'sample_submission.csv')
submission['target'] = test['target']
display(submission.head(10))
display(submission.describe())
submission[['image_name', 'target']].to_csv('submission.csv', index=False)
```
# Emulators: Measuring performance
This example illustrates how a neural network performs in emulating the log-likelihood surface of a time series and in Bayesian inference, using a two-step MCMC procedure with an emulator neural network ([Emulated Metropolis MCMC](../sampling/first-example.ipynb)).
It follows on from [Emulators: First example](../mcmc/first-example-emulator.ipynb).
As in the first example, I start by importing pints:
```
import pints
```
Next, I create a rescaled model class that wraps the "Logistic" toy model included in pints:
```
import pints.toy as toy
class RescaledModel(pints.ForwardModel):
def __init__(self):
self.base_model = toy.LogisticModel()
def simulate(self, parameters, times):
# Run a simulation with the given parameters for the
# given times and return the simulated values
r, k = parameters
r = r / 50
k = k * 500
return self.base_model.simulate([r, k], times)
def simulateS1(self, parameters, times):
# Run a simulation with the given parameters for the
# given times and return the simulated values
r, k = parameters
r = r / 50
k = k * 500
return self.base_model.simulateS1([r, k], times)
def n_parameters(self):
# Return the dimension of the parameter vector
return 2
model = toy.LogisticModel()
```
In order to generate some test data, I choose an arbitrary set of "true" parameters:
```
true_parameters = [0.015, 500]
start_parameters = [0.75, 1.0] # rescaled true parameters
```
And a number of time points at which to sample the time series:
```
import numpy as np
times = np.linspace(0, 1000, 400)
```
Using these parameters and time points, I generate an example dataset:
```
org_values = model.simulate(true_parameters, times)
range_values = max(org_values) - min(org_values)
```
And make it more realistic by adding Gaussian noise:
```
noise = 0.05 * range_values
print("Gaussian noise:", noise)
values = org_values + np.random.normal(0, noise, org_values.shape)
```
Using matplotlib and seaborn (optional - for styling), I look at the noisy time series I just simulated:
```
import matplotlib.pyplot as plt
import seaborn as sns
import itertools
sns.set(context='notebook', style='whitegrid', palette="deep", font='Times New Roman',
font_scale=1.5, color_codes=True, rc={"grid.linewidth": 1})
palette = itertools.cycle(sns.color_palette())
c=next(palette)
fig = plt.figure(figsize=(12,4.5))
plt.xlabel('Time')
plt.ylabel('Values')
plt.plot(times, org_values, lw=2, c=c, label='Original data')
plt.plot(times, values, '--', c=c, label='Noisy data')
plt.legend()
plt.show()
fig.savefig("results/logistic.png", bbox_inches='tight', dpi=200)
```
Now, I have enough data (a model, a list of times, and a list of values) to formulate a PINTS problem:
```
model = RescaledModel()
problem = pints.SingleOutputProblem(model, times, values)
```
I now have some toy data, and a model that can be used for forward simulations. To make it into a probabilistic problem, a _noise model_ needs to be added. This can be done using the `GaussianKnownSigmaLogLikelihood` class, which assumes independently distributed Gaussian noise with a known standard deviation over the data, and can calculate log-likelihoods:
```
log_likelihood = pints.GaussianKnownSigmaLogLikelihood(problem, noise)
```
This `log_likelihood` represents the _conditional probability_ $p(y|\theta)$: given a set of parameters $\theta$ and a series of observed values $y$, it calculates the (log) probability of observing those values if the true parameters are $\theta$.
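Since the noise is modelled here as independent Gaussian with a known standard deviation $\sigma$, this log-likelihood takes the standard form
$\log p(y|\theta) = -\frac{N}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{N}\left(y_i - f(t_i;\theta)\right)^2$
where $f(t_i;\theta)$ is the model simulation at time $t_i$ and $N$ is the number of data points.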
This can be used in a Bayesian inference scheme to find the quantity of interest:
$p(\theta|y) = \frac{p(\theta)p(y|\theta)}{p(y)} \propto p(\theta)p(y|\theta)$
To solve this, a _prior_ is defined, indicating an initial guess about what the parameters should be.
As with the _log-likelihood_ (the natural logarithm of a likelihood), this is specified as a _log-prior_. Hence, the above equation becomes:
$\log p(\theta|y) \propto \log p(\theta) + \log p(y|\theta)$
In this example, it is assumed that we don't know much a priori beyond lower and upper bounds for each parameter: the rescaled growth rate is taken to lie on $[0.6, 0.9]$ and the rescaled carrying capacity on $[0.8, 1.2]$ (roughly $r \in [0.012, 0.018]$ and $k \in [400, 600]$ in the original units). The noise standard deviation is treated as known here, so it is not inferred.
```
# Create (rescaled) bounds for our parameters and get prior
#bounds = pints.RectangularBoundaries([0.5, 0.8], [1.0, 1.2])
#bounds = pints.RectangularBoundaries([0.7125, 0.95], [0.7875, 1.05])
#bounds = pints.RectangularBoundaries([0.675, 0.90], [0.825, 1.1])
#bounds = pints.RectangularBoundaries([0.525, 0.7], [0.975, 1.3])
bounds = pints.RectangularBoundaries([0.6, 0.8], [0.9, 1.2])
log_prior = pints.UniformLogPrior(bounds)
```
With this prior, the numerator of Bayes' rule can be defined -- the unnormalised log posterior, $\log \left[ p(y|\theta) p(\theta) \right]$, which is the natural logarithm of the likelihood times the prior:
```
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
sns.set(context='notebook', style='whitegrid', palette="deep", font='Times New Roman',
font_scale=1.5, color_codes=True, rc={"grid.linewidth": 1})
input_parameters = log_prior.sample(2000)
x = [p[0] for p in input_parameters]
y = [p[1] for p in input_parameters]
likelihoods = np.apply_along_axis(log_likelihood, 1, input_parameters)
likelihoods[:5]
print(min(x), max(x))
print(min(y), max(y))
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x, y, list(likelihoods))
plt.show()
#fig.savefig("figures/training-data-best-nn-6-64.png", bbox_inches='tight', dpi=600)
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.model_selection import train_test_split
X_train, X_valid, y_train, y_valid = train_test_split(input_parameters, likelihoods, test_size=0.3, random_state=0)
emu = pints.MultiLayerNN(problem, X_train, y_train, input_scaler=MinMaxScaler(), output_scaler=StandardScaler())
emu.set_parameters(layers=6, neurons=64, hidden_activation='relu', activation='linear', learning_rate=0.0001)
hist = emu.fit(epochs=500, batch_size=32, X_val=X_valid, y_val=y_valid, verbose=0)
emu.summary()
emu([0.75, 1])
log_likelihood([0.75, 1])
# summarize history for loss
#print(hist.history.keys())
sns.set(context='notebook', style='whitegrid', palette="deep", font='Times New Roman',
font_scale=1.5, color_codes=True, rc={"grid.linewidth": 1})
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(20,10))
ax1.title.set_text('Learning curves based on MSE')
ax2.title.set_text('Learning curves based on MAE')
ax1.plot(hist.history['loss'])
ax1.plot(hist.history['val_loss'])
ax1.set_ylabel('MSE')
ax1.set_xlabel('Epoch')
ax1.legend(['training', 'validation'], loc='upper left')
ax2.plot(hist.history['mean_absolute_error'])
ax2.plot(hist.history['val_mean_absolute_error'])
ax2.set_ylabel('MAE')
ax2.set_xlabel('Epoch')
ax2.legend(['training', 'validation'], loc='upper left')
ax3.plot(hist.history['rescaled_mse'])
ax3.plot(hist.history['val_rescaled_mse'])
ax3.set_ylabel('Rescaled MSE')
ax3.set_xlabel('Epoch')
ax3.legend(['training', 'validation'], loc='upper left')
ax4.plot(hist.history['rescaled_mae'])
ax4.plot(hist.history['val_rescaled_mae'])
ax4.set_ylabel('Rescaled MAE')
ax4.set_xlabel('Epoch')
ax4.legend(['training', 'validation'], loc='upper left')
plt.show()
fig.savefig("results/training-best-nn-6-64.png", bbox_inches='tight', dpi=200)
len(hist.history['loss'])
import matplotlib.pyplot as plt
import matplotlib as mpl
from mpl_toolkits.mplot3d import Axes3D
sns.set(context='notebook', style='whitegrid', palette="deep", font='Times New Roman',
font_scale=1.5, color_codes=True, rc={"grid.linewidth": 1})
test_splits = 50 # number of splits along each axis
r_grid, k_grid, test_data = pints.generate_grid(bounds.lower(), bounds.upper(), test_splits)
model_prediction = pints.predict_grid(log_likelihood, test_data)
emu_prediction = pints.predict_grid(emu, test_data)
angle=(25, 300)
alpha=0.7
fontsize=16
labelpad=15
title = "Comparison of log-likelihood surfaces"
x_label = "Growth rate (r)"
y_label = "Carrying capacity (k)"
z_label = "Log-likelihood"
fig = plt.figure(figsize=(15,10))
ax = plt.axes(projection='3d')
ax.plot_surface(r_grid, k_grid, model_prediction, cmap='Blues', edgecolor='none', alpha=alpha)
ax.plot_surface(r_grid, k_grid, emu_prediction, cmap='Oranges', edgecolor='none', alpha=alpha)
ax.view_init(*angle)
plt.title(title, fontsize=fontsize*1.25)
ax.set_xlabel(x_label, fontsize=fontsize, labelpad=labelpad)
ax.set_ylabel(y_label, fontsize=fontsize, labelpad=labelpad)
ax.set_zlabel(z_label, fontsize=fontsize, labelpad=labelpad)
fake2Dline1 = mpl.lines.Line2D([0],[0], linestyle="none", c='blue', marker = 'o', alpha=0.5)
fake2Dline2 = mpl.lines.Line2D([0],[0], linestyle="none", c='orange', marker = 'o', alpha=0.8)
ax.legend([fake2Dline1, fake2Dline2], ["True log-likelihood", "NN emulator log-likelihood"])
plt.show()
fig.savefig("results/likelihood-surfaces-best-nn-6-64.png", bbox_inches='tight', dpi=200)
mape = np.mean(np.abs((model_prediction - emu_prediction) / model_prediction))
mape
delta_k = np.array([0.0, 0.005])
delta_r = np.array([0.005, 0.0])
gradients_k = (pints.predict_grid(log_likelihood, (test_data + delta_k)) - pints.predict_grid(log_likelihood, (test_data - delta_k))) / (sum(2*delta_k))
gradients_r = (pints.predict_grid(log_likelihood, (test_data + delta_r)) - pints.predict_grid(log_likelihood, (test_data - delta_r))) / (sum(2*delta_r))
emu_gradients_k = (pints.predict_grid(emu, (test_data + delta_k)) - pints.predict_grid(emu, (test_data - delta_k))) / (sum(2*delta_k))
emu_gradients_r = (pints.predict_grid(emu, (test_data + delta_r)) - pints.predict_grid(emu, (test_data - delta_r))) / (sum(2*delta_r))
mape_k = np.mean(np.abs((gradients_k - emu_gradients_k) / gradients_k))
mape_r = np.mean(np.abs((gradients_r - emu_gradients_r) / gradients_r))
print((mape_k+mape_r)/2)
import time
n_chains = 3
n_iter = 30000 # Add stopping criterion
warm_up = 10000
sigma0 = np.abs(start_parameters) * 5e-05 # Choose a covariance matrix for the proposal step
x0 = [
np.array(start_parameters) * 0.9,#0.95,#0.9,
np.array(start_parameters) * 1.05,#0.97,#1.05,
np.array(start_parameters) * 1.15,#1.05,#1.15,
]
scaling_factors = [1/50, 500]
param_names=["r", "k"]
log_posterior_emu = pints.LogPosterior(emu, log_prior)
```
## Running MCMC routines
### Adaptive Covariance MCMC
```
# Create mcmc routine
mcmc = pints.MCMCController(log_posterior, n_chains, x0)
# Add stopping criterion
mcmc.set_max_iterations(n_iter)
# Disable logging mode
#mcmc.set_log_to_screen(False)
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Show traces and histograms
pints.plot.trace(chains)
# Discard warm up
chains_thinned = chains[:, warm_up:, :]
# Check convergence using rhat criterion
print('R-hat:')
print(pints.rhat_all_params(chains_thinned))
# Look at distribution in chain 0
pints.plot.pairwise(chains_thinned[0])
# Show graphs
plt.show()
```
### Standard Metropolis Hastings MCMC
```
# Create mcmc routine
mcmc = pints.MCMCController(log_posterior, n_chains, x0, sigma0=sigma0, method=pints.MetropolisRandomWalkMCMC)
# Add stopping criterion
mcmc.set_max_iterations(n_iter)
# Disable logging mode
#mcmc.set_log_to_screen(False)
# Run!
print('Running...')
metropolis_chains = mcmc.run()
print('Done!')
# Show traces and histograms
pints.plot.trace(metropolis_chains)
# Discard warm up
metropolis_chains_thinned = metropolis_chains[:, warm_up:, :]
# Check convergence using rhat criterion
print('R-hat:')
print(pints.rhat_all_params(metropolis_chains_thinned))
# Look at distribution in chain 0
pints.plot.pairwise(metropolis_chains_thinned[0])
# Show graphs
plt.show()
# Revert scaling
metropolis_chains_rescaled = np.copy(metropolis_chains)
metropolis_chain_rescaled = metropolis_chains_rescaled[0]
metropolis_chain_rescaled = metropolis_chain_rescaled[warm_up:]
metropolis_chains = np.array([[[s*f for s,f in zip(samples, scaling_factors)] for samples in chain]
for chain in metropolis_chains])
metropolis_chain = metropolis_chains[0]
metropolis_chain = metropolis_chain[warm_up:]
```
### Metropolis Hastings MCMC using NN as posterior
```
# Create mcmc routine
mcmc = pints.MCMCController(log_posterior_emu, n_chains, x0, sigma0=sigma0, method=pints.MetropolisRandomWalkMCMC)
# Add stopping criterion
mcmc.set_max_iterations(n_iter)
# Disable logging mode
#mcmc.set_log_to_screen(False)
# Run!
print('Running...')
chains_emu = mcmc.run()
print('Done!')
# Show traces and histograms
pints.plot.trace(chains_emu)
# Discard warm up
chains_emu_thinned = chains_emu[:, warm_up:, :]
# Check convergence using rhat criterion
print('R-hat:')
print(pints.rhat_all_params(chains_emu_thinned))
# Look at distribution in chain 0
pints.plot.pairwise(chains_emu_thinned[0])
# Show graphs
plt.show()
# Revert scaling
chains_emu_rescaled = np.copy(chains_emu)
chain_emu_rescaled = chains_emu_rescaled[0]
chain_emu_rescaled = chain_emu_rescaled[warm_up:]
chains_emu = np.array([[[s*f for s,f in zip(samples, scaling_factors)] for samples in chain] for chain in chains_emu])
chain_emu = chains_emu[0]
chain_emu = chain_emu[warm_up:]
```
### 2-Step MCMC using NN as emulator
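In a two-step (delayed-acceptance) scheme of this kind, each proposal $\theta'$ is first screened using the cheap emulator posterior $\hat{p}(\theta|y)$, and only proposals that pass this first stage are evaluated with the expensive true posterior $p(\theta|y)$. With a symmetric proposal, the generic construction uses the two acceptance probabilities
$\alpha_1 = \min\left(1, \frac{\hat{p}(\theta'|y)}{\hat{p}(\theta|y)}\right), \qquad \alpha_2 = \min\left(1, \frac{p(\theta'|y)\,\hat{p}(\theta|y)}{p(\theta|y)\,\hat{p}(\theta'|y)}\right)$
which together leave the true posterior invariant. This is background only; I assume `pints.EmulatedMetropolisMCMC` follows this construction, with `log_posterior_emu` playing the role of $\hat{p}$ and `f=log_posterior` that of $p$.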
```
# Create mcmc routine
mcmc = pints.MCMCController(log_posterior_emu, n_chains, x0, sigma0=sigma0, method=pints.EmulatedMetropolisMCMC, f=log_posterior)
# Add stopping criterion
mcmc.set_max_iterations(n_iter)
# Disable logging mode
#mcmc.set_log_to_screen(False)
# Run!
print('Running...')
emulated_chains = mcmc.run()
print('Done!')
# Show traces and histograms
pints.plot.trace(emulated_chains)
# Discard warm up
emulated_chains_thinned = emulated_chains[:, warm_up:, :]
# Check convergence using rhat criterion
print('R-hat:')
print(pints.rhat_all_params(emulated_chains_thinned))
# Look at distribution in chain 0
pints.plot.pairwise(emulated_chains_thinned[0])
# Show graphs
plt.show()
acceptance_rates = mcmc.acceptance_rates()
acceptance_rates
# Revert scaling
emulated_chains_rescaled = np.copy(emulated_chains)
emulated_chain_rescaled = emulated_chains_rescaled[0]
emulated_chain_rescaled = emulated_chain_rescaled[warm_up:]
emulated_chains = np.array([[[s*f for s,f in zip(samples, scaling_factors)] for samples in chain]
for chain in emulated_chains])
emulated_chain = emulated_chains[0]
emulated_chain = emulated_chain[warm_up:]
```
## Examining NN performance
```
emu_prediction = np.apply_along_axis(emu, 1, metropolis_chain_rescaled).flatten()
model_prediction = np.apply_along_axis(log_likelihood, 1, metropolis_chain_rescaled).flatten()
diffs = (np.abs((model_prediction - emu_prediction) / model_prediction))
iters = np.linspace(0, n_iter-warm_up, len(metropolis_chain_rescaled))
plt.figure(figsize=(10, 5))
plt.title("Emulator and model absolute differences along a chain of MCMC")
plt.xlabel("Number of iterations")
plt.ylabel("Likelihood")
plt.plot(iters, diffs, color = "Black")
plt.show()
diffs.mean()
emu_prediction = np.apply_along_axis(emu, 1, chain_emu_rescaled).flatten()
model_prediction = np.apply_along_axis(log_likelihood, 1, metropolis_chain_rescaled).flatten()
diffs = (np.abs((model_prediction - emu_prediction) / model_prediction))
iters = np.linspace(0, n_iter-warm_up, len(chain_emu_rescaled))
fig = plt.figure(figsize=(10, 5))
plt.title("Emulator and model errors along a chain of MCMC")
plt.xlabel("Iteration")
plt.ylabel("Absolute percentage error")
plt.plot(iters, diffs)#, color = "Black")
plt.show()
fig.savefig("results/mcmc-diffs-best-nn-6-64.png", bbox_inches='tight', dpi=200)
diffs[-1]
sns.set(context='notebook', style='ticks', palette="deep", font='Times New Roman',
font_scale=1.5, color_codes=True, rc={"grid.linewidth": 1})
# Create grid of parameters
x = [p[0] for p in metropolis_chain_rescaled]
y = [p[1] for p in metropolis_chain_rescaled]
xmin, xmax = np.min(x), np.max(x)
ymin, ymax = np.min(y), np.max(y)
xx, yy = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]
params = [list(n) for n in zip(xx, yy)]
ll = np.apply_along_axis(log_likelihood, 1, params)
ll_emu = np.apply_along_axis(emu, 1, params)
ll_emu = [list(e[0][0]) for e in ll_emu]
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,7))
ax1.title.set_text('Log-Likelihood')
ax2.title.set_text('Neural Network')
ax1.set_xlabel('Rescaled growth rate (r)')
ax1.set_ylabel('Rescaled carrying capacity (k)')
ax2.set_xlabel('Rescaled growth rate (r)')
ax2.set_ylabel('Rescaled carrying capacity (k)')
ax1.contourf(xx, yy, ll, cmap='Blues', alpha=0.8, extent=[xmin, xmax, ymin, ymax])
ax1.contour(xx, yy, ll, colors='k')
ax2.contourf(xx, yy, ll_emu, cmap='Oranges', alpha=0.8, extent=[xmin, xmax, ymin, ymax])
ax2.contour(xx, yy, ll_emu, colors='k')
plt.show()
fig.savefig("results/close-contours-best-nn-6-64.png", bbox_inches='tight', dpi=200)
import seaborn as sns
sns.set(context='notebook', style='ticks', palette="deep", font='Times New Roman',
font_scale=1.5, color_codes=True, rc={"grid.linewidth": 1})
# Create grid of parameters
xmin, xmax = 0.5, 1.0
ymin, ymax = 0.8, 1.2
xx, yy = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]
params = [list(n) for n in zip(xx, yy)]
ll = np.apply_along_axis(log_likelihood, 1, params)
ll_emu = np.apply_along_axis(emu, 1, params)
ll_emu = [list(e[0][0]) for e in ll_emu]
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,7))
ax1.title.set_text('Log-Likelihood')
ax2.title.set_text('Neural Network')
ax1.set_xlabel('Rescaled growth rate (r)')
ax1.set_ylabel('Rescaled carrying capacity (k)')
ax2.set_xlabel('Rescaled growth rate (r)')
ax2.set_ylabel('Rescaled carrying capacity (k)')
ax1.contourf(xx, yy, ll, cmap='Blues', alpha=0.8, extent=[xmin, xmax, ymin, ymax])
ax1.contour(xx, yy, ll, colors='k')
ax2.contourf(xx, yy, ll_emu, cmap='Oranges', alpha=0.8, extent=[xmin, xmax, ymin, ymax])
ax2.contour(xx, yy, ll_emu, colors='k')
plt.show()
fig.savefig("results/contours.png", bbox_inches='tight', dpi=200)
# Create grid of parameters
xmin, xmax = 0.675, 0.825
ymin, ymax = 0.9, 1.1
xx, yy = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]
params = [list(n) for n in zip(xx, yy)]
ll = np.apply_along_axis(log_likelihood, 1, params)
ll_emu = np.apply_along_axis(emu, 1, params)
ll_emu = [list(e[0][0]) for e in ll_emu]
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,7))
ax1.title.set_text('Log-Likelihood')
ax2.title.set_text('Neural Network')
ax1.set_xlabel('Rescaled growth rate (r)')
ax1.set_ylabel('Rescaled carrying capacity (k)')
ax2.set_xlabel('Rescaled growth rate (r)')
ax2.set_ylabel('Rescaled carrying capacity (k)')
ax1.contourf(xx, yy, ll, cmap='Blues', alpha=0.8, extent=[xmin, xmax, ymin, ymax])
ax1.contour(xx, yy, ll, colors='k')
ax2.contourf(xx, yy, ll_emu, cmap='Oranges', alpha=0.8, extent=[xmin, xmax, ymin, ymax])
ax2.contour(xx, yy, ll_emu, colors='k')
plt.show()
fig.savefig("results/contours-closer.png", bbox_inches='tight', dpi=200)
# Create grid of parameters
xmin, xmax = 0.7125, 0.7875
ymin, ymax = 0.95, 1.05
xx, yy = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]
params = [list(n) for n in zip(xx, yy)]
ll = np.apply_along_axis(log_likelihood, 1, params)
ll_emu = np.apply_along_axis(emu, 1, params)
ll_emu = [list(e[0][0]) for e in ll_emu]
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,7))
ax1.title.set_text('Log-Likelihood')
ax2.title.set_text('Neural Network')
ax1.set_xlabel('Rescaled growth rate (r)')
ax1.set_ylabel('Rescaled carrying capacity (k)')
ax2.set_xlabel('Rescaled growth rate (r)')
ax2.set_ylabel('Rescaled carrying capacity (k)')
ax1.contourf(xx, yy, ll, cmap='Blues', alpha=0.8, extent=[xmin, xmax, ymin, ymax])
ax1.contour(xx, yy, ll, colors='k')
ax2.contourf(xx, yy, ll_emu, cmap='Oranges', alpha=0.8, extent=[xmin, xmax, ymin, ymax])
ax2.contour(xx, yy, ll_emu, colors='k')
plt.show()
fig.savefig("results/contours-closest.png", bbox_inches='tight', dpi=200)
sns.set(context='notebook', style='whitegrid', palette="deep", font='Times New Roman',
font_scale=1.5, color_codes=True, rc={"grid.linewidth": 1})
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,7))
ax1.title.set_text('Log-likelihood contours with MCMC samples')
ax2.title.set_text('Neural network contours with MCMC samples')
ax1.set_xlabel('Rescaled growth rate (r)')
ax1.set_ylabel('Rescaled carrying capacity (k)')
ax2.set_xlabel('Rescaled growth rate (r)')
ax2.set_ylabel('Rescaled carrying capacity (k)')
# Create grid of parameters
x = [p[0] for p in chain_emu_rescaled]
y = [p[1] for p in chain_emu_rescaled]
xmin, xmax = np.min(x), np.max(x)
ymin, ymax = np.min(y), np.max(y)
xx, yy = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]
params = [list(n) for n in zip(xx, yy)]
ll = np.apply_along_axis(log_likelihood, 1, params)
ll_emu = np.apply_along_axis(emu, 1, params)
ll_emu = [list(e[0][0]) for e in ll_emu]
# Sort according to differences in log-likelihood
idx = diffs.argsort()
x_sorted = np.array(x)[idx]
y_sorted = np.array(y)[idx]
diffs_sorted = diffs[idx]
# Add contour lines of log-likelihood
ax1.contourf(xx, yy, ll, cmap='Greys', extent=[xmin, xmax, ymin, ymax])
#ax1.contour(xx, yy, ll, colors='w')
# Plot chain_emu
ax1.set_xlim([xmin, xmax])
ax1.set_ylim([ymin, ymax])
im1 = ax1.scatter(x_sorted, y_sorted, c=diffs_sorted, s=70, edgecolor='k', cmap="RdYlGn_r")
# Add contour lines of emulated likelihood
ax2.contourf(xx, yy, ll_emu, cmap='Greys', extent=[xmin, xmax, ymin, ymax])
#ax2.contour(xx, yy, ll_emu, colors='w')
# Plot chain_emu
ax2.set_xlim([xmin, xmax])
ax2.set_ylim([ymin, ymax])
im2 = ax2.scatter(x_sorted, y_sorted, c=diffs_sorted, s=70, edgecolor='k', cmap="RdYlGn_r")
fig.colorbar(im1, ax=ax1)
fig.colorbar(im2, ax=ax2)
plt.show()
fig.savefig("results/errors-on-contours-best-nn-6-64.png", bbox_inches='tight', dpi=200)
```
## Comparing NN performance to 2-step MCMC performance
```
sns.set(context='notebook', style='whitegrid', palette="deep", font='Times New Roman',
font_scale=1.5, color_codes=True, rc={"grid.linewidth": 1})
fig, axes = pints.plot.histogram([metropolis_chain, chain_emu, emulated_chain],
ref_parameters=true_parameters,
sample_names=["MCMC", "Emulator", "2-Step MCMC"],
parameter_names=["Growth rate (r)", "Maximum capacity (k)"])
fig.set_size_inches(14, 10)
plt.subplots_adjust(wspace=0, hspace=0.4)
plt.show()
fig.savefig("results/log-posterior-samples-best-nn-6-64.png", bbox_inches='tight', dpi=200)
sns.set(context='paper', style='whitegrid', palette="deep", font='Times New Roman',
font_scale=1.5, color_codes=True, rc={"grid.linewidth": 1})
fig, axes = pints.plot.trace(metropolis_chains, ref_parameters=true_parameters,
parameter_names=["Growth rate (r)", "Maximum capacity (k)"])
fig.set_size_inches(12, 8)
plt.subplots_adjust(wspace=0.2, hspace=0.3)
plt.show()
fig.savefig("results/traces-chainmcmc-best-nn-6-64.png", bbox_inches='tight', dpi=200)
sns.set(context='paper', style='whitegrid', palette="deep", font='Times New Roman',
font_scale=1.5, color_codes=True, rc={"grid.linewidth": 1})
fig, axes = pints.plot.trace(chains_emu, ref_parameters=true_parameters,
parameter_names=["Growth rate (r)", "Maximum capacity (k)"])
fig.set_size_inches(12, 8)
plt.subplots_adjust(wspace=0.2, hspace=0.3)
plt.show()
fig.savefig("results/traces-chainemu-best-nn-6-64.png", bbox_inches='tight', dpi=200)
sns.set(context='paper', style='whitegrid', palette="deep", font='Times New Roman',
font_scale=1.5, color_codes=True, rc={"grid.linewidth": 1})
fig, axes = pints.plot.trace(emulated_chains, ref_parameters=true_parameters,
parameter_names=["Growth rate (r)", "Maximum capacity (k)"])
fig.set_size_inches(12, 8)
plt.subplots_adjust(wspace=0.2, hspace=0.3)
plt.show()
fig.savefig("results/traces-emuchain-best-nn-6-64.png", bbox_inches='tight', dpi=200)
from scipy import stats
metropolis_chain_r = np.array([sample[0] for sample in metropolis_chain])
metropolis_chain_k = np.array([sample[1] for sample in metropolis_chain])
chain_emu_r = np.array([sample[0] for sample in chain_emu])
chain_emu_k = np.array([sample[1] for sample in chain_emu])
emulated_chain_r = np.array([sample[0] for sample in emulated_chain])
emulated_chain_k = np.array([sample[1] for sample in emulated_chain])
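# Wasserstein (earth mover's) distance between matching 1-D marginal posteriors:
# smaller values mean the emulator-based chains are closer to the reference MCMC chain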
w_distance1_r = stats.wasserstein_distance(metropolis_chain_r, chain_emu_r)
w_distance1_k = stats.wasserstein_distance(metropolis_chain_k, chain_emu_k)
w_distance2_r = stats.wasserstein_distance(metropolis_chain_r, emulated_chain_r)
w_distance2_k = stats.wasserstein_distance(metropolis_chain_k, emulated_chain_k)
print("NN vs MCMC:", w_distance1_r, w_distance1_k)
print("2-step MCMC vs MCMC:", w_distance2_r, w_distance2_k)
ess = pints.effective_sample_size(metropolis_chain)
ess
ess1 = pints.effective_sample_size(chain_emu)
ess1
ess2 = pints.effective_sample_size(emulated_chain)
ess2
```
|
github_jupyter
|
import pints
import pints.toy as toy
class RescaledModel(pints.ForwardModel):
def __init__(self):
self.base_model = toy.LogisticModel()
def simulate(self, parameters, times):
# Run a simulation with the given parameters for the
# given times and return the simulated values
r, k = parameters
r = r / 50
k = k * 500
return self.base_model.simulate([r, k], times)
def simulateS1(self, parameters, times):
# Run a simulation with the given parameters for the
# given times and return the simulated values
r, k = parameters
r = r / 50
k = k * 500
return self.base_model.simulateS1([r, k], times)
def n_parameters(self):
# Return the dimension of the parameter vector
return 2
model = toy.LogisticModel()
true_parameters = [0.015, 500]
start_parameters = [0.75, 1.0] # rescaled true parameters
import numpy as np
times = np.linspace(0, 1000, 400)
org_values = model.simulate(true_parameters, times)
range_values = max(org_values) - min(org_values)
noise = 0.05 * range_values
print("Gaussian noise:", noise)
values = org_values + np.random.normal(0, noise, org_values.shape)
import matplotlib.pyplot as plt
import seaborn as sns
import itertools
sns.set(context='notebook', style='whitegrid', palette="deep", font='Times New Roman',
font_scale=1.5, color_codes=True, rc={"grid.linewidth": 1})
palette = itertools.cycle(sns.color_palette())
c=next(palette)
fig = plt.figure(figsize=(12,4.5))
plt.xlabel('Time')
plt.ylabel('Values')
plt.plot(times, org_values, lw=2, c=c, label='Original data')
plt.plot(times, values, '--', c=c, label='Noisy data')
plt.legend()
plt.show()
fig.savefig("results/logistic.png", bbox_inches='tight', dpi=200)
model = RescaledModel()
problem = pints.SingleOutputProblem(model, times, values)
log_likelihood = pints.GaussianKnownSigmaLogLikelihood(problem, noise)
# Create (rescaled) bounds for our parameters and get prior
#bounds = pints.RectangularBoundaries([0.5, 0.8], [1.0, 1.2])
#bounds = pints.RectangularBoundaries([0.7125, 0.95], [0.7875, 1.05])
#bounds = pints.RectangularBoundaries([0.675, 0.90], [0.825, 1.1])
#bounds = pints.RectangularBoundaries([0.525, 0.7], [0.975, 1.3])
bounds = pints.RectangularBoundaries([0.6, 0.8], [0.9, 1.2])
log_prior = pints.UniformLogPrior(bounds)
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
sns.set(context='notebook', style='whitegrid', palette="deep", font='Times New Roman',
font_scale=1.5, color_codes=True, rc={"grid.linewidth": 1})
input_parameters = log_prior.sample(2000)
x = [p[0] for p in input_parameters]
y = [p[1] for p in input_parameters]
likelihoods = np.apply_along_axis(log_likelihood, 1, input_parameters)
likelihoods[:5]
print(min(x), max(x))
print(min(y), max(y))
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x, y, list(likelihoods))
plt.show()
#fig.savefig("figures/training-data-best-nn-6-64.png", bbox_inches='tight', dpi=600)
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.model_selection import train_test_split
X_train, X_valid, y_train, y_valid = train_test_split(input_parameters, likelihoods, test_size=0.3, random_state=0)
emu = pints.MultiLayerNN(problem, X_train, y_train, input_scaler=MinMaxScaler(), output_scaler=StandardScaler())
emu.set_parameters(layers=6, neurons=64, hidden_activation='relu', activation='linear', learning_rate=0.0001)
hist = emu.fit(epochs=500, batch_size=32, X_val=X_valid, y_val=y_valid, verbose=0)
emu.summary()
emu([0.75, 1])
log_likelihood([0.75, 1])
# summarize history for loss
#print(hist.history.keys())
sns.set(context='notebook', style='whitegrid', palette="deep", font='Times New Roman',
font_scale=1.5, color_codes=True, rc={"grid.linewidth": 1})
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(20,10))
ax1.title.set_text('Learning curves based on MSE')
ax2.title.set_text('Learning curves based on MAE')
ax1.plot(hist.history['loss'])
ax1.plot(hist.history['val_loss'])
ax1.set_ylabel('MSE')
ax1.set_xlabel('Epoch')
ax1.legend(['training', 'validation'], loc='upper left')
ax2.plot(hist.history['mean_absolute_error'])
ax2.plot(hist.history['val_mean_absolute_error'])
ax2.set_ylabel('MAE')
ax2.set_xlabel('Epoch')
ax2.legend(['training', 'validation'], loc='upper left')
ax3.plot(hist.history['rescaled_mse'])
ax3.plot(hist.history['val_rescaled_mse'])
ax3.set_ylabel('Rescaled MSE')
ax3.set_xlabel('Epoch')
ax3.legend(['training', 'validation'], loc='upper left')
ax4.plot(hist.history['rescaled_mae'])
ax4.plot(hist.history['val_rescaled_mae'])
ax4.set_ylabel('Rescaled MAE')
ax4.set_xlabel('Epoch')
ax4.legend(['training', 'validation'], loc='upper left')
plt.show()
fig.savefig("results/training-best-nn-6-64.png", bbox_inches='tight', dpi=200)
len(hist.history['loss'])
import matplotlib.pyplot as plt
import matplotlib as mpl
from mpl_toolkits.mplot3d import Axes3D
sns.set(context='notebook', style='whitegrid', palette="deep", font='Times New Roman',
font_scale=1.5, color_codes=True, rc={"grid.linewidth": 1})
test_splits = 50 # number of splits along each axis
r_grid, k_grid, test_data = pints.generate_grid(bounds.lower(), bounds.upper(), test_splits)
model_prediction = pints.predict_grid(log_likelihood, test_data)
emu_prediction = pints.predict_grid(emu, test_data)
angle=(25, 300)
alpha=0.7
fontsize=16
labelpad=15
title = "Comparison of log-likelihood surfaces"
x_label = "Growth rate (r)"
y_label = "Carrying capacity (k)"
z_label = "Log-likelihood"
fig = plt.figure(figsize=(15,10))
ax = plt.axes(projection='3d')
ax.plot_surface(r_grid, k_grid, model_prediction, cmap='Blues', edgecolor='none', alpha=alpha)
ax.plot_surface(r_grid, k_grid, emu_prediction, cmap='Oranges', edgecolor='none', alpha=alpha)
ax.view_init(*angle)
plt.title(title, fontsize=fontsize*1.25)
ax.set_xlabel(x_label, fontsize=fontsize, labelpad=labelpad)
ax.set_ylabel(y_label, fontsize=fontsize, labelpad=labelpad)
ax.set_zlabel(z_label, fontsize=fontsize, labelpad=labelpad)
fake2Dline1 = mpl.lines.Line2D([0],[0], linestyle="none", c='blue', marker = 'o', alpha=0.5)
fake2Dline2 = mpl.lines.Line2D([0],[0], linestyle="none", c='orange', marker = 'o', alpha=0.8)
ax.legend([fake2Dline1, fake2Dline2], ["True log-likelihood", "NN emulator log-likelihood"])
plt.show()
fig.savefig("results/likelihood-surfaces-best-nn-6-64.png", bbox_inches='tight', dpi=200)
mape = np.mean(np.abs((model_prediction - emu_prediction) / model_prediction))
mape
delta_k = np.array([0.0, 0.005])
delta_r = np.array([0.005, 0.0])
gradients_k = (pints.predict_grid(log_likelihood, (test_data + delta_k)) - pints.predict_grid(log_likelihood, (test_data - delta_k))) / (sum(2*delta_k))
gradients_r = (pints.predict_grid(log_likelihood, (test_data + delta_r)) - pints.predict_grid(log_likelihood, (test_data - delta_r))) / (sum(2*delta_r))
emu_gradients_k = (pints.predict_grid(emu, (test_data + delta_k)) - pints.predict_grid(emu, (test_data - delta_k))) / (sum(2*delta_k))
emu_gradients_r = (pints.predict_grid(emu, (test_data + delta_r)) - pints.predict_grid(emu, (test_data - delta_r))) / (sum(2*delta_r))
mape_k = np.mean(np.abs((gradients_k - emu_gradients_k) / gradients_k))
mape_r = np.mean(np.abs((gradients_r - emu_gradients_r) / gradients_r))
print((mape_k+mape_r)/2)
import time
n_chains = 3
n_iter = 30000 # Add stopping criterion
warm_up = 10000
sigma0 = np.abs(start_parameters) * 5e-05 # Choose a covariance matrix for the proposal step
x0 = [
np.array(start_parameters) * 0.9,#0.95,#0.9,
np.array(start_parameters) * 1.05,#0.97,#1.05,
np.array(start_parameters) * 1.15,#1.05,#1.15,
]
scaling_factors = [1/50, 500]
param_names=["r", "k"]
log_posterior_emu = pints.LogPosterior(emu, log_prior)
# Create mcmc routine
mcmc = pints.MCMCController(log_posterior, n_chains, x0)
# Add stopping criterion
mcmc.set_max_iterations(n_iter)
# Disable logging mode
#mcmc.set_log_to_screen(False)
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Show traces and histograms
pints.plot.trace(chains)
# Discard warm up
chains_thinned = chains[:, warm_up:, :]
# Check convergence using rhat criterion
print('R-hat:')
print(pints.rhat_all_params(chains_thinned))
# Look at distribution in chain 0
pints.plot.pairwise(chains_thinned[0])
# Show graphs
plt.show()
# Create mcmc routine
mcmc = pints.MCMCController(log_posterior, n_chains, x0, sigma0=sigma0, method=pints.MetropolisRandomWalkMCMC)
# Add stopping criterion
mcmc.set_max_iterations(n_iter)
# Disable logging mode
#mcmc.set_log_to_screen(False)
# Run!
print('Running...')
metropolis_chains = mcmc.run()
print('Done!')
# Show traces and histograms
pints.plot.trace(metropolis_chains)
# Discard warm up
metropolis_chains_thinned = metropolis_chains[:, warm_up:, :]
# Check convergence using rhat criterion
print('R-hat:')
print(pints.rhat_all_params(metropolis_chains_thinned))
# Look at distribution in chain 0
pints.plot.pairwise(metropolis_chains_thinned[0])
# Show graphs
plt.show()
# Revert scaling
metropolis_chains_rescaled = np.copy(metropolis_chains)
metropolis_chain_rescaled = metropolis_chains_rescaled[0]
metropolis_chain_rescaled = metropolis_chain_rescaled[warm_up:]
metropolis_chains = np.array([[[s*f for s,f in zip(samples, scaling_factors)] for samples in chain]
for chain in metropolis_chains])
metropolis_chain = metropolis_chains[0]
metropolis_chain = metropolis_chain[warm_up:]
# Create mcmc routine
mcmc = pints.MCMCController(log_posterior_emu, n_chains, x0, sigma0=sigma0, method=pints.MetropolisRandomWalkMCMC)
# Add stopping criterion
mcmc.set_max_iterations(n_iter)
# Disable logging mode
#mcmc.set_log_to_screen(False)
# Run!
print('Running...')
chains_emu = mcmc.run()
print('Done!')
# Show traces and histograms
pints.plot.trace(chains_emu)
# Discard warm up
chains_emu_thinned = chains_emu[:, warm_up:, :]
# Check convergence using rhat criterion
print('R-hat:')
print(pints.rhat_all_params(chains_emu_thinned))
# Look at distribution in chain 0
pints.plot.pairwise(chains_emu_thinned[0])
# Show graphs
plt.show()
# Revert scaling
chains_emu_rescaled = np.copy(chains_emu)
chain_emu_rescaled = chains_emu_rescaled[0]
chain_emu_rescaled = chain_emu_rescaled[warm_up:]
chains_emu = np.array([[[s*f for s,f in zip(samples, scaling_factors)] for samples in chain] for chain in chains_emu])
chain_emu = chains_emu[0]
chain_emu = chain_emu[warm_up:]
# Create mcmc routine
mcmc = pints.MCMCController(log_posterior_emu, n_chains, x0, sigma0=sigma0, method=pints.EmulatedMetropolisMCMC, f=log_posterior)
# Add stopping criterion
mcmc.set_max_iterations(n_iter)
# Disable logging mode
#mcmc.set_log_to_screen(False)
# Run!
print('Running...')
emulated_chains = mcmc.run()
print('Done!')
# Show traces and histograms
pints.plot.trace(emulated_chains)
# Discard warm up
emulated_chains_thinned = emulated_chains[:, warm_up:, :]
# Check convergence using rhat criterion
print('R-hat:')
print(pints.rhat_all_params(emulated_chains_thinned))
# Look at distribution in chain 0
pints.plot.pairwise(emulated_chains_thinned[0])
# Show graphs
plt.show()
acceptance_rates = mcmc.acceptance_rates()
acceptance_rates
# Revert scaling
emulated_chains_rescaled = np.copy(emulated_chains)
emulated_chain_rescaled = emulated_chains_rescaled[0]
emulated_chain_rescaled = emulated_chain_rescaled[warm_up:]
emulated_chains = np.array([[[s*f for s,f in zip(samples, scaling_factors)] for samples in chain]
for chain in emulated_chains])
emulated_chain = emulated_chains[0]
emulated_chain = emulated_chain[warm_up:]
emu_prediction = np.apply_along_axis(emu, 1, metropolis_chain_rescaled).flatten()
model_prediction = np.apply_along_axis(log_likelihood, 1, metropolis_chain_rescaled).flatten()
diffs = (np.abs((model_prediction - emu_prediction) / model_prediction))
iters = np.linspace(0, n_iter-warm_up, len(metropolis_chain_rescaled))
plt.figure(figsize=(10, 5))
plt.title("Emulator and model absolute differences along a chain of MCMC")
plt.xlabel("Number of iterations")
plt.ylabel("Likelihood")
plt.plot(iters, diffs, color = "Black")
plt.show()
diffs.mean()
emu_prediction = np.apply_along_axis(emu, 1, chain_emu_rescaled).flatten()
model_prediction = np.apply_along_axis(log_likelihood, 1, metropolis_chain_rescaled).flatten()
diffs = (np.abs((model_prediction - emu_prediction) / model_prediction))
iters = np.linspace(0, n_iter-warm_up, len(chain_emu_rescaled))
fig = plt.figure(figsize=(10, 5))
plt.title("Emulator and model errors along a chain of MCMC")
plt.xlabel("Iteration")
plt.ylabel("Absolute percentage error")
plt.plot(iters, diffs)#, color = "Black")
plt.show()
fig.savefig("results/mcmc-diffs-best-nn-6-64.png", bbox_inches='tight', dpi=200)
diffs[-1]
sns.set(context='notebook', style='ticks', palette="deep", font='Times New Roman',
font_scale=1.5, color_codes=True, rc={"grid.linewidth": 1})
# Create grid of parameters
x = [p[0] for p in metropolis_chain_rescaled]
y = [p[1] for p in metropolis_chain_rescaled]
xmin, xmax = np.min(x), np.max(x)
ymin, ymax = np.min(y), np.max(y)
xx, yy = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]
params = [list(n) for n in zip(xx, yy)]
ll = np.apply_along_axis(log_likelihood, 1, params)
ll_emu = np.apply_along_axis(emu, 1, params)
ll_emu = [list(e[0][0]) for e in ll_emu]
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,7))
ax1.title.set_text('Log-Likelihood')
ax2.title.set_text('Neural Network')
ax1.set_xlabel('Rescaled growth rate (r)')
ax1.set_ylabel('Rescaled carrying capacity (k)')
ax2.set_xlabel('Rescaled growth rate (r)')
ax2.set_ylabel('Rescaled carrying capacity (k)')
ax1.contourf(xx, yy, ll, cmap='Blues', alpha=0.8, extent=[xmin, xmax, ymin, ymax])
ax1.contour(xx, yy, ll, colors='k')
ax2.contourf(xx, yy, ll_emu, cmap='Oranges', alpha=0.8, extent=[xmin, xmax, ymin, ymax])
ax2.contour(xx, yy, ll_emu, colors='k')
plt.show()
fig.savefig("results/close-contours-best-nn-6-64.png", bbox_inches='tight', dpi=200)
import seaborn as sns
sns.set(context='notebook', style='ticks', palette="deep", font='Times New Roman',
font_scale=1.5, color_codes=True, rc={"grid.linewidth": 1})
# Create grid of parameters
xmin, xmax = 0.5, 1.0
ymin, ymax = 0.8, 1.2
xx, yy = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]
params = [list(n) for n in zip(xx, yy)]
ll = np.apply_along_axis(log_likelihood, 1, params)
ll_emu = np.apply_along_axis(emu, 1, params)
ll_emu = [list(e[0][0]) for e in ll_emu]
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,7))
ax1.title.set_text('Log-Likelihood')
ax2.title.set_text('Neural Network')
ax1.set_xlabel('Rescaled growth rate (r)')
ax1.set_ylabel('Rescaled carrying capacity (k)')
ax2.set_xlabel('Rescaled growth rate (r)')
ax2.set_ylabel('Rescaled carrying capacity (k)')
ax1.contourf(xx, yy, ll, cmap='Blues', alpha=0.8, extent=[xmin, xmax, ymin, ymax])
ax1.contour(xx, yy, ll, colors='k')
ax2.contourf(xx, yy, ll_emu, cmap='Oranges', alpha=0.8, extent=[xmin, xmax, ymin, ymax])
ax2.contour(xx, yy, ll_emu, colors='k')
plt.show()
fig.savefig("results/contours.png", bbox_inches='tight', dpi=200)
# Create grid of parameters
xmin, xmax = 0.675, 0.825
ymin, ymax = 0.9, 1.1
xx, yy = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]
params = [list(n) for n in zip(xx, yy)]
ll = np.apply_along_axis(log_likelihood, 1, params)
ll_emu = np.apply_along_axis(emu, 1, params)
ll_emu = [list(e[0][0]) for e in ll_emu]
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,7))
ax1.title.set_text('Log-Likelihood')
ax2.title.set_text('Neural Network')
ax1.set_xlabel('Rescaled growth rate (r)')
ax1.set_ylabel('Rescaled carrying capacity (k)')
ax2.set_xlabel('Rescaled growth rate (r)')
ax2.set_ylabel('Rescaled carrying capacity (k)')
ax1.contourf(xx, yy, ll, cmap='Blues', alpha=0.8, extent=[xmin, xmax, ymin, ymax])
ax1.contour(xx, yy, ll, colors='k')
ax2.contourf(xx, yy, ll_emu, cmap='Oranges', alpha=0.8, extent=[xmin, xmax, ymin, ymax])
ax2.contour(xx, yy, ll_emu, colors='k')
plt.show()
fig.savefig("results/contours-closer.png", bbox_inches='tight', dpi=200)
# Create grid of parameters
xmin, xmax = 0.7125, 0.7875
ymin, ymax = 0.95, 1.05
xx, yy = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]
params = [list(n) for n in zip(xx, yy)]
ll = np.apply_along_axis(log_likelihood, 1, params)
ll_emu = np.apply_along_axis(emu, 1, params)
ll_emu = [list(e[0][0]) for e in ll_emu]
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,7))
ax1.title.set_text('Log-Likelihood')
ax2.title.set_text('Neural Network')
ax1.set_xlabel('Rescaled growth rate (r)')
ax1.set_ylabel('Rescaled carrying capacity (k)')
ax2.set_xlabel('Rescaled growth rate (r)')
ax2.set_ylabel('Rescaled carrying capacity (k)')
ax1.contourf(xx, yy, ll, cmap='Blues', alpha=0.8, extent=[xmin, xmax, ymin, ymax])
ax1.contour(xx, yy, ll, colors='k')
ax2.contourf(xx, yy, ll_emu, cmap='Oranges', alpha=0.8, extent=[xmin, xmax, ymin, ymax])
ax2.contour(xx, yy, ll_emu, colors='k')
plt.show()
fig.savefig("results/contours-closest.png", bbox_inches='tight', dpi=200)
sns.set(context='notebook', style='whitegrid', palette="deep", font='Times New Roman',
font_scale=1.5, color_codes=True, rc={"grid.linewidth": 1})
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,7))
ax1.title.set_text('Log-likelihood contours with MCMC samples')
ax2.title.set_text('Neural network contours with MCMC samples')
ax1.set_xlabel('Rescaled growth rate (r)')
ax1.set_ylabel('Rescaled carrying capacity (k)')
ax2.set_xlabel('Rescaled growth rate (r)')
ax2.set_ylabel('Rescaled carrying capacity (k)')
# Create grid of parameters
x = [p[0] for p in chain_emu_rescaled]
y = [p[1] for p in chain_emu_rescaled]
xmin, xmax = np.min(x), np.max(x)
ymin, ymax = np.min(y), np.max(y)
xx, yy = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]
params = [list(n) for n in zip(xx, yy)]
ll = np.apply_along_axis(log_likelihood, 1, params)
ll_emu = np.apply_along_axis(emu, 1, params)
ll_emu = [list(e[0][0]) for e in ll_emu]
# Sort according to differences in log-likelihood
idx = diffs.argsort()
x_sorted = np.array(x)[idx]
y_sorted = np.array(y)[idx]
diffs_sorted = diffs[idx]
# Add contour lines of log-likelihood
ax1.contourf(xx, yy, ll, cmap='Greys', extent=[xmin, xmax, ymin, ymax])
#ax1.contour(xx, yy, ll, colors='w')
# Plot chain_emu
ax1.set_xlim([xmin, xmax])
ax1.set_ylim([ymin, ymax])
im1 = ax1.scatter(x_sorted, y_sorted, c=diffs_sorted, s=70, edgecolor='k', cmap="RdYlGn_r")
# Add contour lines of emulated likelihood
ax2.contourf(xx, yy, ll_emu, cmap='Greys', extent=[xmin, xmax, ymin, ymax])
#ax2.contour(xx, yy, ll_emu, colors='w')
# Plot chain_emu
ax2.set_xlim([xmin, xmax])
ax2.set_ylim([ymin, ymax])
im2 = ax2.scatter(x_sorted, y_sorted, c=diffs_sorted, s=70, edgecolor='k', cmap="RdYlGn_r")
fig.colorbar(im1, ax=ax1)
fig.colorbar(im2, ax=ax2)
plt.show()
fig.savefig("results/errors-on-contours-best-nn-6-64.png", bbox_inches='tight', dpi=200)
sns.set(context='notebook', style='whitegrid', palette="deep", font='Times New Roman',
font_scale=1.5, color_codes=True, rc={"grid.linewidth": 1})
fig, axes = pints.plot.histogram([metropolis_chain, chain_emu, emulated_chain],
ref_parameters=true_parameters,
sample_names=["MCMC", "Emulator", "2-Step MCMC"],
parameter_names=["Growth rate (r)", "Maximum capacity (k)"])
fig.set_size_inches(14, 10)
plt.subplots_adjust(wspace=0, hspace=0.4)
plt.show()
fig.savefig("results/log-posterior-samples-best-nn-6-64.png", bbox_inches='tight', dpi=200)
sns.set(context='paper', style='whitegrid', palette="deep", font='Times New Roman',
font_scale=1.5, color_codes=True, rc={"grid.linewidth": 1})
fig, axes = pints.plot.trace(metropolis_chains, ref_parameters=true_parameters,
parameter_names=["Growth rate (r)", "Maximum capacity (k)"])
fig.set_size_inches(12, 8)
plt.subplots_adjust(wspace=0.2, hspace=0.3)
plt.show()
fig.savefig("results/traces-chainmcmc-best-nn-6-64.png", bbox_inches='tight', dpi=200)
sns.set(context='paper', style='whitegrid', palette="deep", font='Times New Roman',
font_scale=1.5, color_codes=True, rc={"grid.linewidth": 1})
fig, axes = pints.plot.trace(chains_emu, ref_parameters=true_parameters,
parameter_names=["Growth rate (r)", "Maximum capacity (k)"])
fig.set_size_inches(12, 8)
plt.subplots_adjust(wspace=0.2, hspace=0.3)
plt.show()
fig.savefig("results/traces-chainemu-best-nn-6-64.png", bbox_inches='tight', dpi=200)
sns.set(context='paper', style='whitegrid', palette="deep", font='Times New Roman',
font_scale=1.5, color_codes=True, rc={"grid.linewidth": 1})
fig, axes = pints.plot.trace(emulated_chains, ref_parameters=true_parameters,
parameter_names=["Growth rate (r)", "Maximum capacity (k)"])
fig.set_size_inches(12, 8)
plt.subplots_adjust(wspace=0.2, hspace=0.3)
plt.show()
fig.savefig("results/traces-emuchain-best-nn-6-64.png", bbox_inches='tight', dpi=200)
from scipy import stats
metropolis_chain_r = np.array([sample[0] for sample in metropolis_chain])
metropolis_chain_k = np.array([sample[1] for sample in metropolis_chain])
chain_emu_r = np.array([sample[0] for sample in chain_emu])
chain_emu_k = np.array([sample[1] for sample in chain_emu])
emulated_chain_r = np.array([sample[0] for sample in emulated_chain])
emulated_chain_k = np.array([sample[1] for sample in emulated_chain])
w_distance1_r = stats.wasserstein_distance(metropolis_chain_r, chain_emu_r)
w_distance1_k = stats.wasserstein_distance(metropolis_chain_k, chain_emu_k)
w_distance2_r = stats.wasserstein_distance(metropolis_chain_r, emulated_chain_r)
w_distance2_k = stats.wasserstein_distance(metropolis_chain_k, emulated_chain_k)
print("NN vs MCMC:", w_distance1_r, w_distance1_k)
print("2-step MCMC vs MCMC:", w_distance2_r, w_distance2_k)
ess = pints.effective_sample_size(metropolis_chain)
ess
ess1 = pints.effective_sample_size(chain_emu)
ess1
ess2 = pints.effective_sample_size(emulated_chain)
ess2
| 0.81231 | 0.987042 |
# Matplotlib
Welcome to the exercises for reviewing matplotlib! Take your time with these; matplotlib can be tricky to understand at first. These are relatively simple plots, but they can be hard if this is your first time with matplotlib, so feel free to reference the solutions as you go along.
Also, don't worry if you find the matplotlib syntax frustrating. We actually won't be using it that often throughout the course, as we will switch to using seaborn and pandas' built-in visualization capabilities. But those are built off of matplotlib, which is why it is still important to get exposure to it!
** * NOTE: ALL THE COMMANDS FOR PLOTTING A FIGURE SHOULD GO IN THE SAME CELL. SEPARATING THEM OUT INTO MULTIPLE CELLS MAY CAUSE NOTHING TO SHOW UP. * **
# Exercises
Follow the instructions to recreate the plots using this data:
## Data
```
import numpy as np
x = np.arange(0,100)
y = x*2
z = x**2
```
** Import matplotlib.pyplot as plt and set %matplotlib inline if you are using the jupyter notebook. What command do you use if you aren't using the jupyter notebook?**
```
import matplotlib.pyplot as plt
%matplotlib inline
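# Outside of a Jupyter notebook, you would call plt.show() to display the figures instead.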
```
## Exercise 1
** Follow along with these steps: **
* ** Create a figure object called fig using plt.figure() **
* ** Use add_axes to add an axis to the figure canvas at [0,0,1,1]. Call this new axis ax. **
* ** Plot (x,y) on that axes and set the labels and titles to match the plot below:**
```
# Create Figure (empty canvas)
fig = plt.figure()
# Add set of axes to figure
ax = fig.add_axes([0, 0, 1, 1])
# Plot on that ax
ax.plot(x, y, 'b')
ax.set_xlabel('x') # Notice the use of set_ to begin methods
ax.set_ylabel('y')
ax.set_title('title')
```
## Exercise 2
** Create a figure object and put two axes on it, ax1 and ax2. Located at [0,0,1,1] and [0.2,0.5,.2,.2] respectively.**
```
# Creates blank canvas
fig = plt.figure()
axes1 = fig.add_axes([0, 0, 1, 1]) # main axes
axes2 = fig.add_axes([0.2, 0.5, .2, .2]) # inset axes
```
** Now plot (x,y) on both axes. And call your figure object to show it.**
```
# Larger Figure Axes 1
axes1.plot(x, y, 'r')
axes1.set_xlabel('x')
axes1.set_ylabel('y')
# Insert Figure Axes 2
axes2.plot(x, y, 'b')
axes2.set_xlabel('x')
axes2.set_ylabel('y')
fig
```
## Exercise 3
** Create the plot below by adding two axes to a figure object at [0,0,1,1] and [0.2,0.5,.4,.4]**
```
# Creates blank canvas
fig = plt.figure()
axes1 = fig.add_axes([0, 0, 1, 1]) # main axes
axes2 = fig.add_axes([0.2, 0.5, .4, .4]) # inset axes
```
** Now use x,y, and z arrays to recreate the plot below. Notice the xlimits and y limits on the inserted plot:**
```
# Larger Figure Axes 1
axes1.plot(x, z, 'b')
axes1.set_xlabel('x')
axes1.set_ylabel('z')
# Insert Figure Axes 2
axes2.plot(x, y, 'b')
axes2.set_ylim(30, 50)
axes2.set_xlim(20, 22)
axes2.set_xlabel('x')
axes2.set_ylabel('y')
axes2.set_title('zoom')
fig
```
## Exercise 4
** Use plt.subplots(nrows=1, ncols=2) to create the plot below.**
```
fig, axes = plt.subplots(nrows=1, ncols=2)
```
** Now plot (x,y) and (x,z) on the axes. Play around with the linewidth and style**
```
axes[0].plot(x, y, color="blue", lw=3, ls='--')
axes[1].plot(x, z, color="red", lw=4)
fig
```
** See if you can resize the plot by adding the figsize() argument in plt.subplots() and copying and pasting your previous code.**
```
fig, axes = plt.subplots(nrows=1, ncols=2,figsize=(12,2))
axes[0].plot(x, y, color="blue", lw=3, ls='--')
axes[0].set_xlabel('x')
axes[0].set_ylabel('y')
axes[1].plot(x, z, color="red", lw=4)
axes[1].set_xlabel('x')
axes[1].set_ylabel('z')
```
# Great Job!
|
github_jupyter
|
import numpy as np
x = np.arange(0,100)
y = x*2
z = x**2
import matplotlib.pyplot as plt
%matplotlib inline
# Create Figure (empty canvas)
fig = plt.figure()
# Add set of axes to figure
ax = fig.add_axes([0, 0, 1, 1])
# Plot on that ax
ax.plot(x, y, 'b')
ax.set_xlabel('x') # Notice the use of set_ to begin methods
ax.set_ylabel('y')
ax.set_title('title')
# Creates blank canvas
fig = plt.figure()
axes1 = fig.add_axes([0, 0, 1, 1]) # main axes
axes2 = fig.add_axes([0.2, 0.5, .2, .2]) # inset axes
# Larger Figure Axes 1
axes1.plot(x, y, 'r')
axes1.set_xlabel('x')
axes1.set_ylabel('y')
# Insert Figure Axes 2
axes2.plot(x, y, 'b')
axes2.set_xlabel('x')
axes2.set_ylabel('y')
fig
# Creates blank canvas
fig = plt.figure()
axes1 = fig.add_axes([0, 0, 1, 1]) # main axes
axes2 = fig.add_axes([0.2, 0.5, .4, .4]) # inset axes
# Larger Figure Axes 1
axes1.plot(x, z, 'b')
axes1.set_xlabel('x')
axes1.set_ylabel('z')
# Insert Figure Axes 2
axes2.plot(x, y, 'b')
axes2.set_ylim(30, 50)
axes2.set_xlim(20, 22)
axes2.set_xlabel('x')
axes2.set_ylabel('y')
axes2.set_title('zoom')
fig
fig, axes = plt.subplots(nrows=1, ncols=2)
axes[0].plot(x, y, color="blue", lw=3, ls='--')
axes[1].plot(x, z, color="red", lw=4)
fig
fig, axes = plt.subplots(nrows=1, ncols=2,figsize=(12,2))
axes[0].plot(x, y, color="blue", lw=3, ls='--')
axes[0].set_xlabel('x')
axes[0].set_ylabel('y')
axes[1].plot(x, z, color="red", lw=4)
axes[1].set_xlabel('x')
axes[1].set_ylabel('z')
| 0.738952 | 0.988842 |
# Setup
First, import the necessary libraries, load the agreement data from the data file, and take a look at the basic information.
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
data = pd.read_csv("pax_all_agreements_data_v3.csv")
data.info()
data.head()
```
# Cleaning & Checking
As the data is provided in a standardized and systematic form, I'm going to skip the data cleaning section and just do some simple checks here.
First, check whether any duplicate data exist:
```
assert any(data.duplicated()) == False
for n in data.Contp:
assert n in ['Government','Territory','Government/territory','Inter-group','Other']
assert any(data['Contp'].isnull()) == False
import matplotlib.pyplot as plt
# see how many are related to natural resources and how many are not
print(data['NatRes'].value_counts())
data1 = data[data.NatRes == 1]
data2 = data[data.NatRes == 0]
# see how many kind of conflict type in total and their distribution
print(data['Contp'].value_counts())
sorted_count1 = data1['Contp'].value_counts(normalize=True)
print(sorted_count1,'natural resources related')
sorted_count2 = data2['Contp'].value_counts(normalize=True)
print(sorted_count2,'not related')
# plt.subplot(2,1,1)
plt.pie(sorted_count1, labels = sorted_count1.index, startangle = 90,
counterclock = False,autopct='%1.1f%%');
# plt.subplot(2,1,2)
# plt.pie(sorted_count2, labels = sorted_count2.index, startangle = 90,
# counterclock = False,autopct='%1.1f%%');
```
## Explanation
As it turns out, 203 peace agreements are related to natural resources, while 1,629 are not.
Among those related to natural resources, Government/territory is the largest conflict type, accounting for 44.8% of all agreements, followed by the Government type with 36%. The Territory type of conflict occurs far less often, accounting for 6.9%.
The situation is quite similar for agreements not related to natural resources, except for a very small percentage of the Other type.
Now filter the data and keep only the agreements that are related to natural resources.
```
data = data[data.NatRes == 1]
import seaborn as sns
%matplotlib inline
# create a column to indicate the year when agreement was signed
data['Dat'] = pd.to_datetime(data['Dat'])
data['year'] = data['Dat'].dt.year
sns.displot(data, x="year" )
ax = sns.displot(data, x="year" ,kind="kde" ,hue = "Contp")
plt.xlim(1990,2019)
plt.xticks(np.arange(1990, 2019,5) )
plt.xticks( color="red", rotation=45)
data['Reg'].value_counts()
data1['Reg'].value_counts().head()
sns.displot(data, x="Reg" )
plt.xticks(rotation=-35)
sns.displot(data, x="Reg", hue="Contp",multiple="stack")
plt.xticks(rotation=-35)
sns.displot(data, x="year" ,kind="kde" ,hue = "Reg")
```
See how the conflict type changes with time, and see the relation between region and time.
```
sns.displot(data, x="Contp", y="Reg", cbar=True,cmap="RdBu_r")
plt.xticks(rotation=-35)
```
Load the conflict_data and have a look.
```
cdata = pd.read_csv("conflict_data.csv")
cdata.info()
cdata.head()
# see the basic information about event_type and sub_event_type
print(cdata.groupby('event_type')['sub_event_type'].value_counts(normalize=True))
sns.displot(cdata, x="event_type", y="sub_event_type", cbar=True,cmap="RdBu_r")
plt.xticks(rotation=-35)
# print(cdata['fatalities'].value_counts())
sns.scatterplot(x="event_type", y="fatalities", data=cdata);
plt.xticks(rotation=-35)
sns.catplot(x="event_type", y="fatalities",kind="box", meanline = True,showmeans= True, data=cdata)
plt.xticks(rotation=-35)
sns.displot(cdata, x="event_type", y="source_scale", cbar=True,cmap="RdBu_r")
plt.xticks(rotation=-35)
```
# Reflect and Hypothesise
Reflection:
Conflict type has a strong relation with the number of deaths. The average death count is highest in battles. There are quite a few outliers, especially in "battles" and "violence against civilians", and the size of each outlier can vary hugely. Violence against civilians is more likely to cause more deaths.
The type of conflict has a lot to do with the region.
Hypotheses:
The number of deaths is very unpredictable and uncontrollable because of various factors, for example, extreme weather.
Protests are likely to cause limited deaths or none at all, because the nature of this action is less violent.
A region's relation with conflict may be tied to geography and historical reasons, e.g. the long-running border conflict between Pakistan and India.
In group work, we could look deeper into this.
Testing:
In order to test these hypotheses, more research should be done by reading the relevant literature and also through communication with the data holder. A quick, rough check of the protest hypothesis is sketched below.
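As a first rough check of the protest hypothesis, we could compare fatality counts for protest events against all other event types. This is a minimal sketch using the `cdata` columns already loaded above; matching event types on the substring 'Protest' is an assumption about the category labels and should be verified against `cdata['event_type'].unique()`.
```
# Rough check of the protest hypothesis (sketch only).
# Assumes `cdata` from above; matching event types on the substring 'Protest' is a guess.
protest_mask = cdata['event_type'].str.contains('Protest', case=False, na=False)
print(cdata.loc[protest_mask, 'fatalities'].describe())   # fatalities for protest events
print(cdata.loc[~protest_mask, 'fatalities'].describe())  # fatalities for all other events
```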
|
github_jupyter
|
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
data = pd.read_csv("pax_all_agreements_data_v3.csv")
data.info()
data.head()
assert any(data.duplicated()) == False
for n in data.Contp:
assert n in ['Government','Territory','Government/territory','Inter-group','Other']
assert any(data['Contp'].isnull()) == False
import matplotlib.pyplot as plt
# see how many are related to natural resources and how many are not
print(data['NatRes'].value_counts())
data1 = data[data.NatRes == 1]
data2 = data[data.NatRes == 0]
# see how many kind of conflict type in total and their distribution
print(data['Contp'].value_counts())
sorted_count1 = data1['Contp'].value_counts(normalize=True)
print(sorted_count1,'natural resources related')
sorted_count2 = data2['Contp'].value_counts(normalize=True)
print(sorted_count2,'not related')
# plt.subplot(2,1,1)
plt.pie(sorted_count1, labels = sorted_count1.index, startangle = 90,
counterclock = False,autopct='%1.1f%%');
# plt.subplot(2,1,2)
# plt.pie(sorted_count2, labels = sorted_count2.index, startangle = 90,
# counterclock = False,autopct='%1.1f%%');
data = data[data.NatRes == 1]
import seaborn as sns
%matplotlib inline
# create a column to indicate the year when agreement was signed
data['Dat'] = pd.to_datetime(data['Dat'])
data['year'] = data['Dat'].dt.year
sns.displot(data, x="year" )
ax = sns.displot(data, x="year" ,kind="kde" ,hue = "Contp")
plt.xlim(1990,2019)
plt.xticks(np.arange(1990, 2019,5) )
plt.xticks( color="red", rotation=45)
data['Reg'].value_counts()
data1['Reg'].value_counts().head()
sns.displot(data, x="Reg" )
plt.xticks(rotation=-35)
sns.displot(data, x="Reg", hue="Contp",multiple="stack")
plt.xticks(rotation=-35)
sns.displot(data, x="year" ,kind="kde" ,hue = "Reg")
sns.displot(data, x="Contp", y="Reg", cbar=True,cmap="RdBu_r")
plt.xticks(rotation=-35)
cdata = pd.read_csv("conflict_data.csv")
cdata.info()
cdata.head()
# see the basic information about event_type and sub_event_type
print(cdata.groupby('event_type')['sub_event_type'].value_counts(normalize=True))
sns.displot(cdata, x="event_type", y="sub_event_type", cbar=True,cmap="RdBu_r")
plt.xticks(rotation=-35)
# print(cdata['fatalities'].value_counts())
sns.scatterplot(x="event_type", y="fatalities", data=cdata);
plt.xticks(rotation=-35)
sns.catplot(x="event_type", y="fatalities",kind="box", meanline = True,showmeans= True, data=cdata)
plt.xticks(rotation=-35)
sns.displot(cdata, x="event_type", y="source_scale", cbar=True,cmap="RdBu_r")
plt.xticks(rotation=-35)
| 0.269037 | 0.937498 |
# Vehicle detection and tracking using deep learning
> * 🔬 Data Science
* 🥠 Deep Learning and Object Detection
* 🛤️ Tracking
## Table of Contents
* [Introduction and objective](#Introduction-and-objective)
* [Necessary imports](#Necessary-imports)
* [Prepare data that will be used for training](#Prepare-data-that-will-be-used-for-training)
* [Model training](#Model-training)
* [Visualize training data](#Visualize-training-data)
* [Load model architecture](#Load-model-architecture)
* [Train the model](#Train-the-model)
* [Visualize results on validation set](#Visualize-results-on-validation-set)
* [Save the model](#Save-the-model)
* [Inference and tracking](#Inference-and-tracking)
* [Conclusion](#Conclusion)
## Introduction and objective
Vehicle detection and tracking is a common problem with multiple use cases.
Government authorities and private establishments might want to understand the traffic flowing through a place to better develop its infrastructure for the ease and convenience of everyone. A road-widening project, timing the traffic signals, and construction of parking spaces are a few examples where analysing the traffic is integral to the project.
Traditionally, identification and tracking have been carried out manually: a person stands at a point and notes the count of the vehicles and their types. Recently, sensors have been put into use, but they only solve the counting problem; sensors are not able to detect the type of vehicle.
In this notebook, we'll demonstrate how we can use deep learning to detect vehicles and then track them in a video. We'll use a short [video](https://youtu.be/e8cxjNE-4_I) taken from a live traffic camera feed.
## Necessary imports
```
import pandas as pd
from arcgis.learn import RetinaNet, prepare_data
```
## Prepare data that will be used for training
You can download vehicle training data from [here](https://github.com/Esri/arcgis-python-api/tree/master/samples/04_gis_analysts_data_scientists/data/vehicle_training_data.zip). Extract the downloaded file to get your training data.
## Model training
Let's set a path to the folder that contains training images and their corresponding labels.
```
data_path = "data/vehicle_detection"
```
We'll use the `prepare_data` function to create a fastai databunch with the necessary parameters such as `batch_size`, and `chip_size`. A complete list of parameters can be found in the [API reference](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.learn.html#prepare-data).
The given dataset has 235 images of size 854x480 pixels. We will define a `chip_size` of 480 pixels, which will create random 480x480 crops from the given images. This way we maintain the aspect ratios of the objects but can miss out on objects when training the model for fewer epochs. To avoid cropping, we can set `resize_to`=480 so that every chip is an entire frame and doesn't miss any object, but there is a risk of poor detection of smaller-sized objects.
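For reference, the resize-based alternative described above might look like the following sketch. It is not used in the rest of this notebook (which keeps the random-crop approach shown in the next cell), and the variable name `data_resized` is just for illustration.
```
# Alternative (sketch only, not run here): resize each frame so every chip covers the entire image,
# avoiding crops that miss objects, at the risk of poorer detection of small objects.
data_resized = prepare_data(data_path,
                            batch_size=4,
                            dataset_type="PASCAL_VOC_rectangles",
                            chip_size=480,
                            resize_to=480)
```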
```
data = prepare_data(data_path,
batch_size=4,
dataset_type="PASCAL_VOC_rectangles",
chip_size=480)
```
We see the warning above because there are a few images in our dataset with missing corresponding label files. These images will be ignored while loading the data. If it is a significant number, we might want to fix this issue by adding the label files for those images or removing those images.
We can use the `classes` attribute of the data object to get information about the number of classes.
```
data.classes
```
### Visualize training data
To visualize and get a sense of the training data, we can use the `data.show_batch` method.
```
data.show_batch()
```
In the previous cell, we see a sample of the dataset. We can observe, in the given chips, that the most common vehicles are cars and bicycles. It can also be noticed that the different instances of the vehicles have varying scales.
### Load model architecture
`arcgis.learn` provides us object detection models which are based on pretrained convnets, such as ResNet, that act as the backbones. We will use `RetinaNet` with the default parameters to create our vehicle detection model. For more details on `RetinaNet` check out [How RetinaNet works?]() and the [API reference](https://esri.github.io/arcgis-python-api/apidoc/html/arcgis.learn.html#retinanet).
```
retinanet = RetinaNet(data)
```
We will use the `lr_find()` method to find an optimum learning rate. It is important to set a learning rate at which we can train a model with good accuracy and speed.
```
retinanet.lr_find()
```
### Train the model
We will now train the `RetinaNet` model using the suggested learning rate from the previous step. We can specify how many epochs we want to train for. Let's train the model for 100 epochs. Also, we can set `tensorboard` to True if we want to visualize the training process in tensorboard.
```
retinanet.fit(100, lr=4.365158322401661e-05, tensorboard=True)
```
After the training is complete, we can view the plot with training and validation losses.
```
retinanet.learn.recorder.plot_losses()
```
### Visualize results on validation set
To see sample results we can use the `show_results` method. This method displays the chips from the validation dataset with ground truth (left) and predictions (right). We can also specify the threshold to view predictions at different confidence levels. This visual analysis helps in assessing the qualitative results of the trained model.
```
retinanet.show_results(thresh=0.4)
```
To see the quantitative results of our model we will use the `average_precision_score` method.
```
retinanet.average_precision_score(detect_thresh=0.4)
```
We can see the average precision for each class in the validation dataset. Note that while car and bicycle have a good score, van doesn't, and a few have a score of 0. Remember when we visualized the data using `show_batch` we noted that the cars and bicycles were the most common objects. This suggests the scores could be correlated with the number of examples of these objects we have in our training dataset.
Let's look at the number of instances of each class in the training data, and it should explain this.
```
all_classes = []
for i, bb in enumerate(data.train_ds.y):
all_classes += bb.data[1].tolist()
df = pd.value_counts(all_classes, sort=False)
df.index = [data.classes[i] for i in df.index]
df
```
It is evident that the classes that have a score of 0.0 have an extremely low number of examples in the training dataset.
### Save the model
Let's save the model by giving it a name and calling the `save` method, so that we can `load` it later whenever required. The model is saved by default in a directory called `models` in the `data_path` initialized earlier, but a custom path can be provided.
```
retinanet.save('vehicle_det_ep100_defaults')
```
## Inference and tracking
Multiple-object tracking can be performed using the `predict_video` function of the `arcgis.learn` module. To enable tracking, set the `track` parameter in the `predict_video` function as `track=True`.
The following options/parameters are available in the `predict_video` function for the user to configure (a hedged example of supplying them is sketched after the list):
* `vanish_frames`: The number of frames the object remains absent from the frame to be considered as vanished.
* `detect_frames`: The number of frames an object remains present in the frame to start tracking.
* `assignment_iou_thrd`: There might be multiple trackers detecting and tracking objects. The Intersection over Union (iou) threshold can be set to assign a tracker with the mentioned threshold value.
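The sketch below shows one way these options could be supplied. The exact mechanism can differ between `arcgis.learn` versions, and grouping them in a `tracker_options` dictionary is an assumption here; the call actually used in this notebook follows in the next cell.
```
# Sketch only: tracking options grouped in a dictionary (the parameter name `tracker_options` is assumed).
tracker_options = {'assignment_iou_thrd': 0.3,  # IoU threshold for assigning detections to existing trackers
                   'vanish_frames': 40,         # frames an object must be absent before its track is dropped
                   'detect_frames': 10}         # frames an object must be present before a track is started
# retinanet.predict_video(..., track=True, tracker_options=tracker_options)
```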
```
retinanet.predict_video(input_video_path=r'data/test.mp4',
metadata_file=r'data/vid1.csv',
track=True,
visualize=True,
threshold=0.5,
resize=True)
```
<video width="100%" height="450" loop="loop" controls src="data/test_predictions.mp4" />
We can count the number of vehicles per unit of time and update a feature layer with the live count of cars, buses, trucks, etc. When this process is done for multiple intersections within the city, an ArcGIS dashboard can be created that queries the continually updated feature layers and displays the results, such as the following:
<video width="100%" height="450" loop="loop" controls src="data/Video_DC_Parade.mp4" />
## Conclusion
In this notebook, we have learnt how to automate a multi-object tracking and counting system. This will not only help in intelligent traffic management but can also be useful in a wide variety of applications.
|
github_jupyter
|
import pandas as pd
from arcgis.learn import RetinaNet, prepare_data
data_path = "data/vehicle_detection"
data = prepare_data(data_path,
batch_size=4,
dataset_type="PASCAL_VOC_rectangles",
chip_size=480)
data.classes
data.show_batch()
retinanet = RetinaNet(data)
retinanet.lr_find()
retinanet.fit(100, lr=4.365158322401661e-05, tensorboard=True)
retinanet.learn.recorder.plot_losses()
retinanet.show_results(thresh=0.4)
retinanet.average_precision_score(detect_thresh=0.4)
all_classes = []
for i, bb in enumerate(data.train_ds.y):
all_classes += bb.data[1].tolist()
df = pd.value_counts(all_classes, sort=False)
df.index = [data.classes[i] for i in df.index]
df
retinanet.save('vehicle_det_ep100_defaults')
retinanet.predict_video(input_video_path=r'data/test.mp4',
metadata_file=r'data/vid1.csv',
track=True,
visualize=True,
threshold=0.5,
resize=True)
| 0.537284 | 0.9853 |
Lambda School Data Science
*Unit 4, Sprint 3, Module 3*
---
# Autoencoders
> An autoencoder is a type of artificial neural network used to learn efficient data codings in an unsupervised manner.[1][2] The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore signal “noise”. Along with the reduction side, a reconstructing side is learnt, where the autoencoder tries to generate from the reduced encoding a representation as close as possible to its original input, hence its name.
## Learning Objectives
*At the end of the lecture you should be to*:
* <a href="#p1">Part 1</a>: Describe the componenets of an autoencoder
* <a href="#p2">Part 2</a>: Train an autoencoder
* <a href="#p3">Part 3</a>: Apply an autoenocder to a basic information retrieval problem
__Problem:__ Is it possible to automatically represent an image as a fixed-sized vector even if it isn’t labeled?
__Solution:__ Use an autoencoder
Why do we need to represent an image as a fixed-sized vector, you ask?
* __Information Retrieval__
- [Reverse Image Search](https://en.wikipedia.org/wiki/Reverse_image_search)
- [Recommendation Systems - Content Based Filtering](https://en.wikipedia.org/wiki/Recommender_system#Content-based_filtering)
* __Dimensionality Reduction__
- [Feature Extraction](https://www.kaggle.com/c/vsb-power-line-fault-detection/discussion/78285)
- [Manifold Learning](https://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction)
We've already seen *representation learning* when we talked about word embedding models during our NLP week. Today we're going to achieve a similar goal on images using *autoencoders*. An autoencoder is a neural network that is trained to attempt to copy its input to its output. Usually they are restricted in ways that allow them to copy only approximately. The model often learns useful properties of the data, because it is forced to prioritize which aspects of the input should be copied. The properties of autoencoders have made them an important part of modern generative modeling approaches. Consider autoencoders a special case of feed-forward networks (the kind we've been studying); backpropagation and gradient descent still work.
# Autoencoder Architecture (Learn)
<a id="p1"></a>
## Overview
The *encoder* compresses the input data and the *decoder* does the reverse to produce the uncompressed version of the data to create a reconstruction of the input as accurately as possible:
<img src='https://miro.medium.com/max/1400/1*44eDEuZBEsmG_TCAKRI3Kw@2x.png' width=800/>
The learning process is described simply as minimizing a loss function (a small numeric illustration follows the definitions below):
$ L(x, g(f(x))) $
- $L$ is a loss function penalizing $g(f(x))$ for being dissimilar from $x$ (such as mean squared error)
- $f$ is the encoder function
- $g$ is the decoder function
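As a tiny numeric illustration (a sketch only, with made-up random linear maps standing in for $f$ and $g$), the quantity being minimized is just the reconstruction error between the input and its decoded encoding:
```
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])    # input
W_enc = np.random.randn(2, 4) * 0.1   # toy encoder f: 4 -> 2 (the bottleneck)
W_dec = np.random.randn(4, 2) * 0.1   # toy decoder g: 2 -> 4

code = W_enc @ x                      # f(x), the compressed representation
x_hat = W_dec @ code                  # g(f(x)), the reconstruction
loss = np.mean((x - x_hat) ** 2)      # L(x, g(f(x))) as mean squared error
print(code, x_hat, loss)
```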
## Follow Along
### Extremely Simple Autoencoder
```
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
import wandb
from wandb.keras import WandbCallback
# this is the size of our encoded representations
encoding_dim = 32 # 32 floats -> compression of factor 24.5, assuming the input is 784 floats
# this is our input placeholder
input_img = Input(shape=(784,))
# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim, activation='relu')(input_img)
# "decoded" is the lossy reconstruction of the input
decoded = Dense(784, activation='sigmoid')(encoded)
# this model maps an input to its reconstruction
autoencoder = Model(input_img, decoded)
# this model maps an input to its encoded representation
encoder = Model(input_img, encoded)
# create a placeholder for an encoded (32-dimensional) input
encoded_input = Input(shape=(encoding_dim,))
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# create the decoder model
decoder = Model(encoded_input, decoder_layer(encoded_input))
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
from tensorflow.keras.datasets import mnist
import numpy as np
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
print(x_train.shape)
print(x_test.shape)
wandb.init(project="mnist_ae", entity="lambda-ds6")
autoencoder.fit(x_train, x_train,
epochs=1000,
batch_size=256,
shuffle=True,
validation_data=(x_test, x_test),
verbose = False,
callbacks=[WandbCallback()])
# this model maps an input to its encoded representation
encoder = Model(input_img, encoded)
# create a placeholder for an encoded (32-dimensional) input
encoded_input = Input(shape=(encoding_dim,))
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# create the decoder model
decoder = Model(encoded_input, decoder_layer(encoded_input))
# encode and decode some digits
# note that we take them from the *test* set
encoded_imgs = encoder.predict(x_test)
decoded_imgs = decoder.predict(encoded_imgs)
# use Matplotlib (don't ask)
import matplotlib.pyplot as plt
n = 10 # how many digits we will display
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i + 1)
plt.imshow(x_test[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
#wandb.log({'inputs': [wandb.Image(x_test[:10], caption="Input")], 'outputs': [wandb.Image(decoded_imgs[:10], caption="Output")]})
```
## Challenge
You should be able to describe the components of an autoencoder and their purpose.
# Train an Autoencoder (Learn)
<a id="p2"></a>
## Overview
As long as our architecture maintains an hourglass shape, we can continue to add layers and create a deeper network.
## Follow Along
### Deep Autoencoder
```
input_img = Input(shape=(784,))
encoded = Dense(128, activation='relu')(input_img)
encoded = Dense(64, activation='relu')(encoded)
encoded = Dense(32, activation='relu')(encoded) # => This is the dry strawberry
decoded = Dense(64, activation='relu')(encoded)
decoded = Dense(128, activation='relu')(decoded)
decoded = Dense(784, activation='sigmoid')(decoded)
# compile & fit model
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='sgd',
                    loss='binary_crossentropy')
wandb.init(project="mnist_ae", entity="lambda-ds6")
autoencoder.fit(x_train, x_train,
epochs=500,
batch_size=256,
shuffle=True,
validation_data=(x_test, x_test),
verbose = False,
callbacks=[WandbCallback()])
# reconstruct some digits with the deep autoencoder
# note that we take them from the *test* set
decoded_imgs = autoencoder.predict(x_test)
# use Matplotlib (don't ask)
import matplotlib.pyplot as plt
n = 10 # how many digits we will display
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i + 1)
plt.imshow(x_test[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
```
### Convolutional autoencoder
> Since our inputs are images, it makes sense to use convolutional neural networks (convnets) as encoders and decoders. In practical settings, autoencoders applied to images are always convolutional autoencoders --they simply perform much better.
> Let's implement one. The encoder will consist in a stack of Conv2D and MaxPooling2D layers (max pooling being used for spatial down-sampling), while the decoder will consist in a stack of Conv2D and UpSampling2D layers.
```
from tensorflow.keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
from tensorflow.keras.models import Model
from tensorflow.keras import backend as K
# Create Model
input_img = Input(shape=(28,28,1))
x = Conv2D(16,(3,3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2,2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)
# at this point the representation is (4, 4, 8) i.e. 128-dimensional representation
x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
from tensorflow.keras.datasets import mnist
import numpy as np
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 28, 28, 1)) # adapt this if using `channels_first` image data format
x_test = np.reshape(x_test, (len(x_test), 28, 28, 1)) # adapt this if using `channels_first` image data format
wandb.init(project="mnist_ae", entity="lambda-ds6")
autoencoder.fit(x_train, x_train,
epochs=100,
batch_size=256,
shuffle=True,
validation_data=(x_test, x_test),
verbose=False,
callbacks=[WandbCallback()])
# this model maps an input to its encoded representation
encoder = Model(input_img, encoded)
# for the convolutional autoencoder we reconstruct directly with autoencoder.predict()
# (the single Dense decoder-layer trick only applies to the simple autoencoder above)
decoded_imgs = autoencoder.predict(x_test)
decoded_imgs.shape
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
    ax = plt.subplot(2, n, i + 1)
plt.imshow(x_test[i].reshape(28,28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
    ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].reshape(28,28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
```
#### Visualization of the Representations
```
encoder = Model(input_img, encoded)
encoded_imgs = encoder.predict(x_test)  # assign the codes so they can be displayed below
n = 10
plt.figure(figsize=(20, 8))
for i in range(n):
    ax = plt.subplot(1, n, i + 1)
plt.imshow(encoded_imgs[i].reshape(4, 4 * 8).T)
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
```
## Challenge
You will train an autoencoder at some point in the near future.
# Information Retrieval with Autoencoders (Learn)
<a id="p3"></a>
## Overview
A common use case for autoencoders is reverse image search. Let's try to draw an image and see what's most similar in our dataset.
To accomplish this we will need to slice our autoencoder in half to extract our reduced features (a query sketch follows the code below). :)
## Follow Along
```
#encoded = Flatten()(encoded) # Applies only to Conv AE
encoder = Model(input_img, encoded)
encoded_imgs = encoder.predict(x_train)
encoded_imgs[0].T
from sklearn.neighbors import NearestNeighbors
nn = NearestNeighbors(n_neighbors=10, algorithm='ball_tree')
nn.fit(encoded_imgs)
nn.kneighbors(...)
```
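If you want to actually run a similarity query, a minimal sketch is below. It assumes the 32-dimensional dense encoder from earlier (for the convolutional encoder you would `Flatten()` the codes first), and the variable names are illustrative.
```
# Hypothetical query: encode one test digit and look up its nearest training codes
query_code = encoder.predict(x_test[:1])
distances, indices = nn.kneighbors(query_code)   # uses the 10 neighbors configured above
print(indices[0])                                # row indices into x_train
plt.imshow(x_train[indices[0][0]].reshape(28, 28), cmap='gray')
plt.show()
```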
## Challenge
You should already be familiar with KNN and similarity queries, so the key component of this section is knowing what to 'slice' from your autoencoder (the encoder) to extract features from your data.
# Review
* <a href="#p1">Part 1</a>: Describe the components of an autoencoder
    - Encoder
    - Decoder
* <a href="#p2">Part 2</a>: Train an autoencoder
- Can do in Keras Easily
- Can use a variety of architectures
- Architectures must follow hourglass shape
* <a href="#p3">Part 3</a>: Apply an autoencoder to a basic information retrieval problem
    - Extract just the encoder to use for various tasks
    - AEs are good for dimensionality reduction, reverse image search, and many more things.
# Sources
__References__
- [Building Autoencoders in Keras](https://blog.keras.io/building-autoencoders-in-keras.html)
- [Deep Learning Cookbook](http://shop.oreilly.com/product/0636920097471.do)
__Additional Material__
|
github_jupyter
|
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
import wandb
from wandb.keras import WandbCallback
# this is the size of our encoded representations
encoding_dim = 32 # 32 floats -> compression of factor 24.5, assuming the input is 784 floats
# this is our input placeholder
input_img = Input(shape=(784,))
# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim, activation='relu')(input_img)
# "decoded" is the lossy reconstruction of the input
decoded = Dense(784, activation='sigmoid')(encoded)
# this model maps an input to its reconstruction
autoencoder = Model(input_img, decoded)
# this model maps an input to its encoded representation
encoder = Model(input_img, encoded)
# create a placeholder for an encoded (32-dimensional) input
encoded_input = Input(shape=(encoding_dim,))
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# create the decoder model
decoder = Model(encoded_input, decoder_layer(encoded_input))
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
from tensorflow.keras.datasets import mnist
import numpy as np
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
print(x_train.shape)
print(x_test.shape)
wandb.init(project="mnist_ae", entity="lambda-ds6")
autoencoder.fit(x_train, x_train,
epochs=1000,
batch_size=256,
shuffle=True,
validation_data=(x_test, x_test),
verbose = False,
callbacks=[WandbCallback()])
# this model maps an input to its encoded representation
encoder = Model(input_img, encoded)
# create a placeholder for an encoded (32-dimensional) input
encoded_input = Input(shape=(encoding_dim,))
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# create the decoder model
decoder = Model(encoded_input, decoder_layer(encoded_input))
# encode and decode some digits
# note that we take them from the *test* set
encoded_imgs = encoder.predict(x_test)
decoded_imgs = decoder.predict(encoded_imgs)
# use Matplotlib (don't ask)
import matplotlib.pyplot as plt
n = 10 # how many digits we will display
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i + 1)
plt.imshow(x_test[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
#wandb.log({'inputs': [wandb.Image(x_test[:10], caption="Input")], 'outputs': [wandb.Image(decoded_imgs[:10], caption="Output")]})
input_img = Input(shape=(784,))
encoded = Dense(128, activation='relu')(input_img)
encoded = Dense(64, activation='relu')(encoded)
encoded = Dense(32, activation='relu')(encoded) # => This is the dry strawberry
decoded = Dense(64, activation='relu')(encoded)
decoded = Dense(128, activation='relu')(decoded)
decoded = Dense(784, activation='sigmoid')(decoded)
# compile & fit model
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='sgd',
loss='binary_crossentropy'
)
wandb.init(project="mnist_ae", entity="lambda-ds6")
autoencoder.fit(x_train, x_train,
epochs=500,
batch_size=256,
shuffle=True,
validation_data=(x_test, x_test),
verbose = False,
callbacks=[WandbCallback()])
# reconstruct some digits with the newly trained deep autoencoder
# note that we take them from the *test* set
decoded_imgs = autoencoder.predict(x_test)
# use Matplotlib (don't ask)
import matplotlib.pyplot as plt
n = 10 # how many digits we will display
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i + 1)
plt.imshow(x_test[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
from tensorflow.keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
from tensorflow.keras.models import Model
from tensorflow.keras import backend as K
# Create Model
input_img = Input(shape=(28,28,1))
x = Conv2D(16,(3,3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2,2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)
# at this point the representation is (4, 4, 8) i.e. 128-dimensional representation
x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
from tensorflow.keras.datasets import mnist
import numpy as np
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 28, 28, 1)) # adapt this if using `channels_first` image data format
x_test = np.reshape(x_test, (len(x_test), 28, 28, 1)) # adapt this if using `channels_first` image data format
wandb.init(project="mnist_ae", entity="lambda-ds6")
autoencoder.fit(x_train, x_train,
epochs=100,
batch_size=256,
shuffle=True,
validation_data=(x_test, x_test),
verbose=False,
callbacks=[WandbCallback()])
# this model maps an input to its encoded representation
encoder = Model(input_img, encoded)
# for the convolutional autoencoder we reconstruct directly with autoencoder.predict()
# (the single Dense decoder-layer trick only applies to the simple autoencoder above)
decoded_imgs = autoencoder.predict(x_test)
decoded_imgs.shape
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
    ax = plt.subplot(2, n, i + 1)
plt.imshow(x_test[i].reshape(28,28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
    ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].reshape(28,28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
encoder = Model(input_img, encoded)
encoded_imgs = encoder.predict(x_test)  # assign the codes so they can be displayed below
n = 10
plt.figure(figsize=(20, 8))
for i in range(n):
    ax = plt.subplot(1, n, i + 1)
plt.imshow(encoded_imgs[i].reshape(4, 4 * 8).T)
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
#encoded = Flatten()(encoded) # Applies only to Conv AE
encoder = Model(input_img, encoded)
encoded_imgs = encoder.predict(x_train)
encoded_imgs[0].T
from sklearn.neighbors import NearestNeighbors
nn = NearestNeighbors(n_neighbors=10, algorithm='ball_tree')
nn.fit(encoded_imgs)
nn.kneighbors(...)
| 0.885829 | 0.985398 |

<h1><center>Microsoft Malware Prediction</center></h1>
<h2><center>Can you predict if a machine will soon be hit with malware?</center></h2>
### Dependencies
```
#@title
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler, LabelEncoder, MinMaxScaler, OneHotEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score
from keras import optimizers
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
warnings.filterwarnings("ignore")
```
#### Upload Kaggle json
```
#@title
# Colab's file access feature
from google.colab import files
#retrieve uploaded file
uploaded = files.upload()
#print results
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
# Then move kaggle.json into the folder where the API expects to find it.
!mkdir -p ~/.kaggle/ && mv kaggle.json ~/.kaggle/ && chmod 600 ~/.kaggle/kaggle.json
```
#### Load and unzip competition files
```
#@title
#download microsoft-malware-prediction data
!kaggle competitions download -c microsoft-malware-prediction
#@title
!unzip train.csv.zip
!unzip test.csv.zip
!ls
```
#### Load data
```
dtypes = {
'MachineIdentifier': 'category',
'ProductName': 'category',
'RtpStateBitfield': 'float16',
'IsSxsPassiveMode': 'int8',
'AVProductsInstalled': 'float16',
'AVProductsEnabled': 'float16',
'OrganizationIdentifier': 'float16',
'Platform': 'category',
'Processor': 'category',
'OsSuite': 'int16',
'OsPlatformSubRelease': 'category',
'SkuEdition': 'category',
'IsProtected': 'float16',
'SmartScreen': 'category',
'UacLuaenable': 'float32',
'Census_MDC2FormFactor': 'category',
'Census_DeviceFamily': 'category',
'Census_ProcessorCoreCount': 'float16',
'Census_ProcessorManufacturerIdentifier': 'float16',
'Census_PrimaryDiskTypeName': 'category',
'Census_SystemVolumeTotalCapacity': 'float32',
'Census_TotalPhysicalRAM': 'float32',
'Census_ChassisTypeName': 'category',
'Census_InternalPrimaryDiagonalDisplaySizeInInches': 'float16',
'Census_InternalPrimaryDisplayResolutionVertical': 'float16',
'Census_PowerPlatformRoleName': 'category',
'Census_OSArchitecture': 'category',
'Census_OSBranch': 'category',
'Census_OSEdition': 'category',
'Census_OSSkuName': 'category',
'Census_OSInstallTypeName': 'category',
'Census_OSInstallLanguageIdentifier': 'float16',
'Census_OSWUAutoUpdateOptionsName': 'category',
'Census_GenuineStateName': 'category',
'Census_ActivationChannel': 'category',
'Census_FlightRing': 'category',
'Census_IsVirtualDevice': 'float16',
'Census_IsTouchEnabled': 'int8',
'Census_IsAlwaysOnAlwaysConnectedCapable': 'float16',
'Wdft_IsGamer': 'float16',
'Wdft_RegionIdentifier': 'float16',
'AvSigVersion': 'category',
'OsBuildLab': 'category',
'Census_OSVersion': 'category',
'AppVersion': 'category',
'EngineVersion': 'category',
'OsVer': 'category',
'HasDetections': 'int8'
}
label = ['HasDetections']
ids = ['MachineIdentifier']
numerical_features = ['AVProductsEnabled', 'AVProductsInstalled', 'Census_InternalPrimaryDiagonalDisplaySizeInInches',
'Census_InternalPrimaryDisplayResolutionVertical', 'Census_ProcessorCoreCount',
'Census_SystemVolumeTotalCapacity', 'Census_TotalPhysicalRAM', 'RtpStateBitfield']
binary_features = ['Census_IsAlwaysOnAlwaysConnectedCapable', 'Census_IsTouchEnabled', 'Census_IsVirtualDevice',
'IsProtected', 'IsSxsPassiveMode', 'Wdft_IsGamer']
# low_cardinality_features = ['Census_ActivationChannel', 'Census_ChassisTypeName', 'Census_DeviceFamily',
# 'Census_FlightRing', 'Census_GenuineStateName', 'Census_MDC2FormFactor',
# 'Census_OSArchitecture', 'Census_OSBranch', 'Census_OSEdition',
# 'Census_OSInstallLanguageIdentifier', 'Census_OSInstallTypeName',
# 'Census_OSSkuName', 'Census_OSWUAutoUpdateOptionsName',
# 'Census_PowerPlatformRoleName', 'Census_PrimaryDiskTypeName',
# 'Census_ProcessorManufacturerIdentifier', 'OsPlatformSubRelease',
# 'OsSuite', 'Platform', 'Processor', 'ProductName', 'SkuEdition',
# 'SmartScreen', 'UacLuaenable', 'Wdft_RegionIdentifier']
version_features = ['AvSigVersion', 'OsBuildLab', 'Census_OSVersion', 'AppVersion', 'EngineVersion', 'OsVer']
use_columns = numerical_features + binary_features + version_features
train = pd.read_csv('train.csv', dtype=dtypes, usecols=(use_columns + label))
test = pd.read_csv('test.csv', dtype=dtypes, usecols=(use_columns + ids))
# Get test ids
test_ids = test['MachineIdentifier']
test.drop('MachineIdentifier', axis=1, inplace=True)
#@title
print('Dataset number of records: %s' % train.shape[0])
print('Dataset number of columns: %s' % train.shape[1])
display(train.head())
display(train[numerical_features].describe().T)
display(train[binary_features].describe().T)
display(train[version_features].describe().T)
```
### Reduce granularity on version features
```
for feature in version_features:
if feature in ['EngineVersion']:
train[feature] = train[feature].apply(lambda x : ".".join(x.split('.')[:3]))
test[feature] = test[feature].apply(lambda x : ".".join(x.split('.')[:3]))
elif feature in ['OsBuildLab']:
train[feature] = train[feature].apply(lambda x : ".".join(x.split('.')[:1]))
test[feature] = test[feature].apply(lambda x : ".".join(x.split('.')[:1]))
else:
train[feature] = train[feature].apply(lambda x : ".".join(x.split('.')[:2]))
test[feature] = test[feature].apply(lambda x : ".".join(x.split('.')[:2]))
# Remove rows with missing values
train.dropna(inplace=True)
print('Dataset number of records: %s' % train.shape[0])
print('Dataset number of columns: %s' % train.shape[1])
display(train[numerical_features].describe().T)
display(train[binary_features].describe().T)
display(train[version_features].describe().T)
X_train, X_val = train_test_split(train, test_size=0.2, random_state=1)
```
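For reference, "reducing granularity" just truncates the dotted version strings to their leading fields so that rare full versions collapse into broader groups. A quick illustration with a made-up version string:
```
# Illustrative only: a made-up version string truncated to its first two fields
example_version = '1.273.1420.0'
print('.'.join(example_version.split('.')[:2]))   # -> '1.273'
```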
### Mean encoding on categorical features
```
for feature in version_features:
feature_target_mean = X_train.groupby(feature)['HasDetections'].mean()
X_train[('%s_target_enc' % feature)] = X_train[feature].map(feature_target_mean)
X_val[('%s_target_enc' % feature)] = X_val[feature].map(feature_target_mean)
test[('%s_target_enc' % feature)] = test[feature].map(feature_target_mean)
# Drop non-encoded features
for feature in version_features:
X_train.drop([feature], axis=1, inplace=True)
X_val.drop([feature], axis=1, inplace=True)
test.drop([feature], axis=1, inplace=True)
```
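As a quick sanity check on what the mean (target) encoding above produces, here is a toy example with made-up data:
```
import pandas as pd

toy = pd.DataFrame({'EngineVersion': ['1.1', '1.1', '1.2', '1.3'],
                    'HasDetections': [1, 0, 1, 1]})
means = toy.groupby('EngineVersion')['HasDetections'].mean()
toy['EngineVersion_target_enc'] = toy['EngineVersion'].map(means)
print(toy)   # '1.1' -> 0.5, '1.2' -> 1.0, '1.3' -> 1.0
```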
#### Normalize using Min-Max scaling
```
minMax_scaler = MinMaxScaler()
X_train[numerical_features] = minMax_scaler.fit_transform(X_train[numerical_features])
X_val[numerical_features] = minMax_scaler.transform(X_val[numerical_features])
test[numerical_features] = minMax_scaler.transform(test[numerical_features])
```
### Fill missing values with mean
```
X_train.fillna(X_train.mean(), inplace=True)
X_val.fillna(X_val.mean(), inplace=True)
# Get labels
Y_train = X_train['HasDetections'].values
Y_val = X_val['HasDetections'].values
X_train.drop(['HasDetections'], axis=1, inplace=True)
X_val.drop(['HasDetections'], axis=1, inplace=True)
X_train.head().T
```
#### Model parameters
```
#@title
BATCH_SIZE = 128
EPOCHS = 30
LEARNING_RATE = 0.001
print('Dataset size: %s' % X_train.shape[0])
print('Epochs: %s' % EPOCHS)
print('Learning rate: %s' % LEARNING_RATE)
print('Batch size: %s' % BATCH_SIZE)
print('Input dimension: %s' % X_train.shape[1])
print('Features used: %s' % X_train.columns.values)
model = Sequential()
model.add(Dense(64, kernel_initializer='glorot_normal', activation='relu', input_dim=X_train.shape[1]))
model.add(Dense(32, kernel_initializer='glorot_normal', activation='relu'))
model.add(Dense(16, kernel_initializer='glorot_normal', activation='relu'))
model.add(Dense(1, activation='sigmoid'))
adam = optimizers.adam(lr=LEARNING_RATE)
model.compile(loss='binary_crossentropy', optimizer=adam)
model.summary()
history = model.fit(x=X_train.values, y=Y_train, batch_size=BATCH_SIZE, epochs=EPOCHS,
verbose=1, validation_data=(X_val.values, Y_val))
```
#### Plot graph metrics
```
#@title
def plot_metrics(loss, val_loss):
fig, (ax1) = plt.subplots(1, 1, sharex='col', figsize=(20,7))
ax1.plot(loss, label='Train loss')
ax1.plot(val_loss, label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
plt.xlabel('Epochs')
plot_metrics(history.history['loss'], history.history['val_loss'])
train_predictions = model.predict_classes(X_train.values)
val_predictions = model.predict_classes(X_val.values)
#@title
print('-----Train-----')
print(classification_report(Y_train, train_predictions))
print('-----Validation-----')
print(classification_report(Y_val, val_predictions))
#@title
f, axes = plt.subplots(1, 2, figsize=(16, 5), sharex=True)
train_cnf_matrix = confusion_matrix(Y_train, train_predictions)
val_cnf_matrix = confusion_matrix(Y_val, val_predictions)
train_cnf_matrix_norm = train_cnf_matrix / train_cnf_matrix.sum(axis=1)[:, np.newaxis]
val_cnf_matrix_norm = val_cnf_matrix / val_cnf_matrix.sum(axis=1)[:, np.newaxis]
train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=[0, 1], columns=[0, 1])
val_df_cm = pd.DataFrame(val_cnf_matrix_norm, index=[0, 1], columns=[0, 1])
sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap="Blues", ax=axes[0]).set_title("Train")
sns.heatmap(val_df_cm, annot=True, fmt='.2f', cmap="Blues", ax=axes[1]).set_title("Validation")
plt.show()
predictions = model.predict(test)
submission = pd.DataFrame({"MachineIdentifier":test_ids})
submission["HasDetections"] = predictions
submission.to_csv("submission.csv", index=False)
submission.head(10)
```
|
github_jupyter
|
#@title
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler, LabelEncoder, MinMaxScaler, OneHotEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score
from keras import optimizers
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
warnings.filterwarnings("ignore")
#@title
# Colab's file access feature
from google.colab import files
#retrieve uploaded file
uploaded = files.upload()
#print results
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
# Then move kaggle.json into the folder where the API expects to find it.
!mkdir -p ~/.kaggle/ && mv kaggle.json ~/.kaggle/ && chmod 600 ~/.kaggle/kaggle.json
#@title
#download microsoft-malware-prediction data
!kaggle competitions download -c microsoft-malware-prediction
#@title
!unzip train.csv.zip
!unzip test.csv.zip
!ls
dtypes = {
'MachineIdentifier': 'category',
'ProductName': 'category',
'RtpStateBitfield': 'float16',
'IsSxsPassiveMode': 'int8',
'AVProductsInstalled': 'float16',
'AVProductsEnabled': 'float16',
'OrganizationIdentifier': 'float16',
'Platform': 'category',
'Processor': 'category',
'OsSuite': 'int16',
'OsPlatformSubRelease': 'category',
'SkuEdition': 'category',
'IsProtected': 'float16',
'SmartScreen': 'category',
'UacLuaenable': 'float32',
'Census_MDC2FormFactor': 'category',
'Census_DeviceFamily': 'category',
'Census_ProcessorCoreCount': 'float16',
'Census_ProcessorManufacturerIdentifier': 'float16',
'Census_PrimaryDiskTypeName': 'category',
'Census_SystemVolumeTotalCapacity': 'float32',
'Census_TotalPhysicalRAM': 'float32',
'Census_ChassisTypeName': 'category',
'Census_InternalPrimaryDiagonalDisplaySizeInInches': 'float16',
'Census_InternalPrimaryDisplayResolutionVertical': 'float16',
'Census_PowerPlatformRoleName': 'category',
'Census_OSArchitecture': 'category',
'Census_OSBranch': 'category',
'Census_OSEdition': 'category',
'Census_OSSkuName': 'category',
'Census_OSInstallTypeName': 'category',
'Census_OSInstallLanguageIdentifier': 'float16',
'Census_OSWUAutoUpdateOptionsName': 'category',
'Census_GenuineStateName': 'category',
'Census_ActivationChannel': 'category',
'Census_FlightRing': 'category',
'Census_IsVirtualDevice': 'float16',
'Census_IsTouchEnabled': 'int8',
'Census_IsAlwaysOnAlwaysConnectedCapable': 'float16',
'Wdft_IsGamer': 'float16',
'Wdft_RegionIdentifier': 'float16',
'AvSigVersion': 'category',
'OsBuildLab': 'category',
'Census_OSVersion': 'category',
'AppVersion': 'category',
'EngineVersion': 'category',
'OsVer': 'category',
'HasDetections': 'int8'
}
label = ['HasDetections']
ids = ['MachineIdentifier']
numerical_features = ['AVProductsEnabled', 'AVProductsInstalled', 'Census_InternalPrimaryDiagonalDisplaySizeInInches',
'Census_InternalPrimaryDisplayResolutionVertical', 'Census_ProcessorCoreCount',
'Census_SystemVolumeTotalCapacity', 'Census_TotalPhysicalRAM', 'RtpStateBitfield']
binary_features = ['Census_IsAlwaysOnAlwaysConnectedCapable', 'Census_IsTouchEnabled', 'Census_IsVirtualDevice',
'IsProtected', 'IsSxsPassiveMode', 'Wdft_IsGamer']
# low_cardinality_features = ['Census_ActivationChannel', 'Census_ChassisTypeName', 'Census_DeviceFamily',
# 'Census_FlightRing', 'Census_GenuineStateName', 'Census_MDC2FormFactor',
# 'Census_OSArchitecture', 'Census_OSBranch', 'Census_OSEdition',
# 'Census_OSInstallLanguageIdentifier', 'Census_OSInstallTypeName',
# 'Census_OSSkuName', 'Census_OSWUAutoUpdateOptionsName',
# 'Census_PowerPlatformRoleName', 'Census_PrimaryDiskTypeName',
# 'Census_ProcessorManufacturerIdentifier', 'OsPlatformSubRelease',
# 'OsSuite', 'Platform', 'Processor', 'ProductName', 'SkuEdition',
# 'SmartScreen', 'UacLuaenable', 'Wdft_RegionIdentifier']
version_features = ['AvSigVersion', 'OsBuildLab', 'Census_OSVersion', 'AppVersion', 'EngineVersion', 'OsVer']
use_columns = numerical_features + binary_features + version_features
train = pd.read_csv('train.csv', dtype=dtypes, usecols=(use_columns + label))
test = pd.read_csv('test.csv', dtype=dtypes, usecols=(use_columns + ids))
# Get test ids
test_ids = test['MachineIdentifier']
test.drop('MachineIdentifier', axis=1, inplace=True)
#@title
print('Dataset number of records: %s' % train.shape[0])
print('Dataset number of columns: %s' % train.shape[1])
display(train.head())
display(train[numerical_features].describe().T)
display(train[binary_features].describe().T)
display(train[version_features].describe().T)
for feature in version_features:
if feature in ['EngineVersion']:
train[feature] = train[feature].apply(lambda x : ".".join(x.split('.')[:3]))
test[feature] = test[feature].apply(lambda x : ".".join(x.split('.')[:3]))
elif feature in ['OsBuildLab']:
train[feature] = train[feature].apply(lambda x : ".".join(x.split('.')[:1]))
test[feature] = test[feature].apply(lambda x : ".".join(x.split('.')[:1]))
else:
train[feature] = train[feature].apply(lambda x : ".".join(x.split('.')[:2]))
test[feature] = test[feature].apply(lambda x : ".".join(x.split('.')[:2]))
# Remove rows with missing values
train.dropna(inplace=True)
print('Dataset number of records: %s' % train.shape[0])
print('Dataset number of columns: %s' % train.shape[1])
display(train[numerical_features].describe().T)
display(train[binary_features].describe().T)
display(train[version_features].describe().T)
X_train, X_val = train_test_split(train, test_size=0.2, random_state=1)
for feature in version_features:
feature_target_mean = X_train.groupby(feature)['HasDetections'].mean()
X_train[('%s_target_enc' % feature)] = X_train[feature].map(feature_target_mean)
X_val[('%s_target_enc' % feature)] = X_val[feature].map(feature_target_mean)
test[('%s_target_enc' % feature)] = test[feature].map(feature_target_mean)
# Drop non-encoded features
for feature in version_features:
X_train.drop([feature], axis=1, inplace=True)
X_val.drop([feature], axis=1, inplace=True)
test.drop([feature], axis=1, inplace=True)
minMax_scaler = MinMaxScaler()
X_train[numerical_features] = minMax_scaler.fit_transform(X_train[numerical_features])
X_val[numerical_features] = minMax_scaler.transform(X_val[numerical_features])
test[numerical_features] = minMax_scaler.transform(test[numerical_features])
X_train.fillna(X_train.mean(), inplace=True)
X_val.fillna(X_val.mean(), inplace=True)
# Get labels
Y_train = X_train['HasDetections'].values
Y_val = X_val['HasDetections'].values
X_train.drop(['HasDetections'], axis=1, inplace=True)
X_val.drop(['HasDetections'], axis=1, inplace=True)
X_train.head().T
#@title
BATCH_SIZE = 128
EPOCHS = 30
LEARNING_RATE = 0.001
print('Dataset size: %s' % X_train.shape[0])
print('Epochs: %s' % EPOCHS)
print('Learning rate: %s' % LEARNING_RATE)
print('Batch size: %s' % BATCH_SIZE)
print('Input dimension: %s' % X_train.shape[1])
print('Features used: %s' % X_train.columns.values)
model = Sequential()
model.add(Dense(64, kernel_initializer='glorot_normal', activation='relu', input_dim=X_train.shape[1]))
model.add(Dense(32, kernel_initializer='glorot_normal', activation='relu'))
model.add(Dense(16, kernel_initializer='glorot_normal', activation='relu'))
model.add(Dense(1, activation='sigmoid'))
adam = optimizers.adam(lr=LEARNING_RATE)
model.compile(loss='binary_crossentropy', optimizer=adam)
model.summary()
history = model.fit(x=X_train.values, y=Y_train, batch_size=BATCH_SIZE, epochs=EPOCHS,
verbose=1, validation_data=(X_val.values, Y_val))
#@title
def plot_metrics(loss, val_loss):
fig, (ax1) = plt.subplots(1, 1, sharex='col', figsize=(20,7))
ax1.plot(loss, label='Train loss')
ax1.plot(val_loss, label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
plt.xlabel('Epochs')
plot_metrics(history.history['loss'], history.history['val_loss'])
train_predictions = model.predict_classes(X_train.values)
val_predictions = model.predict_classes(X_val.values)
#@title
print('-----Train-----')
print(classification_report(Y_train, train_predictions))
print('-----Validation-----')
print(classification_report(Y_val, val_predictions))
#@title
f, axes = plt.subplots(1, 2, figsize=(16, 5), sharex=True)
train_cnf_matrix = confusion_matrix(Y_train, train_predictions)
val_cnf_matrix = confusion_matrix(Y_val, val_predictions)
train_cnf_matrix_norm = train_cnf_matrix / train_cnf_matrix.sum(axis=1)[:, np.newaxis]
val_cnf_matrix_norm = val_cnf_matrix / val_cnf_matrix.sum(axis=1)[:, np.newaxis]
train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=[0, 1], columns=[0, 1])
val_df_cm = pd.DataFrame(val_cnf_matrix_norm, index=[0, 1], columns=[0, 1])
sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap="Blues", ax=axes[0]).set_title("Train")
sns.heatmap(val_df_cm, annot=True, fmt='.2f', cmap="Blues", ax=axes[1]).set_title("Validation")
plt.show()
predictions = model.predict(test)
submission = pd.DataFrame({"MachineIdentifier":test_ids})
submission["HasDetections"] = predictions
submission.to_csv("submission.csv", index=False)
submission.head(10)
| 0.337968 | 0.726177 |
```
from os.path import join,exists,realpath,dirname,basename
from os import makedirs,listdir, system
import numpy as np, cPickle, editdistance, seaborn as sns
import matplotlib.pyplot as plt, pandas as pd, itertools, glob, h5py
from scipy.stats import entropy
from matplotlib.font_manager import FontProperties
from IPython.display import display
from collections import defaultdict
from IPython.display import display
from itertools import izip
from scipy.stats import ranksums
import multiprocessing as mp
sns.set_style("whitegrid")
%matplotlib inline
rundir = '/cluster/zeng/code/research/OFTL-GAN/runs/motif_spikein_ATAGGC_50runs'
mapper = ['A','C','G','T']
re_mapper = {'A':0,'C':1,'G':2,'T':3}
def decode(data):
mydict = ['A', 'C', 'G', 'T']
out = []
for x in data:
out.append(''.join([mydict[np.argmax(y)] for y in x.squeeze().transpose()]))
return np.asarray(out)
def retrieve_bestrun_samples(pattern, use_abs=False):
runs = glob.glob(pattern)
bestloss = None
bestrun = ''
for run in runs:
with open(join(run, 'history.pkl')) as f:
history = cPickle.load(f)
test_disc_loss = history['test']['discriminator']
if type(test_disc_loss[0]) is not float and type(test_disc_loss[0]) is not np.float64:
test_disc_loss = [x[0] for x in test_disc_loss]
if use_abs:
test_disc_loss = np.abs(test_disc_loss)
t_argbest = np.argmin(test_disc_loss)
if bestloss is None or test_disc_loss[t_argbest] < bestloss:
bestloss = test_disc_loss[t_argbest]
bestrun = join(run, 'samples_epoch_{0:03d}_generated.pkl'.format(t_argbest))
with open(bestrun) as f:
best_sample = cPickle.load(f)
return best_sample, bestrun, bestloss
def plot_acgt_distr(rundir, epoch_num, seqlen):
all_distr = {'A':[], 'C':[], 'G':[], 'T':[]}
for epoch in range(epoch_num):
with open(join(rundir, 'samples_epoch_{0:03d}_generated.pkl'.format(epoch))) as f:
sample = cPickle.load(f).squeeze().swapaxes(1,2)
distr = defaultdict(int)
for x in sample:
for y in x:
distr[mapper[y.argmax()]]+=1
for x in ['A', 'C', 'G', 'T']:
all_distr[x].append(distr[x]/float(len(sample)))
for x in ['A', 'C', 'G', 'T']:
plt.plot(range(epoch_num), all_distr[x], label=x)
plt.legend()
def data2prob(data_ori,idx_mapper):
    # Calculate the empirical kmer distribution from a generated dataset
out = np.zeros(len(idx_mapper.keys()))
data = data_ori.squeeze().swapaxes(1,2)
for x in data:
t_data = [ mapper[y.argmax()] for y in x]
out[idx_mapper[''.join(t_data)]] += 1
return out/sum(out)
def kl_prepare(motif_file, seqlen):
    # All possible kmers
candidate = [''.join(p) for p in itertools.product(mapper, repeat=seqlen)]
# Map each kmer to its index in the list
idx_mapper = dict()
for idx,x in enumerate(candidate):
idx_mapper[x] = idx
# Read the motif
with open(motif_file) as f:
f.readline()
motif_mat = [map(float,x.split()) for x in f]
# Calculate the expected probability of each kmer
design_p = np.zeros(len(candidate))
for idx,x in enumerate(candidate):
t_p = 1.0
for cidx, c in enumerate(list(x)):
t_p *= motif_mat[cidx][re_mapper[c]]
design_p[idx] = t_p
return idx_mapper, design_p
def kl_compare(samples, idx_mapper, design_p):
pred_p = data2prob(samples, idx_mapper)
return entropy(pred_p, design_p) - entropy(pred_p, pred_p)
def comparePWM(samples, motif_file, seqlen, ):
# Read the motif
with open(motif_file) as f:
f.readline()
motif_mat = [map(float,x.split()) for x in f]
empirical = np.zeros((seqlen, 4))
mydict = {'A':0, 'C':1, 'G':2, 'T':3}
print 1
for s in samples:
t_s = np.copy(s).squeeze().transpose()
for pos, c in enumerate(t_s):
empirical[pos][np.argmax(c)] += 1
print 2
for i in range(seqlen):
empirical[i] /= sum(empirical[i])
diff = empirical - motif_mat
sns.heatmap(diff)
plt.show()
dataset = ''
motif_file = join(rundir, '../../data/motifs/ATAGGC.pwm')
motif_len = 6
versions = ['1', '2' , '3']
schedules = ['None', 'adagrad', 'nesterov0.9', 'momentum0.9', 'adam']
optimizers = ['OMDA', 'SGD', 'optimAdam']
network = 'wgan'
ginters = [1, 5]
lrs = ['5e-02', '5e-03', '5e-04']
n_epoch = 100
rename_dict = {'SOMDv1':'SOMD_ver1',
'SOMDv2':'SOMD_ver2',
'SOMDv3':'SOMD_ver3',
'SGD':'SGD_vanilla',
'adagrad':'SGD_adagrad',
'nesterov':'SGD_nesterov',
'momentum':'SGD_momentum',
'SOMDv1_ratio1':'SOMD_ver1_ratio1',
'SOMDv2_ratio1':'SOMD_ver2_ratio1',
'SOMDv3_ratio1':'SOMD_ver3_ratio1',
'adam':'SGD_adam',
'optimAdam':'optimAdam',
'optimAdam_ratio1':'optimAdam_ratio1',}
```
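A quick note on `kl_compare` above: `scipy.stats.entropy(p, q)` with two arguments returns the KL divergence D(p||q), and `entropy(p, p)` is 0, so the subtraction only serves as an explicit zero reference. A tiny check with made-up distributions:
```
from scipy.stats import entropy
p = [0.5, 0.25, 0.25]
q = [0.25, 0.5, 0.25]
print(entropy(p, q))   # KL(p || q), a positive number
print(entropy(p, p))   # 0.0
```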
#### Compare the performance of the iteration with the lowest test loss
```
pattern = join(rundir, 'runRUN', 'OPTIMIZER_SCHEDULE_vVERSION_lr*_NETWORK_ginterGINTER_gp1e-4')
idx_mapper, design_p = kl_prepare(motif_file, motif_len)
args = []
params = []
for run in range(50):
for optimizer in optimizers:
versions2use = versions if optimizer == 'OMDA' else ['0']
for version in versions2use:
schedule2use = schedules if optimizer == 'SGD' else ['None']
ginter2use = ginters if optimizer != 'SGD' else [5]
for ginter in ginter2use:
for schedule in schedule2use:
t_pattern = pattern.replace('OPTIMIZER', optimizer)
t_pattern = t_pattern.replace('SCHEDULE', schedule)
t_pattern = t_pattern.replace('VERSION', version)
t_pattern = t_pattern.replace('NETWORK', network)
t_pattern = t_pattern.replace('GINTER', str(ginter))
t_pattern = t_pattern.replace('RUN', str(run))
#t_pattern = t_pattern.replace('LR', str(lrs[0]))
barcode = '_'.join([optimizer, version, schedule, str(ginter)])
args.append([t_pattern, idx_mapper, design_p])
params.append([optimizer, version, schedule, run, barcode])
def lowest_loss_slave(args):
t_pattern, idx_mapper, design_p = args[:]
best_sample, bestrun, bestloss = retrieve_bestrun_samples(t_pattern)
return kl_compare(best_sample, idx_mapper, design_p)
pool = mp.Pool(processes=16)
all_kl=pool.map(lowest_loss_slave, args)
pool.close()
pool.join()
bestval_perform = []
for param, kl in izip(params, all_kl):
bestval_perform.append(param + [kl])
df = pd.DataFrame(bestval_perform, columns=['optimizer', 'version', 'schedule', 'run', 'method', 'KL'])
df_rename = pd.DataFrame()
df_rename['KL Divergence'] = df['KL']
df_rename['Method'] = [rename_dict[x] for x in df['method']]
median_df = pd.DataFrame()
median_df['Median KL'] = [np.median(df_rename[df_rename['Method']==m]['KL Divergence'])
for m in np.unique(df_rename['Method'])]
median_df['Method'] = np.unique(df_rename['Method'])
ax=sns.boxplot(x='KL Divergence', y='Method', data=df_rename, order=median_df.sort_values('Median KL')['Method'])
ax.get_figure().savefig('experimental_lowestval.eps',bbox_inches='tight')
```
#### Compare the performance of the last iteration
```
pattern = join(rundir, 'runRUN', 'OPTIMIZER_SCHEDULE_vVERSION_lrLR_NETWORK_ginterGINTER_gp1e-4')
idx_mapper, design_p = kl_prepare(motif_file, motif_len)
args = []
params = []
for run in range(50):
for optimizer in optimizers:
versions2use = versions if optimizer =='OMDA' else ['0']
for version in versions2use:
schedule2use = schedules if optimizer == 'SGD' else ['None']
ginter2use = ginters if optimizer != 'SGD' else [5]
for ginter in ginter2use:
for schedule in schedule2use:
for lr in lrs:
t_pattern = pattern.replace('OPTIMIZER', optimizer)
t_pattern = t_pattern.replace('SCHEDULE', schedule)
t_pattern = t_pattern.replace('VERSION', version)
t_pattern = t_pattern.replace('NETWORK', network)
t_pattern = t_pattern.replace('GINTER', str(ginter))
t_pattern = t_pattern.replace('LR', lr)
t_pattern = t_pattern.replace('RUN', str(run))
barcode = '_'.join([optimizer, version, schedule, str(ginter), lr])
args.append([t_pattern, idx_mapper, design_p])
params.append([optimizer, version, schedule, run, barcode])
def last_slave(args):
t_pattern, idx_mapper, design_p = args[:]
lastrun = join(t_pattern, 'samples_epoch_{0:03d}_generated.pkl'.format(n_epoch-1))
with open(lastrun) as f:
last_sample = cPickle.load(f)
return kl_compare(last_sample, idx_mapper, design_p)
pool = mp.Pool(processes=16)
all_kl=pool.map(last_slave, args)
pool.close()
pool.join()
lastiter_perform = []
for param, kl in izip(params, all_kl):
lastiter_perform.append(param + [kl])
df = pd.DataFrame(lastiter_perform, columns=['optimizer', 'version', 'schedule', 'run', 'method', 'KL'])
df_rename = pd.DataFrame()
df_rename['KL Divergence'] = df['KL']
df_rename['Method'] = [rename_dict['_'.join(x.split('_')[:-1])]+'_'+x.split('_')[-1] for x in df['method']]
median_df = pd.DataFrame()
median_df['Median KL'] = [np.median(df_rename[df_rename['Method']==m]['KL Divergence'])
for m in np.unique(df_rename['Method'])]
median_df['Method'] = np.unique(df_rename['Method'])
plt.figure(figsize=(15,8))
sns.barplot(y='Method', x='Median KL',data=median_df.sort_values('Median KL'))
plt.show()
plt.figure(figsize=(15,8))
ax = sns.boxplot(x='KL Divergence', y='Method', data=df_rename, order=median_df.sort_values('Median KL')['Method'])
ax.get_figure().savefig('experimental_lastepoch.eps', bbox_inches='tight')
plt.show()
```
|
github_jupyter
|
from os.path import join,exists,realpath,dirname,basename
from os import makedirs,listdir, system
import numpy as np, cPickle, editdistance, seaborn as sns
import matplotlib.pyplot as plt, pandas as pd, itertools, glob, h5py
from scipy.stats import entropy
from matplotlib.font_manager import FontProperties
from IPython.display import display
from collections import defaultdict
from IPython.display import display
from itertools import izip
from scipy.stats import ranksums
import multiprocessing as mp
sns.set_style("whitegrid")
%matplotlib inline
rundir = '/cluster/zeng/code/research/OFTL-GAN/runs/motif_spikein_ATAGGC_50runs'
mapper = ['A','C','G','T']
re_mapper = {'A':0,'C':1,'G':2,'T':3}
def decode(data):
mydict = ['A', 'C', 'G', 'T']
out = []
for x in data:
out.append(''.join([mydict[np.argmax(y)] for y in x.squeeze().transpose()]))
return np.asarray(out)
def retrieve_bestrun_samples(pattern, use_abs=False):
runs = glob.glob(pattern)
bestloss = None
bestrun = ''
for run in runs:
with open(join(run, 'history.pkl')) as f:
history = cPickle.load(f)
test_disc_loss = history['test']['discriminator']
if type(test_disc_loss[0]) is not float and type(test_disc_loss[0]) is not np.float64:
test_disc_loss = [x[0] for x in test_disc_loss]
if use_abs:
test_disc_loss = np.abs(test_disc_loss)
t_argbest = np.argmin(test_disc_loss)
if bestloss is None or test_disc_loss[t_argbest] < bestloss:
bestloss = test_disc_loss[t_argbest]
bestrun = join(run, 'samples_epoch_{0:03d}_generated.pkl'.format(t_argbest))
with open(bestrun) as f:
best_sample = cPickle.load(f)
return best_sample, bestrun, bestloss
def plot_acgt_distr(rundir, epoch_num, seqlen):
all_distr = {'A':[], 'C':[], 'G':[], 'T':[]}
for epoch in range(epoch_num):
with open(join(rundir, 'samples_epoch_{0:03d}_generated.pkl'.format(epoch))) as f:
sample = cPickle.load(f).squeeze().swapaxes(1,2)
distr = defaultdict(int)
for x in sample:
for y in x:
distr[mapper[y.argmax()]]+=1
for x in ['A', 'C', 'G', 'T']:
all_distr[x].append(distr[x]/float(len(sample)))
for x in ['A', 'C', 'G', 'T']:
plt.plot(range(epoch_num), all_distr[x], label=x)
plt.legend()
def data2prob(data_ori,idx_mapper):
    # Calculate the empirical kmer distribution from a generated dataset
out = np.zeros(len(idx_mapper.keys()))
data = data_ori.squeeze().swapaxes(1,2)
for x in data:
t_data = [ mapper[y.argmax()] for y in x]
out[idx_mapper[''.join(t_data)]] += 1
return out/sum(out)
def kl_prepare(motif_file, seqlen):
    # All possible kmers
candidate = [''.join(p) for p in itertools.product(mapper, repeat=seqlen)]
# Map each kmer to its index in the list
idx_mapper = dict()
for idx,x in enumerate(candidate):
idx_mapper[x] = idx
# Read the motif
with open(motif_file) as f:
f.readline()
motif_mat = [map(float,x.split()) for x in f]
# Calculate the expected probability of each kmer
design_p = np.zeros(len(candidate))
for idx,x in enumerate(candidate):
t_p = 1.0
for cidx, c in enumerate(list(x)):
t_p *= motif_mat[cidx][re_mapper[c]]
design_p[idx] = t_p
return idx_mapper, design_p
def kl_compare(samples, idx_mapper, design_p):
pred_p = data2prob(samples, idx_mapper)
return entropy(pred_p, design_p) - entropy(pred_p, pred_p)
def comparePWM(samples, motif_file, seqlen, ):
# Read the motif
with open(motif_file) as f:
f.readline()
motif_mat = [map(float,x.split()) for x in f]
empirical = np.zeros((seqlen, 4))
mydict = {'A':0, 'C':1, 'G':2, 'T':3}
print 1
for s in samples:
t_s = np.copy(s).squeeze().transpose()
for pos, c in enumerate(t_s):
empirical[pos][np.argmax(c)] += 1
print 2
for i in range(seqlen):
empirical[i] /= sum(empirical[i])
diff = empirical - motif_mat
sns.heatmap(diff)
plt.show()
dataset = ''
motif_file = join(rundir, '../../data/motifs/ATAGGC.pwm')
motif_len = 6
versions = ['1', '2' , '3']
schedules = ['None', 'adagrad', 'nesterov0.9', 'momentum0.9', 'adam']
optimizers = ['OMDA', 'SGD', 'optimAdam']
network = 'wgan'
ginters = [1, 5]
lrs = ['5e-02', '5e-03', '5e-04']
n_epoch = 100
rename_dict = {'SOMDv1':'SOMD_ver1',
'SOMDv2':'SOMD_ver2',
'SOMDv3':'SOMD_ver3',
'SGD':'SGD_vanilla',
'adagrad':'SGD_adagrad',
'nesterov':'SGD_nesterov',
'momentum':'SGD_momentum',
'SOMDv1_ratio1':'SOMD_ver1_ratio1',
'SOMDv2_ratio1':'SOMD_ver2_ratio1',
'SOMDv3_ratio1':'SOMD_ver3_ratio1',
'adam':'SGD_adam',
'optimAdam':'optimAdam',
'optimAdam_ratio1':'optimAdam_ratio1',}
pattern = join(rundir, 'runRUN', 'OPTIMIZER_SCHEDULE_vVERSION_lr*_NETWORK_ginterGINTER_gp1e-4')
idx_mapper, design_p = kl_prepare(motif_file, motif_len)
args = []
params = []
for run in range(50):
for optimizer in optimizers:
versions2use = versions if optimizer == 'OMDA' else ['0']
for version in versions2use:
schedule2use = schedules if optimizer == 'SGD' else ['None']
ginter2use = ginters if optimizer != 'SGD' else [5]
for ginter in ginter2use:
for schedule in schedule2use:
t_pattern = pattern.replace('OPTIMIZER', optimizer)
t_pattern = t_pattern.replace('SCHEDULE', schedule)
t_pattern = t_pattern.replace('VERSION', version)
t_pattern = t_pattern.replace('NETWORK', network)
t_pattern = t_pattern.replace('GINTER', str(ginter))
t_pattern = t_pattern.replace('RUN', str(run))
#t_pattern = t_pattern.replace('LR', str(lrs[0]))
barcode = '_'.join([optimizer, version, schedule, str(ginter)])
args.append([t_pattern, idx_mapper, design_p])
params.append([optimizer, version, schedule, run, barcode])
def lowest_loss_slave(args):
t_pattern, idx_mapper, design_p = args[:]
best_sample, bestrun, bestloss = retrieve_bestrun_samples(t_pattern)
return kl_compare(best_sample, idx_mapper, design_p)
pool = mp.Pool(processes=16)
all_kl=pool.map(lowest_loss_slave, args)
pool.close()
pool.join()
bestval_perform = []
for param, kl in izip(params, all_kl):
bestval_perform.append(param + [kl])
df = pd.DataFrame(bestval_perform, columns=['optimizer', 'version', 'schedule', 'run', 'method', 'KL'])
df_rename = pd.DataFrame()
df_rename['KL Divergence'] = df['KL']
df_rename['Method'] = [rename_dict[x] for x in df['method']]
median_df = pd.DataFrame()
median_df['Median KL'] = [np.median(df_rename[df_rename['Method']==m]['KL Divergence'])
for m in np.unique(df_rename['Method'])]
median_df['Method'] = np.unique(df_rename['Method'])
ax=sns.boxplot(x='KL Divergence', y='Method', data=df_rename, order=median_df.sort_values('Median KL')['Method'])
ax.get_figure().savefig('experimental_lowestval.eps',bbox_inches='tight')
pattern = join(rundir, 'runRUN', 'OPTIMIZER_SCHEDULE_vVERSION_lrLR_NETWORK_ginterGINTER_gp1e-4')
idx_mapper, design_p = kl_prepare(motif_file, motif_len)
args = []
params = []
for run in range(50):
for optimizer in optimizers:
versions2use = versions if optimizer =='OMDA' else ['0']
for version in versions2use:
schedule2use = schedules if optimizer == 'SGD' else ['None']
ginter2use = ginters if optimizer != 'SGD' else [5]
for ginter in ginter2use:
for schedule in schedule2use:
for lr in lrs:
t_pattern = pattern.replace('OPTIMIZER', optimizer)
t_pattern = t_pattern.replace('SCHEDULE', schedule)
t_pattern = t_pattern.replace('VERSION', version)
t_pattern = t_pattern.replace('NETWORK', network)
t_pattern = t_pattern.replace('GINTER', str(ginter))
t_pattern = t_pattern.replace('LR', lr)
t_pattern = t_pattern.replace('RUN', str(run))
barcode = '_'.join([optimizer, version, schedule, str(ginter), lr])
args.append([t_pattern, idx_mapper, design_p])
params.append([optimizer, version, schedule, run, barcode])
def last_slave(args):
t_pattern, idx_mapper, design_p = args[:]
lastrun = join(t_pattern, 'samples_epoch_{0:03d}_generated.pkl'.format(n_epoch-1))
with open(lastrun) as f:
last_sample = cPickle.load(f)
return kl_compare(last_sample, idx_mapper, design_p)
pool = mp.Pool(processes=16)
all_kl=pool.map(last_slave, args)
pool.close()
pool.join()
lastiter_perform = []
for param, kl in izip(params, all_kl):
lastiter_perform.append(param + [kl])
df = pd.DataFrame(lastiter_perform, columns=['optimizer', 'version', 'schedule', 'run', 'method', 'KL'])
df_rename = pd.DataFrame()
df_rename['KL Divergence'] = df['KL']
df_rename['Method'] = [rename_dict['_'.join(x.split('_')[:-1])]+'_'+x.split('_')[-1] for x in df['method']]
median_df = pd.DataFrame()
median_df['Median KL'] = [np.median(df_rename[df_rename['Method']==m]['KL Divergence'])
for m in np.unique(df_rename['Method'])]
median_df['Method'] = np.unique(df_rename['Method'])
plt.figure(figsize=(15,8))
sns.barplot(y='Method', x='Median KL',data=median_df.sort_values('Median KL'))
plt.show()
plt.figure(figsize=(15,8))
ax = sns.boxplot(x='KL Divergence', y='Method', data=df_rename, order=median_df.sort_values('Median KL')['Method'])
ax.get_figure().savefig('experimental_lastepoch.eps', bbox_inches='tight')
plt.show()
| 0.535098 | 0.46563 |
# 2-3 Intro Python Practice
## power iteration of sequences
<font size="5" color="#00A0B2" face="verdana"> <B>Student will be able to</B></font>
- Iterate through Lists using **`for`** and **`in`**
- Use **`for` *`count`* `in range()`** in looping operations
- Use list methods **`.extend()`, `+, .reverse(), .sort()`**
- convert between lists and strings using **`.split()`** and **`.join()`**
- cast strings to lists **/** direct multiple print outputs to a single line: **`print("hi", end='')`** (previewed in the short example below)
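A quick preview of several of these operations, using throwaway example values:
```
# preview only - each technique gets its own task below
nums = [3, 1, 2]
nums.extend([5, 4])                    # combine lists in place
nums.sort()                            # -> [1, 2, 3, 4, 5]
print(",".join(str(n) for n in nums))  # join list items into one string
letters = list("abc")                  # cast a string to a list -> ['a', 'b', 'c']
for ch in letters:
    print(ch, end=' ')                 # keep multiple prints on one line
```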
#
<font size="6" color="#B24C00" face="verdana"> <B>Task 1</B></font>
## list iteration: `for in`
### `for item in list:`
```
# [ ] print out the "physical states of matter" (matter_states) in 4 sentences using list iteration
# each sentence should be of the format: "Solid - is state of matter #1"
matter_states = ['solid', 'liquid', 'gas', 'plasma']
sequence = 0
for matter in matter_states:
    sequence += 1
    print(matter.capitalize(), "- is state of matter #" + str(sequence))
# [ ] iterate the list (birds) to see any bird names start with "c" and remove that item from the list
# print the birds list before and after removals
birds = ["turkey", "hawk", "chicken", "dove", "crow"]
print(birds,'\n')
for bird in birds[:]:  # iterate over a copy so removing items doesn't skip any
if bird.lower().startswith('c'):
birds.remove(bird)
print(birds)
# the team makes 1pt, 2pt or 3pt baskets
# [ ] print the occurrence of each type of basket (1pt, 2pt, 3pt) & total points using the list baskets
baskets = [2,2,2,1,2,1,3,3,1,2,2,2,2,1,3]
total_points = 0
for points in baskets:
    total_points += points
for pt in [1, 2, 3]:
    print(str(pt) + "pt baskets:", baskets.count(pt))
print("total points:", total_points)
```
#
<font size="6" color="#B24C00" face="verdana"> <B>Task 2</B></font>
## iteration with `range(start)` & `range(start,stop)`
```
# [ ] using range() print "hello" 4 times
greeting = "hello"
for i in range(4):
    print(greeting)
# [ ] find spell_list length
# [ ] use range() to iterate each half of spell_list
# [ ] label & print the first and second halves
spell_list = ["Tuesday", "Wednesday", "February", "November", "Annual", "Calendar", "Solstice"]
spell_len = len(spell_list)
half = spell_len // 2
print("First half:")
for i in range(0, half):
    print(spell_list[i])
print("Second half:")
for i in range(half, spell_len):
    print(spell_list[i])
# [ ] build a list of numbers from 20 to 29: twenties
# append each number to twenties list using range(start,stop) iteration
# [ ] print twenties
twenties = []
for num in range(20,30):
    twenties.append(num)
print(twenties)
# [ ] iterate through the numbers populated in the list twenties and add each number to a variable: total
# [ ] print total
total = 0
for add in twenties:
    total += add
print(total)
# check your answer above using range(start,stop)
# [ ] iterate each number from 20 to 29 using range()
# [ ] add each number to a variable (total) to calculate the sum
# should match earlier task
total = 0
for more in range(20,30):
total = total + more
print(total)
addition = []
for more_more in range(20, 30):
more_more = str(more_more)
addition.append(more_more)
print('+'.join(addition), '=', total)
```
#
<font size="6" color="#B24C00" face="verdana"> <B>Task 3</B></font>
## iteration with `range(start,stop,skip)`
```
# [ ] create a list of odd numbers (odd_nums) from 1 to 25 using range(start,stop,skip)
# [ ] print odd_nums
# hint: odd numbers are 2 digits apart
odd_nums = []
for num in range(1,26,2):
    odd_nums.append(num)
print(odd_nums)
# [ ] create a Descending list of odd numbers (odd_nums) from 25 to 1 using range(start,stop,skip)
# [ ] print odd_nums, output should resemble [25, 23, ...]
odd_nums = []
for num in range(25,0,-2):
    odd_nums.append(num)
print(odd_nums)
# the list, elements, contains the names of the first 20 elements in atomic number order
# [ ] print the even number elements "2 - Helium, 4 - Beryllium,.." in the list with the atomic number
elements = ['Hydrogen', 'Helium', 'Lithium', 'Beryllium', 'Boron', 'Carbon', 'Nitrogen', 'Oxygen', 'Fluorine', \
'Neon', 'Sodium', 'Magnesium', 'Aluminum', 'Silicon', 'Phosphorus', 'Sulfur', 'Chlorine', 'Argon', \
'Potassium', 'Calcium']
for idx in range(1, 20, 2):
    print(idx + 1, '-', elements[idx])
# [ ] # the list, elements_60, contains the names of the first 60 elements in atomic number order
# [ ] print the odd number elements "1 - Hydrogen, 3 - Lithium,.." in the list with the atomic number elements_60
elements_60 = ['Hydrogen', 'Helium', 'Lithium', 'Beryllium', 'Boron', 'Carbon', 'Nitrogen', \
'Oxygen', 'Fluorine', 'Neon', 'Sodium', 'Magnesium', 'Aluminum', 'Silicon', \
'Phosphorus', 'Sulfur', 'Chlorine', 'Argon', 'Potassium', 'Calcium', 'Hydrogen', \
'Helium', 'Lithium', 'Beryllium', 'Boron', 'Carbon', 'Nitrogen', 'Oxygen', 'Fluorine', \
'Neon', 'Sodium', 'Magnesium', 'Aluminum', 'Silicon', 'Phosphorus', 'Sulfur', 'Chlorine', \
'Argon', 'Potassium', 'Calcium', 'Scandium', 'Titanium', 'Vanadium', 'Chromium', 'Manganese', \
'Iron', 'Cobalt', 'Nickel', 'Copper', 'Zinc', 'Gallium', 'Germanium', 'Arsenic', 'Selenium', \
'Bromine', 'Krypton', 'Rubidium', 'Strontium', 'Yttrium', 'Zirconium']
for idx in range(0, 60, 2):
    print('#', idx + 1, '-', elements_60[idx])
```
#
<font size="6" color="#B24C00" face="verdana"> <B>Task 4</B></font>
## combine lists with `+` and `.extend()`
```
# [ ] print the combined lists (numbers_1 & numbers_2) using "+" operator
numbers_1 = [20, 21, 22, 23, 24, 25, 26, 27, 28, 29]
# pythonic casting of a range into a list
numbers_2 = list(range(30,50,2))
print("numbers_1:",numbers_1)
print("numbers_2",numbers_2)
combo_num = numbers_1 + numbers_2
print(combo_num)
# [ ] print the combined element lists (first_row & second_row) using ".extend()" method
first_row = ['Hydrogen', 'Helium']
second_row = ['Lithium', 'Beryllium', 'Boron', 'Carbon', 'Nitrogen', 'Oxygen', 'Fluorine', 'Neon']
print("1st Row:", first_row)
print("2nd Row:", second_row)
first_row.extend(second_row)
print(first_row)
```
## Project: Combine 3 element rows
Choose to use **`+` or `.extend()`** to build output similar to
```
The 1st three rows of the Period Table of Elements contain:
['Hydrogen', 'Helium', 'Lithium', 'Beryllium', 'Boron', 'Carbon', 'Nitrogen', 'Oxygen', 'Fluorine', 'Neon', 'Sodium', 'Magnesium', 'Aluminum', 'Silicon', 'Phosphorus', 'Sulfur', 'Chlorine', 'Argon']
The row breakdown is
Row 1: Hydrogen, Helium
Row 2: Lithium, Beryllium, Boron, Carbon, Nitrogen, Oxygen, Fluorine, Neon
Row 3: Sodium, Magnesium, Aluminum, Silicon, Phosphorus, Sulfur, Chlorine, Argon
```
```
# [ ] create the program: combined 3 element rows
elem_1 = ['Hydrogen', 'Helium']
elem_2 = ['Lithium', 'Beryllium', 'Boron', 'Carbon', 'Nitrogen', 'Oxygen', 'Fluorine', 'Neon']
elem_3 = ['Sodium', 'Magnesium', 'Aluminum', 'Silicon', 'Phosphorus', 'Sulfur', 'Chlorine', 'Argon']
print("The 1st three rows of the Periodic Table of Elements contain: ", '\n', elem_1 + elem_2 + elem_3)
print("The row breakdown is")
print("Row 1:", ', '.join(elem_1))
print("Row 2:", ', '.join(elem_2))
print("Row 3:", ', '.join(elem_3))
# [ ] .extend() jack_jill with "next_line" string - print the result
jack_jill = ['Jack', 'and', 'Jill', 'went', 'up', 'the', 'hill']
next_line = ['To', 'fetch', 'a', 'pail', 'of', 'water']
jack_jill.extend(next_line)
print(' '.join(jack_jill))
```
#
<font size="6" color="#B24C00" face="verdana"> <B>Task 5</B></font>
## .reverse() : reverse a list in place
```
# [ ] use .reverse() to print elements starting with "Calcium", "Chlorine",... in reverse order
elements = ['Hydrogen', 'Helium', 'Lithium', 'Beryllium', 'Boron', 'Carbon', 'Nitrogen', 'Oxygen', 'Fluorine', \
'Neon', 'Sodium', 'Magnesium', 'Aluminum', 'Silicon', 'Phosphorus', 'Sulfur', 'Chlorine', 'Argon', \
'Potassium', 'Calcium']
elements.reverse()
for e in elements:
    print(e)
# [ ] reverse order of the list... Then print only words that are 8 characters or longer from the now reversed order
spell_list = ["Tuesday", "Wednesday", "February", "November", "Annual", "Calendar", "Solstice"]
spell_list.reverse()
print(spell_list, '\n')
new_list = []
for word in spell_list:
    if len(word) >= 8:
        new_list.append(word)
print(new_list)
```
#
<font size="6" color="#B24C00" face="verdana"> <B>Task 6</B></font>
## .sort() and sorted()
```
# [ ] sort the list element, so names are in alphabetical order and print elements
elements = ['Hydrogen', 'Helium', 'Lithium', 'Beryllium', 'Boron', 'Carbon', 'Nitrogen', 'Oxygen', 'Fluorine', \
'Neon', 'Sodium', 'Magnesium', 'Aluminum', 'Silicon', 'Phosphorus', 'Sulfur', 'Chlorine', 'Argon', \
'Potassium', 'Calcium']
# [ ] print the list, numbers, sorted and then below print the original numbers list
numbers = [2,2,2,1,2,1,3,3,1,2,2,2,2,1,3]
print(sorted(numbers), '\n')
print(numbers)
```
#
<font size="6" color="#B24C00" face="verdana"> <B>Task 7</B></font>
## Converting a string to a list with `.split()`
```
# [ ] split the string, daily_fact, into a list of word strings: fact_words
# [ ] print each string in fact_words in upper case on its own line
daily_fact = "Did you know that there are 1.4 billion students in the world?"
fact_words = daily_fact.split()
print(daily_fact, '\n')
for word in fact_words:
    print(word.upper())
# [ ] convert the string, code_tip, into a list made from splitting on the letter "o"
print(daily_fact, '\n')
code_tip = daily_fact.split('o')
print(code_tip)
# [ ] split poem on "b" to create a list: poem_words
# [ ] print poem_words by iterating the list
poem = "The bright brain, has bran!"
poem_words = poem.split('b')
for b in poem_words:
print(b)
```
#
<font size="6" color="#B24C00" face="verdana"> <B>Task 8</B></font>
## `.join()`
### build a string from a list
```
# [ ] print a comma separated string output from the list of Halogen elements using ".join()"
halogens = ['Chlorine', 'Fluorine', 'Bromine', 'Iodine']
print(','.join(halogens))
# [ ] split the sentence, code_tip, into a words list
# [ ] print the joined words in the list with no spaces in-between
# [ ] Bonus: capitalize each word in the list before .join()
code_tip ="Read code aloud or explain the code step by step to a peer"
words = code_tip.split()
cap_words = []
for word in words:
    cap_words.append(word.capitalize())
print("".join(cap_words))
```
#
<font size="6" color="#B24C00" face="verdana"> <B>Task 9</B></font>
## `list(string)` & `print("hello",end=' ')`
- **Cast a string to a list**
- **print to the same line**
```
# [ ] cast the long_word into individual letters list
# [ ] print each letter on a line
long_word = 'decelerating'
print(list(long_word))
for long in long_word:
print(long)
# [ ] use end= in print to output each string in questions with a "?" and on new lines
questions = ["What's the closest planet to the Sun", "How deep do Dolphins swim", "What time is it"]
for q in questions:
print(q, end= '?\n')
# [ ] print each item in foot bones
# - capitalized, both words if two word name
# - separated by a comma and space
# - and keeping on a single print line
foot_bones = ["calcaneus", "talus", "cuboid", "navicular", "lateral cuneiform",
"intermediate cuneiform", "medial cuneiform"]
for foot in foot_bones:
    # .title() capitalizes every word, so it handles one- and two-word names
    print(foot.title(), end=', ')
```
## Gaussian Transformation with Feature-Engine
Scikit-learn has recently released transformers to perform "Gaussian mappings", as it calls these variable transformations. The PowerTransformer supports the Box-Cox and Yeo-Johnson transformations, and with the FunctionTransformer we can apply any function we want.
These transformers do not, by themselves, let us select columns, but we can do so using a third transformer, the ColumnTransformer.
Another thing to keep in mind is that Scikit-learn transformers return NumPy arrays rather than dataframes, so we need to be mindful of the column order so as not to mix up our features.
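For comparison with the Feature-engine API used below, here is a minimal sketch of that scikit-learn route. The tiny dataframe and its values are made up purely for illustration; only `LotArea` and `GrLivArea` match the dataset used later.
```
# Sketch only: scikit-learn's PowerTransformer applied to selected columns
# through a ColumnTransformer (not part of the Feature-engine demo below).
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import PowerTransformer

toy = pd.DataFrame({'LotArea': [8450, 9600, 11250],
                    'GrLivArea': [1710, 1262, 1786],
                    'OverallQual': [7, 6, 7]})

ct = ColumnTransformer(
    transformers=[('boxcox', PowerTransformer(method='box-cox'),
                   ['LotArea', 'GrLivArea'])],
    remainder='passthrough')

# ColumnTransformer returns a NumPy array, so we rebuild the dataframe
# ourselves and keep track of the column order (transformed columns first).
arr = ct.fit_transform(toy)
toy_tf = pd.DataFrame(arr, columns=['LotArea', 'GrLivArea', 'OverallQual'])
print(toy_tf)
```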
## Important
Box-Cox and Yeo-Johnson transformations need to learn their parameters from the data. Therefore, as always, before attempting any transformation it is important to divide the dataset into train and test set.
In this demo, I will not do so for simplicity, but when using this transformation in your pipelines, please make sure you do so.
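A minimal sketch of that split, assuming the House Prices csv loaded below and its `SalePrice` target column:
```
# Sketch only: split before fitting, so transformation parameters
# (e.g. the Box-Cox lambdas) are learned from the train set alone.
import pandas as pd
from sklearn.model_selection import train_test_split

data = pd.read_csv('../houseprice.csv')

X_train, X_test, y_train, y_test = train_test_split(
    data.drop('SalePrice', axis=1), data['SalePrice'],
    test_size=0.3, random_state=0)
```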
## In this demo
We will see how to implement variable transformations using Feature-engine and the House Prices dataset.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
import feature_engine.transformation as vt
data = pd.read_csv('../houseprice.csv')
data.head()
```
## Plots to assess normality
To visualise the distribution of the variables, we plot a histogram and a Q-Q plot. In a Q-Q plot, if the variable is normally distributed, its values should fall along the 45-degree line when plotted against the theoretical quantiles. We discussed this extensively in Section 3 of this course.
```
# plot the histograms to have a quick look at the variable distribution
# histogram and Q-Q plots
def diagnostic_plots(df, variable):
# function to plot a histogram and a Q-Q plot
# side by side, for a certain variable
plt.figure(figsize=(15,6))
plt.subplot(1, 2, 1)
df[variable].hist()
plt.subplot(1, 2, 2)
stats.probplot(df[variable], dist="norm", plot=plt)
plt.show()
diagnostic_plots(data, 'LotArea')
diagnostic_plots(data, 'GrLivArea')
```
## LogTransformer
```
lt = vt.LogTransformer(variables = ['LotArea', 'GrLivArea'])
lt.fit(data)
# variables that will be transformed
lt.variables_
data_tf = lt.transform(data)
diagnostic_plots(data_tf, 'LotArea')
# transformed variable
diagnostic_plots(data_tf, 'GrLivArea')
```
## ReciprocalTransformer
```
rt = vt.ReciprocalTransformer(variables = ['LotArea', 'GrLivArea'])
rt.fit(data)
data_tf = rt.transform(data)
diagnostic_plots(data_tf, 'LotArea')
# transformed variable
diagnostic_plots(data_tf, 'GrLivArea')
```
## ExponentialTransformer
```
et = vt.PowerTransformer(variables = ['LotArea', 'GrLivArea'])
et.fit(data)
data_tf = et.transform(data)
diagnostic_plots(data_tf, 'LotArea')
# transformed variable
diagnostic_plots(data_tf, 'GrLivArea')
```
## BoxCoxTransformer
```
bct = vt.BoxCoxTransformer(variables = ['LotArea', 'GrLivArea'])
bct.fit(data)
# these are the exponents for the BoxCox transformation
bct.lambda_dict_
data_tf = bct.transform(data)
diagnostic_plots(data_tf, 'LotArea')
# transformed variable
diagnostic_plots(data_tf, 'GrLivArea')
```
## Yeo-Johnson Transformer
Yeo-Johnson Transformer will be available in the next release of Feature-engine!!!
```
yjt = vt.YeoJohnsonTransformer(variables = ['LotArea', 'GrLivArea'])
yjt.fit(data)
# these are the exponents for the Yeo-Johnson transformation
yjt.lambda_dict_
data_tf = yjt.transform(data)
diagnostic_plots(data_tf, 'LotArea')
# transformed variable
diagnostic_plots(data_tf, 'GrLivArea')
```
```
import time
import matplotlib.pyplot as plt
import pandas as pd
import pickle
import simpy
from sim.requestgen import *
```
## RPS profiles
RPS profiles yield a target average requests-per-second value at a given simulation time.
The code below runs a simulation for `until` seconds and samples the rps profile every 0.5 seconds to generate the graph.
```
def run_rps_profile(env, profile, until, time_step=0.5):
x = list()
y = list()
def generate_profile_data():
while True:
rps = next(profile)
x.append(env.now)
y.append(rps)
yield env.timeout(time_step)
env.process(generate_profile_data())
env.run(until=until)
return x,y
```
### constant profile
does what you'd expect.
```
env = simpy.Environment()
profile = constant_rps_profile(50)
t, rps = run_rps_profile(env, profile, 200)
plt.plot(t, rps)
plt.ylim(0,100)
plt.show()
```
### sine profile
replicates a sine wave with a given peak (max rps) and period (simulation time in seconds between peaks).
```
env = simpy.Environment()
profile = sine_rps_profile(env, 80, period=100)
t, rps = run_rps_profile(env, profile, 200)
plt.plot(t, rps)
plt.ylim(0, 100)
plt.show()
```
### random walk
creates a random-walk pattern with
a starting rps value (`mu`),
a standard deviation (`sigma`; higher values make the walk spikier),
and `max_rps`, the cap above which sampled values are rejected.
```
profile = randomwalk_rps_profile(mu=10, sigma=1, max_rps=100)
t, rps = run_rps_profile(simpy.Environment(), profile, 200)
plt.plot(t, rps)
plt.ylim(0, 100)
plt.show()
profile = randomwalk_rps_profile(mu=10, sigma=5, max_rps=100)
t, rps = run_rps_profile(simpy.Environment(), profile, 200)
plt.plot(t, rps)
plt.ylim(0, 100)
plt.show()
```
## Arrival profiles
rps profiles can be decorated with arrival profiles to get a more realistic request pattern.
The code runs a simulation for `until` simulation seconds and returns a data frame where each record contains the
current sim time and the interarrival time to the next event.
```
def run_arrival_profile(env, ia_gen, until):
x = list()
y = list()
def event_generator():
while True:
ia = next(ia_gen)
x.append(env.now)
y.append(ia)
yield env.timeout(ia)
then = time.time()
env.process(event_generator())
env.run(until=until)
print('simulating %d events took %.2f sec' % (len(x), time.time() - then))
df = pd.DataFrame(data = {'simtime': x, 'ia': y}, index=pd.DatetimeIndex(pd.to_datetime(x,unit='s',origin='unix')))
return df
profile = expovariate_arrival_profile(constant_rps_profile(rps=20))
df = run_arrival_profile(simpy.Environment(), profile, until=3600)
x1 = df['ia'].rolling('1s').count()
x2 = df['ia'].rolling('60s').count() / 60
plt.plot(x1)
plt.plot(x2)
plt.show()
```
### with varying rps
with some rps profiles that can reach 0 rps, it is necessary to use a `max_ia` value to guard against `rps==0`
(where `ia = 1/rps` and would otherwise lead to math errors).
The `max_ia` is, in other words, the longest possible gap between events (e.g., 120 seconds).
```
env = simpy.Environment()
profile = expovariate_arrival_profile(sine_rps_profile(env, max_rps=40, period=90), max_ia=35)
df = run_arrival_profile(env, profile, until=2000)
x1 = df['ia'].rolling('1s').count()
x2 = df['ia'].rolling('60s').count() / 60
plt.plot(x1)
plt.plot(x2)
plt.show()
import matplotlib.dates as dates
fig, axs = plt.subplots(1,1, figsize=(6.8,4.27))
env = simpy.Environment()
profile = expovariate_arrival_profile(constant_rps_profile(50))
df = run_arrival_profile(env, profile, until=2000)
x1 = df['ia'].rolling('1s').count()
x2 = df['ia'].rolling('60s').count() / 60
plt.plot(x1)
plt.plot(x2)
plt.gca().xaxis.set_major_locator(dates.MinuteLocator(byminute=[0,5,10,15,20,25,30,45], interval = 1))
plt.gca().xaxis.set_major_formatter(dates.DateFormatter('%H:%M'))
plt.ylabel('requests per second')
plt.xlabel('Time')
plt.show()
env = simpy.Environment()
profile = expovariate_arrival_profile(sine_rps_profile(env, 80, period=100),max_ia=240)
df = run_arrival_profile(env, profile, until=3600)
x1 = df['ia'].rolling('1s').count()
x2 = df['ia'].rolling('60s').count() / 60
plt.plot(x1)
plt.plot(x2)
plt.show()
```
# Save and replay profile
In the first step you create a profile with profile generator helper functions.
You can either create your own Environment instance or let `save_requests` generate a default simpy Environment.
```
profile = lambda env: expovariate_arrival_profile(sine_rps_profile(env, 80, period=100),max_ia=240)
file = '/tmp/exp_profile.pkl'
save_requests(profile, duration=100, file=file)
with open(file, 'rb') as fd:
print(pickle.load(fd)[:5])
```
Afterwards, use `pre_recorded_profile` as interarrival generator.
```
profile = pre_recorded_profile(file)
df = run_arrival_profile(simpy.Environment(), profile, until=100)
x1 = df['ia'].rolling('1s').count()
x2 = df['ia'].rolling('60s').count() / 60
plt.plot(x1)
plt.plot(x2)
plt.show()
```
```
%run ../../common/import_all.py
from common.setup_notebook import set_css_style, setup_matplotlib, config_ipython
config_ipython()
setup_matplotlib()
set_css_style()
```
# Setting up neural networks
The way a neural network is set up changes the way it performs. Note that in general these algorithms have lots of parameters (all the weights and biases), and it is well known that with enough parameters you can fit anything: overfitting is a very common problem with neural networks. The discussion here loosely follows chapter 2 of Nielsen's excellent book [[1]](#nielsen).
## The cost function
Following gradient descent, weights and biases of a network change proportionally to the derivatives of the cost function; if these are small, learning will be slow. Now, a quadratic cost function, whose form would be
$$
C \propto (y-f(wx+b))^2 \ ,
$$
$f$ being the network prediction and $y$ the actual value, has derivatives $\frac{\partial C}{\partial w} \propto (y-f) f' x$ and $\frac{\partial C}{\partial b} \propto (y-f)f'$. Now, with sigmoid neurons the derivative of the sigmoid is very small when the output is close to 0 or 1, as the curve flattens there, and this makes the learning quite slow.
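A quick numeric illustration of that slow-down, for a single sigmoid neuron with made-up numbers (a sketch, not code from [1]):
```
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def quadratic_grad_w(x, y, w, b):
    # dC/dw for C = 0.5 * (f - y)^2 with f = sigmoid(w*x + b)
    f = sigmoid(w * x + b)
    return (f - y) * f * (1 - f) * x

# Target y = 0, input x = 1. A badly wrong but saturated neuron (w = b = 2,
# output ~0.98) receives a much smaller gradient than a mildly wrong one.
print(quadratic_grad_w(x=1.0, y=0.0, w=2.0, b=2.0))  # ~0.017
print(quadratic_grad_w(x=1.0, y=0.0, w=0.5, b=0.5))  # ~0.14
```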
A typical way to do better on this is to use a different cost function. A choice can be the cross-entropy,
$$
C = - \frac{1}{n} \sum_x [y \log f + (1-y) \log(1-f)] \ ,
$$
$n$ being the number of samples. This choice solves the problem: the sigmoid derivative cancels out, so the cost derivatives are proportional to $f - y$, which is the error in the prediction, making the learning proportional to the error itself. Very convenient: it's like a human who learns faster the more wrong he/she is about something!
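To make the cancellation explicit, for a single sigmoid neuron $f=\sigma(wx+b)$ and using $\sigma'=\sigma(1-\sigma)$:
$$
\frac{\partial C}{\partial w} = - \frac{1}{n} \sum_x \left[ \frac{y}{f} - \frac{1-y}{1-f} \right] f(1-f)\, x = \frac{1}{n} \sum_x (f-y)\, x \ ,
$$
so the $f'$ factor that slowed learning down with the quadratic cost never appears.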
## Regularising
To tackle overfitting, regularisation is a common choice as per machine learning tasks in general. One can apply $L_2$ or $L_1$ regularisation terms as per usual, or another common choice in neural networks is the so-called *dropout*, which works by actually modifying the network itself.
What you do is start with the whole network as is and then remove half of the neurons in the hidden layers (call them "dropout neurons"), choosing them at random. You let training proceed as normal and then repeat the procedure, choosing another random set of dropout neurons. The fact that only half of the hidden neurons were active during training has to be compensated for by halving their outputs when the full network is used. The whole mechanism is a sort of averaged result of training different networks, and the reason it reduces overfitting is that with fewer neurons in the hidden layers the network learns less complexity, and an averaged result is then computed.
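A minimal NumPy sketch of the idea, using the "inverted" dropout variant (it rescales the surviving activations during training instead of halving outputs afterwards, which is equivalent); the activations below are made up:
```
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p_drop=0.5, training=True):
    # Randomly zero a fraction p_drop of the activations. Survivors are
    # scaled by 1/(1-p_drop), so nothing needs rescaling at test time.
    if not training:
        return activations
    mask = rng.random(activations.shape) >= p_drop
    return activations * mask / (1.0 - p_drop)

hidden = np.array([0.2, 0.9, 0.5, 0.7])
print(dropout(hidden))                   # training: about half zeroed out
print(dropout(hidden, training=False))   # inference: unchanged
```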
## Initialising the weights
The easiest way to initialise the network weights is to draw them at random from a Gaussian distribution. If you choose a Gaussian with standard deviation 1 for all of them, you end up with the variable $\sum_j w_j x_j + b$ following a very broad Gaussian. This makes it easy for several neurons to saturate: because of this broadness the probability of large values is not so small, so the result of the sigmoid function will easily be close to 0 or 1.
To prevent this, the usual choice is to draw the weights from a Gaussian with standard deviation equal to $\frac{1}{\sqrt{n_{in}}}$, $n_{in}$ being the number of input weights in the neuron.
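As a sketch, for one fully connected layer (the layer sizes here are only illustrative):
```
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    # Weights ~ N(0, 1/n_in) keep the weighted input sum w.x + b narrowly
    # distributed, so sigmoid neurons are less likely to start saturated.
    weights = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_out, n_in))
    biases = rng.normal(0.0, 1.0, size=(n_out, 1))
    return weights, biases

w, b = init_layer(n_in=784, n_out=30)
print(w.std())   # roughly 1/sqrt(784) ~ 0.036
```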
## Augmenting the training set
The main reason why neural networks typically require lots of training data is because they have to learn so many parameters. Augmenting the training set by perturbing it to create new, artificial data points is a usual trick. For instance, in the case of images, you can slightly rotate them to create new ones.
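A small sketch of that image-rotation trick using SciPy (the random 28x28 arrays stand in for real images such as MNIST digits):
```
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)

def augment_rotations(images, max_degrees=15, copies=2):
    # Return the original images plus `copies` slightly rotated versions of each.
    augmented = list(images)
    for img in images:
        for _ in range(copies):
            angle = rng.uniform(-max_degrees, max_degrees)
            augmented.append(rotate(img, angle, reshape=False, mode='nearest'))
    return np.array(augmented)

fake_digits = rng.random((5, 28, 28))        # stand-in for a batch of images
print(augment_rotations(fake_digits).shape)  # (15, 28, 28)
```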
## References
1. <a name="nielsen"></a> M Nielsen, [**Neural networks and deep learning**](http://neuralnetworksanddeeplearning.com/), 2017
# Chapter 5 - Ensmble Methods
```
import sys
sys.path.append("../")
from utils import *
np.random.seed(7)
```
## Bias-Variance Trade-off
$\newcommand{\coloneqq}{\mathrel{\vcenter{:}}=}$
$\newcommand{\E}{\mathbb{E}}$
$\newcommand{\y}{\mathbf{y}}$
Let us compute the bias-variance trade-off graph for a problem of polynomial fitting. Recall, that the error decomposition for the MSE loss function is: $$ MSE_{\y}\left(\widehat{\y}\right)=\E\left[\left(\widehat{\y}-\y^*\right)^2\right] = Var\left(\widehat{\y}\right) + Bias^2\left(\widehat{\y}\right) $$
Where the bias and variances of estimators are defined as: $$ Bias\left(\widehat{\y}\right) \coloneqq \E\left[\widehat{\y}\right] - \y, \quad Var\left(\widehat{\y}\right)\coloneqq \E\left[\left(\widehat{\y}-\E\left[\widehat{\y}\right]\right)^2\right]$$
As the $\E\left[\widehat{\y}\right]$ is over the selection of the training sets, we will first defined the "ground truth" model and retrieve a set $\mathbf{X},\y$ from it. Then, we will repeatedly sample Gaussian noise $\varepsilon$ and fit a polynomial model over $\mathbf{X},\y+\varepsilon$. In the code below `y_` denotes the true $\y$ values and `y` the responses after adding the noise.
```
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
# Generate data according to a polynomial model of degree 4
model = lambda x: x**4 - 2*x**3 - .5*x**2 + 1
X = np.linspace(-1.6, 2, 60)
y = model(X).astype(np.float64)
X_train, X_test, y_train_, y_test_ = train_test_split(X, y, test_size=.5, random_state=13)
# The following functions receive two matrices of the true values and the predictions
# where rows represent different runs and columns the different responses in the run
def variance(y_pred):
return np.mean(np.var(y_pred - np.mean(y_pred, axis=0), axis=0, ddof=1))
def bias(y_pred, y_true):
mean_y = y_pred.mean(axis=0)
return np.mean((mean_y - y_true)**2)
def error(y_pred, y):
return np.mean((y_pred - y)**2)
ks, repetitions = list(range(11)), 100
biases, variances, errors = np.zeros(len(ks)), np.zeros(len(ks)), np.zeros(len(ks))
for i, k in enumerate(ks):
# Add noise to train and test samples
y_train = y_train_[np.newaxis, :] + np.random.normal(0, 3, size=(repetitions, len(y_train_)))
y_test = y_test_ + np.random.normal(size=len(y_test_))
# Fit model multiple times (each time over a slightly different training sample) and predict over test set
y_preds = np.array([make_pipeline(PolynomialFeatures(k), LinearRegression())\
.fit(X_train.reshape(-1,1), y_train[j,:])\
.predict(X_test.reshape(-1,1))
for j in range(repetitions)])
biases[i], variances[i], errors[i] = bias(y_preds, y_test_), variance(y_preds), error(y_preds, y_test_)
fig = go.Figure([
go.Scatter(x=ks, y=biases, name=r"$Bias^2$"),
go.Scatter(x=ks, y=variances, name=r"$Variance$"),
go.Scatter(x=ks, y=biases+variances, name=r"$Bias^2+Variance$"),
go.Scatter(x=ks, y=errors, name=r"$Generalization\,\,Error$")],
layout=go.Layout(title=r"$\text{Generalization Error Decomposition - Bias-Variance of Polynomial Fitting}$",
                              xaxis=dict(title=r"$\text{Degree of Fitted Polynomial}$"),
width=800, height=500))
fig.write_image(f"../figures/bias_variance_poly.png")
fig.show()
```
## Committee Decisions
Let $X_1,\ldots,X_T\overset{iid}{\sim}Ber\left(p\right)$ taking values in $\left\{\pm1\right\}$, with the probability of each being correct being $p>0.5$. We can bound the probability of the committee being correct by: $$\mathbb{P}\left(\sum X_i > 0\right) \geq 1-\exp\left(-\frac{T}{2p}\left(p-\frac{1}{2}\right)^2\right)$$
Let us verify this bound empirically below, by sampling an increasing number of such Bernoulli random variables for different values of $p$.
```
bound = np.vectorize(lambda p, T: 1-np.exp(-(T/(2*p))*(p-.5)**2))
ps = np.concatenate([[.5001], np.linspace(.55, 1, 14)])
Ts = [1,5,10,15,20,25,50,75,100,125,150,175,200,250,300,400,500,600]
frames = []
for p in ps:
theoretical = bound(p,Ts)
empirical = np.array([[np.sum(np.random.choice([1, -1], T, p=[p, 1-p])) > 0 for _ in range(100)] for T in Ts])
frames.append(go.Frame(data=[go.Scatter(x=Ts, y=theoretical, mode="markers+lines", name="Theoretical Bound",
line=dict(color="grey", dash='dash')),
go.Scatter(x=Ts, y=empirical.mean(axis=1),
error_y = dict(type="data", array=empirical.var(axis=1)),
mode="markers+lines", marker_color="black", name="Empirical Probability")],
layout=go.Layout(
title_text=r"$\text{{Committee Correctness Probability As Function of }}\
T\text{{: }}p={0}$".format(round(p,3)),
xaxis=dict(title=r"$T \text{ - Committee Size}$"),
yaxis=dict(title=r"$\text{Probability of Being Correct}$", range=[0.0001,1.01]))))
fig = go.Figure(data=frames[0]["data"],
frames=frames[1:],
layout=go.Layout(
title=frames[0]["layout"]["title"],
xaxis=frames[0]["layout"]["xaxis"],
yaxis=frames[0]["layout"]["yaxis"],
updatemenus=[dict(type="buttons", buttons=[AnimationButtons.play(frame_duration=1000),
AnimationButtons.pause()])] ))
animation_to_gif(fig, "../figures/committee_decision_correctness.gif", 700, width=600, height=450)
fig.show()
```
In this case, of uncorrelated committee members, we have shown the variance in the committee decision is: $$ Var\left(\sum X_i\right) = \frac{4}{T}p\left(1-p\right)$$
Let us simulate such a scenario and see what empirical variance we obtain.
```
ps = np.concatenate([[.5001], np.linspace(.55, 1, 10)])
Ts = [1,5,10,15,20,25,50,75,100,125,150,175,200,250,300,400,500,600]
results = np.array([np.var(np.random.binomial(Ts, p, (10000, len(Ts))) >= (np.array(Ts)/2), axis=0, ddof=1) for p in ps])
df = pd.DataFrame(results, columns=Ts, index=ps)
fig = go.Figure(go.Heatmap(x=df.columns.tolist(), y=df.index.tolist(), z=df.values.tolist(), colorscale="amp"),
layout=go.Layout(title=r"$\text{Variance of Committee Decision - Independent Members}$",
xaxis=dict(title=r"$T\text{ - Committee Size}$", type="category"),
yaxis=dict(title=r"$p\text{ - Member Correctness Probability}$"),
width=800, height=500))
fig.write_image("../figures/uncorrelated_committee_decision.png")
fig.show()
```
For a set of correlated random variables, with correlation coefficient $\rho$ and variance $\sigma^2$, the variance of the committee's decision is: $$ Var\left(\sum X_i\right) = \rho \sigma^2 + \frac{1}{T}\left(1-\rho\right)\sigma^2 $$
Let us set $\sigma^2$ and investigate the relation between $\rho$ and $T$.
```
sigma = round((lambda p: p*(1-p))(.6), 3)
repeats = 10000
rho = np.linspace(0,1, 10)
Ts = np.array([1,5,10,15,20,25,50,75,100,125,150,175,200,250,300,400,500,600])
variances = np.zeros((len(rho), len(Ts)))
for i, r in enumerate(rho):
# Perform `repetitions` times T Bernoulli experiments
decisions = np.random.binomial(1, sigma, size=(repeats, max(Ts)))
change = np.c_[np.zeros(decisions.shape[0]), np.random.uniform(size=(repeats, max(Ts)-1)) <= r]
correlated_decisions = np.ma.array(decisions, mask=change).filled(fill_value=decisions[:,0][:, None])
correlated_decisions[correlated_decisions == 0] = -1
variances[i,:] = np.var(np.cumsum(correlated_decisions, axis=1) >= 0, axis=0)[Ts-1]
df = pd.DataFrame(variances, columns=Ts, index=rho)
fig = go.Figure(go.Heatmap(x=df.columns.tolist(), y=df.index.tolist(), z=df.values.tolist(), colorscale="amp"),
layout=go.Layout(title=rf"$\text{{Variance of Committee Decision - Correlated Committee Members - Member Decision Variance }}\sigma^2 = {sigma}$",
xaxis=dict(title=r"$T\text{ - Committee Size}$", type="category"),
yaxis=dict(title=r"$\rho\text{ - Correlation Between Members}$"),
width=500, height=300))
fig.write_image("../figures/correlated_committee_decision.png")
fig.show()
```
## Bootstrapping
### Empirical CDF
```
from statsmodels.distributions.empirical_distribution import ECDF
from scipy.stats import norm
data = np.random.normal(size=10000)
frames = []
for m in [5,10, 15, 20, 25, 50, 75, 100, 150, 200, 250, 500, 750, 1000,1500, 2000, 2500, 5000, 7500, 10000]:
ecdf = ECDF(data[:m])
frames.append(go.Frame(
data = [
go.Scatter(x=data[:m], y=[-.1]*m, mode="markers", marker=dict(size=5, color=norm.pdf(data[:m])), name="Samples"),
go.Scatter(x=ecdf.x, y=ecdf.y, marker_color="black", name="Empirical CDF"),
go.Scatter(x=np.linspace(-3,3,100), y=norm.cdf(np.linspace(-3,3,100), 0, 1), mode="lines",
line=dict(color="grey", dash='dash'), name="Theoretical CDF")],
layout = go.Layout(title=rf"$\text{{Empirical CDF of }}m={m}\text{{ Samples Drawn From }}\mathcal{{N}}\left(0,1\right)$")
))
fig = go.Figure(data = frames[0].data, frames=frames[1:],
layout=go.Layout(title=frames[0].layout.title,
updatemenus=[dict(type="buttons", buttons=[AnimationButtons.play(frame_duration=1000),
AnimationButtons.pause()])]))
animation_to_gif(fig, "../figures/empirical_cdf.gif", 700, width=600, height=450)
fig.show()
```
## AdaBoost
```
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
class StagedAdaBoostClassifier(AdaBoostClassifier):
def __init__(self, **kwargs):
        super().__init__(**kwargs)
self.sample_weights = []
def _boost(self, iboost, X, y, sample_weight, random_state):
self.sample_weights.append(sample_weight.copy())
# self.res_list.append(super()._boost(iboost, X, y, sample_weight, random_state))
# return self.res_list[-1]
return super()._boost(iboost, X, y, sample_weight, random_state)
def _iteration_callback(self, iboost, X, y, sample_weight,
estimator_weight = None, estimator_error = None):
self.sample_weights.append(sample_weight.copy())
from sklearn.datasets import make_gaussian_quantiles
# Construct dataset of two sets of Gaussian quantiles
X1, y1 = make_gaussian_quantiles(cov=2., n_samples=50, n_features=2, n_classes=2, random_state=1)
X2, y2 = make_gaussian_quantiles(mean=(3, 3), cov=1.5, n_samples=50, n_features=2, n_classes=2, random_state=1)
X, y = np.concatenate((X1, X2)), np.concatenate((y1, - y2 + 1))
# Form grid of points to use for plotting decision boundaries
lims = np.array([X.min(axis=0), X.max(axis=0)]).T + np.array([-.2, .2])
xx, yy = list(map(np.ravel, np.meshgrid(np.arange(*lims[0], .2), np.arange(*lims[1], .2))))
# Fit AdaBoost classifier over training set
model = StagedAdaBoostClassifier().fit(X, y)
# Retrieve model train error at each iteration of fitting
staged_scores = list(model.staged_score(X, y))
# Predict labels of grid points at each iteration of fitting
staged_predictions = np.array(list(model.staged_predict(np.vstack([xx, yy]).T)))
# Create animation frames
frames = []
for i in range(len(staged_predictions)):
frames.append(go.Frame(
data=[
# Scatter of sample weights
go.Scatter(x=X[:,0], y= X[:,1], mode='markers', showlegend=False, marker=dict(color=y, colorscale=class_colors(2),
size=np.maximum(230*model.sample_weights[i]+1, np.ones(len(model.sample_weights[i]))*5)),
xaxis="x", yaxis="y"),
# Staged decision surface
go.Scatter(x=xx, y=yy, marker=dict(symbol = "square", colorscale=custom, color=staged_predictions[i,:]),
mode='markers', opacity = 0.4, showlegend=False, xaxis="x2", yaxis="y2"),
# Scatter of train samples with true class
go.Scatter(x=X[:,0], y=X[:,1], mode='markers', showlegend=False, xaxis="x2", yaxis="y2",
marker=dict(color=y, colorscale=class_colors(2), symbol=class_symbols[y])),
# Scatter of staged score
go.Scatter(x=list(range(i)), y=staged_scores[:i], mode='lines+markers', showlegend=False, marker_color="black",
xaxis="x3", yaxis="y3")
],
layout = go.Layout(title = rf"$\text{{AdaBoost Training - Iteration }}{i+1}/{len(staged_predictions)}$)"),
traces=[0, 1, 2, 3]))
fig = make_subplots(rows=2, cols=2, row_heights=[350, 200],
subplot_titles=(r"$\text{Sample Weights}$", r"$\text{Decisions Boundaries}$",
r"$\text{Ensemble Train Accuracy}$"),
specs=[[{}, {}], [{"colspan": 2}, None]])\
.add_traces(data=frames[0].data, rows=[1,1,1,2], cols=[1,2,2,1])\
.update(frames = frames)\
.update_layout(title=frames[0].layout.title,
updatemenus = [dict(type="buttons", buttons=[AnimationButtons.play(), AnimationButtons.pause()])],
width=600, height=550, margin=dict(t=100))\
.update_yaxes(range=[min(staged_scores)-.1, 1.1], autorange=False, row=2, col=1)\
.update_xaxes(range=[0, len(frames)], autorange=False, row=2, col=1)
animation_to_gif(fig, "../figures/adaboost.gif", 1000, width=600, height=550)
fig.show()
```
## Tutorial on using a compound database with SQLAlchemy and razi
```
from collections import namedtuple
import csv
import psycopg2
import sys
from mytables import Compound
from razi.rdkit_postgresql.types import Mol, Bfp
from razi.rdkit_postgresql.functions import atompairbv_fp, torsionbv_fp, morganbv_fp, mol_amw
from rdkit import Chem, rdBase
from rdkit.Chem import Draw
from sqlalchemy import create_engine, desc, Column, Index, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
print(sys.version_info)
print(f'RDKit version: {rdBase.rdkitVersion}')
```
### 1. Defining and building the table
So that it can be reused from multiple notebooks, the table that stores chemical structure information is defined in [mytables.py](http://0.0.0.0:8888/edit/tutorial/mytables.py)
and loaded with `from mytables import Compound`.
For example, running the following cell defines and builds the `compounds` table.
```
!python mytables.py
```
Below we explain in detail what kind of table was defined and built.
It helps to keep the code of [mytables.py](http://0.0.0.0:8888/edit/tutorial/mytables.py) open and check it as you go.
For those who already use [SQLAlchemy](http://docs.sqlalchemy.org/en/latest/orm/tutorial.html): tables are defined in `razi` in the same way as in SQLAlchemy,
with the addition of the `Mol` column type for storing chemical structure data.
First, connect to PostgreSQL using the `create_engine` function.
Its argument has the form `postgresql://user:password@host:port/database`.
Here `user=postgres`, `password=(none)`, `host=db`, `port=5432`, and `database=postgres`, so it looks as follows.
```
engine = create_engine('postgresql://postgres@db:5432/postgres')
Base = declarative_base(bind=engine)
```
Create the `Base` class bound to the engine connected to PostgreSQL.
In `SQLAlchemy` and `razi`, creating a new class that inherits from this `Base` class is how you define a table that exists in the database.
Let's actually define a table via a class named `Compound`.
```
class Compound(Base):
__tablename__ = 'compounds'
id = Column(Integer, primary_key=True)
name = Column(String)
structure = Column(Mol)
atompair = Column(Bfp)
torsion = Column(Bfp)
morgan = Column(Bfp)
__table_args__ = (
Index('compounds_structure', 'structure',
postgresql_using='gist'),
)
def __init__(self, name, structure):
self.name = name
if isinstance(structure, Chem.Mol):
self.structure = Chem.MolToSmiles(structure)
elif isinstance(structure, str):
self.structure = structure
self.atompair = atompairbv_fp(self.structure)
self.torsion = torsionbv_fp(self.structure)
self.morgan = morganbv_fp(self.structure, 2)
def __repr__(self):
if isinstance(self.structure, Chem.Mol):
return '(%s) < %s >' % (self.name, Chem.MolToSmiles(self.structure))
return '(%s) < %s >' % (self.name, self.structure)
```
Let's walk through the `Compound` class in detail.
#### 1.1 Defining the table
#### 1.1.1 Table name
```
__tablename__ = 'compounds'
```
First, the table name `compounds` is assigned to the `__tablename__` attribute.
#### 1.1.2 Table columns
Next, each column of the `compounds` table is defined as follows.
```
id = Column(Integer, primary_key=True)
name = Column(String)
structure = Column(Mol)
atompair = Column(Bfp)
torsion = Column(Bfp)
morgan = Column(Bfp)
```
The `compounds` table is defined to have columns named `id`, `name`, and `structure`.
(The `atompair`, `torsion`, and `morgan` columns are covered in [razi-tutorial2-japanese](razi-tutorial2-japanese.ipynb) and are not explained in this tutorial.)
In SQLAlchemy and razi, assigning a `Column` object as the initial value of a class attribute defines it as a column of the `compounds` table.
Furthermore, passing `Integer`, `String`, or `Mol` to the `Column` constructor
defines columns that store integers, strings, and chemical structures respectively.
In addition, passing `primary_key=True` to `id` defines it as the primary key.
With the definitions so far, you can picture the `compounds` table roughly as follows.
|id| name | structure |
|:---|----|---:|
|1|Benzene|c1ccccc1|
|2|Aspirin|CC(=O)Oc1ccccc1C(=O)O|
|3|Oseltamivir|CCC(CC)OC1C=C(CC(C1NC(=O)C)N)C(=O)OCC|
#### 1.1.3 Index
Next, create an index on the `structure` column. With an index, searches stay fast even as the number of registered records grows.
```
__table_args__ = (
Index('compounds_structure', 'structure',
postgresql_using='gist'),
)
```
The arguments to the `Index` class are, in order: the index name (any name is fine as long as it is unique within the database), the column to index,
and the type of index to use. In the [RDKit database cartridge](http://www.rdkit.org/docs/Cartridge.html), the only index type supported for the `mol` type that stores chemical structures is `gist`.
#### 1.1.4 Constructor
SQLAlchemy and razi place no particular constraints on the constructor, but in the end each column needs to be given an appropriate value.
For convenience, the `structure` argument here accepts either a `str` or an RDKit `rdkit.Chem.rdchem.Mol` object.
```
def __init__(self, name, structure):
self.name = name
if isinstance(structure, Chem.Mol):
self.structure = Chem.MolToSmiles(structure)
elif isinstance(structure, str):
self.structure = structure
self.atompair = atompairbv_fp(self.structure)
self.torsion = torsionbv_fp(self.structure)
self.morgan = morganbv_fp(self.structure, 2)
```
#### 1.1.5 The __repr__ method
Finally, define the [\__repr__](https://docs.python.jp/3/reference/datamodel.html#object.__repr__) method. This only controls how the object is displayed by `print` and the like, and is not specific to the compound database discussed here.
It merely makes the data easier for a human to read, so it may be omitted.
Since the `structure` attribute may hold either a `str` object or an `rdkit.Chem.rdchem.Mol` object, it handles both.
```
def __repr__(self):
if isinstance(self.structure, Chem.Mol):
return '(%s) < %s >' % (self.name, Chem.MolToSmiles(self.structure))
return '(%s) < %s >' % (self.name, self.structure)
```
That completes the table definition.
#### 1.2 Creating the table
Next, to actually create the defined table, run the `Base.metadata.create_all` method.
```
if __name__ == '__main__':
Base.metadata.create_all()
```
For readers familiar with SQL: this creates a table inside PostgreSQL defined as follows.
```
postgres=# \d compounds;
Table "public.compounds"
Column | Type | Modifiers
-----------+-------------------+--------------------------------------------------------
id | integer | not null default nextval('compounds_id_seq'::regclass)
name | character varying |
structure | mol |
atompair | bfp |
torsion | bfp |
morgan | bfp |
Indexes:
"compounds_pkey" PRIMARY KEY, btree (id)
"compounds_structure" gist (structure)
```
This concludes the definition and creation of the `compounds` table.
### 2. Connecting to the database
Now let's connect to the database, access the `compounds` table created in [1. Defining and creating the table](#1.-%E3%83%86%E3%83%BC%E3%83%96%E3%83%AB%E3%81%AE%E5%AE%9A%E7%BE%A9%E3%81%A8%E6%A7%8B%E7%AF%89), and register some data.
Connect to PostgreSQL with the `create_engine` function, then create the `Base` class that the table-defining classes inherit from.
Finally, create an instance of the `Session` class produced by `sessionmaker`; through this session object you can query the database and register data.
These steps must always be performed first, every time you connect to the database.
```
engine = create_engine('postgresql://postgres@db:5432/postgres')
Base = declarative_base(bind=engine)
Session = sessionmaker(bind=engine)
session = Session()
```
### 3. Registering data
#### 3.1 Downloading the data to register
For example, download and decompress `chembl_23_chemreps.txt.gz` from CHEMBL with `wget` as shown below,
and place it in the `work/tutorial` directory that contains this notebook.
```
wget ftp://ftp.ebi.ac.uk/pub/databases/chembl/ChEMBLdb/releases/chembl_23/chembl_23_chemreps.txt.gz
gunzip chembl_23_chemreps.txt.gz
```
#### 3.2 Writing a function to extract the data
The CHEMBL file `chembl_23_chemreps.txt` is stored in the following format.
```
!head -n3 chembl_23_chemreps.txt
```
The function `read_chembldb` below reads this file and yields each valid SMILES together with its `CHEMBL ID` and a newly assigned `ID`.
```
Record = namedtuple('Record', 'chembl_id, smiles, inchi, inchi_key')
def read_chembldb(filepath, limit):
with open(filepath, 'rt') as inputfile:
reader = csv.reader(inputfile, delimiter='\t', skipinitialspace=True)
#skip the header row
next(reader)
for count, record in enumerate(map(Record._make, reader), start=1):
smiles = record.smiles
#convert certain triple-bond notations so RDKit can read them
smiles = smiles.replace('=N#N','=[N+]=[N-]')
smiles = smiles.replace('N#N=','[N-]=[N+]=')
#skip invalid SMILES
if not Chem.MolFromSmiles(smiles):
continue
yield count, record.chembl_id, smiles
if count == limit:
break
```
Let's check that the function behaves as expected. `read_chembldb` extracts only as many rows as the `limit` argument specifies.
Here we try it with `limit=3`.
```
for count, chembl_id, smiles in read_chembldb('chembl_23_chemreps.txt', 3):
print(count, chembl_id, smiles)
```
The function appears to work as expected.
#### 3.3 Registering the data
Now let's actually register data. Create instances of the `Compound` class we defined (ordinary objects in the object-oriented sense) and
register them with the session object's `add` method. We run `read_chembldb` with `limit=25000`.
Depending on your machine, registration may take some time.
```
for count, chembl_id, smiles in read_chembldb('chembl_23_chemreps.txt', 25000):
compound = Compound(chembl_id, smiles)
session.add(compound)
session.commit()
```
On my machine, registration finished in about a minute. As an alternative, you can also pass an RDKit Mol object as the argument.
```
smiles = 'c1ccccc1Cl'
mol =Chem.MolFromSmiles(smiles)
compound = Compound('111111', mol)
session.add(compound)
session.commit()
```
Whichever way you register, you must finish by calling the `session.commit` method to persist the changes to the database.
### 4. Working with table records in SQLAlchemy/razi
Let's now learn how to work with the data stored in the table.
#### 4.1 Searching
##### 4.1.1 The query method
A query (search expression) that bundles the search conditions for a table is passed to the table with the `query` method.
For example, the query that retrieves all the data in the `compounds` table (usually called records in database terminology, so we will call them records from now on) looks like this:
```
compounds = session.query(Compound)
```
The important point here is that the table has not actually been accessed yet and no records have been fetched.
That is why the cell above runs almost instantly even if an enormous number of records are registered.
Records are only fetched from the database table when they are truly needed, for example when displaying them with the `print` function.
This is called lazy evaluation.
The reason for this behavior, explained in more detail later, is that the `sqlalchemy.orm.query.Query` object returned by `query` can be narrowed down further by chaining `filter` calls,
and fetching records only once all conditions have been combined reduces the number of records retrieved and so puts less load on the database (the small aside below shows one way to observe this).
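A small aside, not part of the original tutorial: converting a `Query` object to a string merely compiles it to SQL without executing anything, which makes the laziness visible.
```
# Building the query does not touch the database; str() just renders the SQL it would run.
print(str(session.query(Compound).filter(Compound.id <= 5)))
```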
Let's display the first five of the fetched records.
```
for compound in session.query(Compound)[:5]:
print(compound)
```
Iterating over the `sqlalchemy.orm.query.Query` object returned by `query` with a for loop yields objects of the `Compound` class defined in [mytables.py](http://0.0.0.0:8888/edit/tutorial/mytables.py). In this way the `Compound` class not only defines the database table
but also serves as the Python object used to handle records retrieved from that table.
##### 4.1.2 The count method
To check how many records a `query` returns, use the `count` method.
```
compounds = session.query(Compound)
compounds.count()
```
This shows that 25001 records are registered in the `compounds` table.
##### 4.1.3 Filtering on a specific column
You can filter on a specific column by passing a condition to the `filter` method.
For example, to retrieve the records whose `id` column is 5 or less:
```
compounds = session.query(Compound)
compounds = compounds.filter(Compound.id <= 5)
for compound in compounds:
print(f'id: {compound.id}, {compound}')
```
##### 4.1.4 The all method
Use the `all` method to convert all records selected with `query` and `filter` into a list.
Note that this forces the lazy evaluation and converts every record, so it can take a while when there are many records.
The example below computes the molecular weight of each Compound and retrieves all those of 100 or less with `all`.
```
compounds = session.query(Compound)
compounds = compounds.filter(mol_amw(Compound.structure) <= 100)
compounds.count()
compounds = compounds.all()
compounds[:5]
```
Use the `all` method when you want a list so you can process everything at once.
##### 4.1.5 The first method
While `all` turns every retrieved record into a list, `first` returns only the first retrieved record.
If you want the first two or more records, use the `limit` method described in the next section.
```
compounds = session.query(Compound)
compounds = compounds.filter(mol_amw(Compound.structure) <= 100)
compounds.first()
```
##### 4.1.6 The limit method
Use the `limit` method when you want the first two or more records.
```
compounds = session.query(Compound).limit(5)
for compound in compounds:
print(compound)
```
##### 4.1.7 The order_by method
Use the `order_by` method to sort by a specific column; add the `desc` function for descending order.
The example below displays records sorted by id in ascending and then descending order.
```
compounds = session.query(Compound).order_by(Compound.id)
for compound in compounds[:5]:
print(f'{compound.id}: {compound.name}')
compounds = session.query(Compound).order_by(desc(Compound.id))
for compound in compounds[:5]:
print(f'{compound.id}: {compound.name}')
```
##### 4.1.8 Functions from the functions module
Calling functions from the `razi.rdkit_postgresql.functions` module on a column gives access to the functions available in the RDKit database cartridge.
For example, `mol_amw`, already used in 4.1.4 above, is one of them; it computes the molecular weight.
Adding the `label` method lets the result be accessed as an attribute of the returned object.
```
compounds = session.query(Compound, mol_amw(Compound.structure).label('MW'))
for compound in compounds[:5]:
print(f'{compound.MW}')
```
##### 4.1.9 Retrieving only specific columns
To retrieve only specific columns, pass the desired columns as the arguments of the initial `query` call.
For example, to compute molecular weights and retrieve only the `name` of compounds with weight 100 or less:
```
compounds = session.query(Compound.name)
compounds = compounds.filter(mol_amw(Compound.structure) <= 100)
compounds.count()
compounds = compounds.all()
compounds[:5]
```
Of course, if the data volume is not large, you can also retrieve all columns and simply display only `name` at the end.
```
compounds = session.query(Compound)
compounds = compounds.filter(mol_amw(Compound.structure) <= 100)
for compound in compounds[:5]:
print(compound.name)
```
##### 4.1.10 The hassubstruct method
One feature the razi module adds to SQLAlchemy is the `hassubstruct` method, which performs substructure searches.
As an example, let's retrieve only the records of compounds with molecular weight 150 or less that contain a benzene ring.
```
compounds = session.query(Compound)
compounds = compounds.filter(mol_amw(Compound.structure) <= 150)
print(f'{compounds.count()}')
compounds = compounds.filter(Compound.structure.hassubstruct('c1ccccc1'))
print(f'{compounds.count()}')
```
97 compounds matched the search. Let's take six of them and display their images.
Besides the `all` method, slicing a `sqlalchemy.orm.query.Query` object also converts it to a list.
```
compounds = compounds[:6]
compounds
Draw.MolsToGridImage([compound.structure for compound in compounds], legends=[compound.name for compound in compounds],
subImgSize=(250, 250))
```
Indeed, all of them contain a benzene ring.
In the RDKit database cartridge, to search for several substructures at once, join the substructures with a period.
The query below searches for compounds with molecular weight 300 or less that contain at least one benzene ring and at least one indole ring.
```
compounds = session.query(Compound)
compounds = compounds.filter(mol_amw(Compound.structure) <= 300)
compounds = compounds.filter(Compound.structure.hassubstruct('c1ccccc1.c1ccc2[nH]ccc2c1'))
print(f'{compounds.count()}')
```
Let's draw some of the search hits.
```
Draw.MolsToGridImage([compound.structure for compound in compounds[:6]], legends=[compound.name for compound in compounds[:6]],
subImgSize=(250, 250))
```
Indeed, each of them contains at least one benzene ring and one indole ring.
##### 4.1.11 The rollback method
If a query fails, subsequent queries may keep returning errors.
In that case, roll back once with the `rollback` method (see the sketch below).
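A minimal sketch of that pattern (the deliberately failing filter is just a hypothetical way to trigger an error; only `session.rollback()` itself comes from the tutorial):
```
try:
    # a query with a bad condition aborts the current transaction
    session.query(Compound).filter(mol_amw(Compound.structure) <= 'abc').count()
except Exception:
    session.rollback()  # clear the aborted transaction
# further queries work again after the rollback
session.query(Compound).count()
```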
#### 4.2 Updating
To update a record's column, fetch the record you want to change and modify the attribute; the change is applied automatically.
As an example, let's fetch the first record of the `compounds` table and change the value of its `name` column.
```
compound = session.query(Compound).order_by('id').first()
print(compound)
compound.name = 'CHEMBL999999'
compound
```
At this point the change is already reflected in the database (within the current, not-yet-committed transaction). To confirm, let's fetch and display the first five records of the `compounds` table.
```
compounds = session.query(Compound).order_by('id').limit(5)
for c in compounds:
print(f'{c.id}: {c.name}')
```
Finally, use the `session.commit` method to persist the change. If you do not want to keep the change, use the `session.rollback` method instead.
```
session.commit()
```
#### 4.3 Deleting
##### 4.3.1 Deleting a single record
Use the `delete` method to delete records. As an example, let's delete the record with id=1.
```
compounds = session.query(Compound).filter(Compound.id == 1)
compounds.delete()
```
Checking again, we can see that the record with id=1 no longer exists.
```
compounds = session.query(Compound).filter(Compound.id == 1)
compounds = compounds.all()
compounds
```
Let's undo the deletion with the `session.rollback` method.
```
session.rollback()
compounds = session.query(Compound).filter(Compound.id == 1)
compounds = compounds.all()
compounds
```
The record is back.
##### 4.3.2 Deleting multiple records
When deleting multiple filtered records, you need to pass the argument `synchronize_session='fetch'`.
As an example, let's delete the records with molecular weight 2000 or more.
```
compounds = session.query(Compound)
compounds = compounds.filter(mol_amw(Compound.structure) >= 2000)
compounds.count()
compounds.delete(synchronize_session='fetch')
compounds = session.query(Compound)
compounds = compounds.filter(mol_amw(Compound.structure) >= 2000)
compounds.count()
```
They have all been deleted.
This time, let's persist the deletion with the `session.commit` method.
```
session.commit()
```
### 5. Closing the session
Finally, close the session. While a session remains open and connected, operations such as dropping tables can be blocked.
```
session.close()
```
### 6. Dropping the table
Lastly, let's drop the table created in this tutorial. To drop the table defined by the `Compound` class, use the `Compound.__table__.drop` method.
```
Compound.__table__.drop()
```
# Shashank V. Sonar
## Task 4: Exploratory Data Analysis - Terrorism
### ● Perform ‘Exploratory Data Analysis’ on dataset ‘Global Terrorism’
### ● As a security/defense analyst, try to find out the hot zones of terrorism.
### ● What security issues and insights can you derive through EDA?
```
#importing the necessary libraries from the packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
#Loading and reading the global terrorism dataset
data = pd.read_csv("C:/Users/91814/Desktop/GRIP/Task 4/globalterrorism.csv",encoding='latin1')
data.head() #displaying the first five rows of the global terrorism dataset
data.columns.values #displaying column names
data.rename(columns={'iyear':'Year','imonth':'Month','iday':"day",'gname':'Group','country_txt':'Country','region_txt':'Region','provstate':'State','city':'City','latitude':'latitude',
'longitude':'longitude','summary':'summary','attacktype1_txt':'Attacktype','targtype1_txt':'Targettype','weaptype1_txt':'Weapon','nkill':'kill',
                  'nwound':'Wound'},inplace=True) #renaming the dataset columns
data = data[['Year','Month','day','Country','State','Region','City','latitude','longitude',"Attacktype",'kill',
           'Wound','target1','summary','Group','Targettype','Weapon','motive']] #keep only the columns of interest
data.head() #displays the first five rows of the dataset
data.shape #displays the number of rows and columns
data.isnull().sum() #number of null values in each category
#filling NA/NaN values
data['Wound'] = data['Wound'].fillna(0)
data['kill'] = data['kill'].fillna(0)
#Casualities=kill+wound
data['Casualities'] = data['kill'] + data['Wound']
data.info() # details about the dataset
data.describe() #describing the details of the dataset
year = data['Year'].unique()
years_count = data['Year'].value_counts(dropna = False).sort_index()
plt.figure(figsize = (18,10))
sns.barplot(x = year,
y = years_count,
palette = "tab10")
plt.xticks(rotation = 50)
plt.xlabel('Attacking Year',fontsize=20)
plt.ylabel('Number of Attacks Each Year',fontsize=20)
plt.title('Attacks In Years',fontsize=30)
plt.show()
pd.crosstab(data.Year, data.Region).plot(kind='area',stacked=False,figsize=(20,10))
plt.title('Terrorist Activities By Region In Each Year',fontsize=25)
plt.ylabel('Number of Attacks',fontsize=20)
plt.xlabel("Year",fontsize=20)
plt.show()
attack = data.Country.value_counts()[:10]
attack
data.Group.value_counts()[1:10]
plt.subplots(figsize=(20,10))
sns.barplot(data['Country'].value_counts()[:10].index,data['Country'].value_counts()[:10].values,palette='YlOrBr_r')
plt.title('Top Countries Affected')
plt.xlabel('Countries')
plt.ylabel('Count')
plt.xticks(rotation = 50)
plt.show()
df = data[['Year','kill']].groupby(['Year']).sum()
fig, ax4 = plt.subplots(figsize=(20,10))
df.plot(kind='bar',alpha=0.7,ax=ax4)
plt.xticks(rotation = 50)
plt.title("People Died Due To Attack",fontsize=25)
plt.ylabel("Number of killed peope",fontsize=20)
plt.xlabel('Year',fontsize=20)
top_side = ax4.spines["top"]
top_side.set_visible(False)
right_side = ax4.spines["right"]
right_side.set_visible(False)
data['City'].value_counts().to_frame().sort_values('City',axis=0,ascending=False).head(10).plot(kind='bar',figsize=(20,10),color='blue')
plt.xticks(rotation = 50)
plt.xlabel("City",fontsize=15)
plt.ylabel("Number of attack",fontsize=15)
plt.title("Top 10 most effected city",fontsize=20)
plt.show()
data['Attacktype'].value_counts().plot(kind='bar',figsize=(20,10),color='magenta')
plt.xticks(rotation = 50)
plt.xlabel("Attacktype",fontsize=15)
plt.ylabel("Number of attack",fontsize=15)
plt.title("Name of attacktype",fontsize=20)
plt.show()
data[['Attacktype','kill']].groupby(["Attacktype"],axis=0).sum().plot(kind='bar',figsize=(20,10),color=['darkslateblue'])
plt.xticks(rotation=50)
plt.title("Number of killed ",fontsize=20)
plt.ylabel('Number of people',fontsize=15)
plt.xlabel('Attack type',fontsize=15)
plt.show()
data[['Attacktype','Wound']].groupby(["Attacktype"],axis=0).sum().plot(kind='bar',figsize=(20,10),color=['cyan'])
plt.xticks(rotation=50)
plt.title("Number of wounded ",fontsize=20)
plt.ylabel('Number of people',fontsize=15)
plt.xlabel('Attack type',fontsize=15)
plt.show()
plt.subplots(figsize=(20,10))
sns.countplot(data["Targettype"],order=data['Targettype'].value_counts().index,palette="gist_heat",edgecolor=sns.color_palette("mako"));
plt.xticks(rotation=90)
plt.xlabel("Attacktype",fontsize=15)
plt.ylabel("count",fontsize=15)
plt.title("Attack per year",fontsize=20)
plt.show()
data['Group'].value_counts().to_frame().drop('Unknown').head(10).plot(kind='bar',color='green',figsize=(20,10))
plt.title("Top 10 terrorist group attack",fontsize=20)
plt.xlabel("terrorist group name",fontsize=15)
plt.ylabel("Attack number",fontsize=15)
plt.show()
data[['Group','kill']].groupby(['Group'],axis=0).sum().drop('Unknown').sort_values('kill',ascending=False).head(10).plot(kind='bar',color='yellow',figsize=(20,10))
plt.title("Top 10 terrorist group attack",fontsize=20)
plt.xlabel("terrorist group name",fontsize=15)
plt.ylabel("No of killed people",fontsize=15)
plt.show()
df=data[['Group','Country','kill']]
df=df.groupby(['Group','Country'],axis=0).sum().sort_values('kill',ascending=False).drop('Unknown').reset_index().head(10)
df
kill = data.loc[:,'kill']
print('Number of people killed by terror attack:', int(sum(kill.dropna())))
typeKill = data.pivot_table(columns='Attacktype', values='kill', aggfunc='sum')
typeKill
countryKill = data.pivot_table(columns='Country', values='kill', aggfunc='sum')
countryKill
```
**Conclusion and Results :**
- Country with the most attacks: **Iraq**
- City with the most attacks: **Baghdad**
- Region with the most attacks: **Middle East & North Africa**
- Year with the most attacks: **2014**
- Month with the most attacks: **5 (May)**
- Group with the most attacks: **Taliban**
- Most Attack Types: **Bombing/Explosion**
We are going to run an electromagnetic simulation, with initial code and ideas borrowed from
* Understanding the Finite-Difference Time-Domain Method, John B. Schneider, http://www.eecs.wsu.edu/~schneidj/ufdtd, 2010.
We're going to run a simple, bare-bones 1D FDTD simulation with a hard source.
The impedance of free space (or vacuum) is approximately 377 ohms; the code below uses 377.0.
We are going to model 400 mm of space, and run the simulation for 400 time units.
We use a time step that matches the space step with a Courant number of 1, which means that $c \cdot dt / dx = 1$, where $c$ is the speed of light. Given that $c$ is 299,792,458 m/s and our space step ($dx$) is 1 mm, our time step ($dt$) is $dx / c$, or about $3.33 \times 10^{-12}$ s. (Don't worry about this if it doesn't make sense - read more in an FDTD book if you want to.)
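As a quick check of that arithmetic (just the numbers stated above, in code form):
```
c = 299792458.0   # speed of light in m/s
dx = 1e-3         # space step of 1 mm
dt = dx / c       # Courant number of 1
print(dt)         # roughly 3.3e-12 s
```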
Our source is the electric field at the left edge of the grid.
We use a Gaussian source that peaks at 30 time units and has a standard deviation of 7.
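In other words, while the source is on, the hard source applied at the left edge in the code below is

$E_z[0](t) = \exp\!\left( -\frac{(t - 30)^2}{2 \cdot 7^2} \right)$

and it is set to zero once $t$ reaches `sourceSigma` $= 2 \cdot 7^2 = 98$.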
Our output will be the electric field measured at 250 mm over the time of the simulation.
```
import math
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from IPython.display import HTML
imp0 = 377.0 # property of free space (vacuum)
SIZE = 400 # dimension of space to model
sensorLocation = 250 # location of output sensor
maxTime = 400 # simulation time
sourcePeakTime = 30 # peak of the Gaussian source
sourceSdv = 7 # standard deviation of the Gaussian source
sourceSigma = 2 * sourceSdv**2
```
We first initialize lists for the electric and magnetic field components.
We then start at time 0, and update the magnetic fields at time 0.5 based on the magnetic fields at time -0.5 and the electric fields at time 0. And then we update the electric fields at time 1 based on the electric field at time 0 and the magnetic field at time 0.5. At the left edge of the grid, we set the source electric field. And we write out the electric field at the sensor location.
We then repeat this for each time step, until the set number of time steps have been reached.
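Written out, the two update equations implemented in the loop below are (with $\eta_0$ the impedance of free space `imp0`, $q$ the time index, and $m$ the space index):

$H_y^{q+1/2}[m] = H_y^{q-1/2}[m] + \frac{E_z^{q}[m+1] - E_z^{q}[m]}{\eta_0}$

$E_z^{q+1}[m] = E_z^{q}[m] + \eta_0 \left(H_y^{q+1/2}[m] - H_y^{q+1/2}[m-1]\right)$

In the later cells that include a dielectric, the electric-field update is additionally divided by the relative permittivity `epsR[mm]`.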
```
ez = [0.0] * SIZE
hy = [0.0] * SIZE
# do time stepping
for qTime in range(maxTime):
# update magnetic field
for mm in range(SIZE-1):
hy[mm] = hy[mm] + (ez[mm + 1] - ez[mm]) / imp0
# update electric field
for mm in range(SIZE):
ez[mm] = ez[mm] + (hy[mm] - hy[mm - 1]) * imp0
# hardwire a source node */
if qTime < sourceSigma:
ez[0] = math.exp(-(qTime - sourcePeakTime)**2 / sourceSigma)
else:
ez[0] = 0.0
print(ez[sensorLocation])
#done with time stepping loop
```
Instead of printing the output electric field values at the sensor, let's save them to a list and then plot them
```
ez = [0.0] * SIZE
hy = [0.0] * SIZE
output = [0.0] * maxTime
epsR = [1.0] * SIZE
# do time stepping
for qTime in range(maxTime):
# update magnetic field
for mm in range(SIZE-1):
hy[mm] = hy[mm] + (ez[mm + 1] - ez[mm]) / imp0
# update electric field
for mm in range(SIZE):
ez[mm] = ez[mm] + (hy[mm] - hy[mm - 1]) * imp0 / epsR[mm]
# hardwire a source node */
if qTime < sourceSigma:
ez[0] = math.exp(-(qTime - sourcePeakTime)**2 / sourceSigma)
else:
ez[0] = 0.0
output[qTime] = ez[sensorLocation]
#done with time stepping loop
plt.plot(range(maxTime), output, color='blue', linewidth=1)
plt.xlabel('time')
plt.ylabel('Ez')
plt.show()
```
Now let's put in a slab of glass that's 50 mm thick, located between 150 and 200 mm in our grid. The relative permittivity of glass (Pyrex) is 4.7.
```
# set up glass slab
for mm in range(150,200):
epsR[mm] = 4.7
```
and rerun the code, storing the output in output2 this time
```
ez = [0.0] * SIZE
hy = [0.0] * SIZE
output2 = [0.0] * maxTime
# do time stepping
for qTime in range(maxTime):
# update magnetic field
for mm in range(SIZE-1):
hy[mm] = hy[mm] + (ez[mm + 1] - ez[mm]) / imp0
# update electric field
for mm in range(SIZE):
ez[mm] = ez[mm] + (hy[mm] - hy[mm - 1]) * imp0 / epsR[mm]
# hardwire a source node */
if qTime < sourceSigma:
ez[0] = math.exp(-(qTime - sourcePeakTime)**2 / sourceSigma)
else:
ez[0] = 0.0
output2[qTime] = ez[sensorLocation]
#done with time stepping loop
```
Now let's plot the two outputs, for free space in blue and with a glass slab in green
```
plt.plot(range(maxTime), output, color='blue', linewidth=1)
plt.plot(range(maxTime), output2, color='green', linewidth=1)
plt.xlabel('time')
plt.ylabel('Ez')
plt.show()
```
You can see that the second (green) curve has been reduced in height and delayed by passing through the glass slab. The reduction in height is a property of going from one material to another: some energy is reflected at the interface between the two materials. The delay is a property of the glass itself: waves travel more slowly in glass than in vacuum.
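As a rough, back-of-the-envelope cross-check (not part of the original notebook), the normal-incidence reflection at a single vacuum-to-glass interface of a non-magnetic dielectric can be estimated from the refractive index $n = \sqrt{\epsilon_r}$:
```
import math
eps_r = 4.7                 # relative permittivity of the glass slab
n = math.sqrt(eps_r)        # refractive index, about 2.17
r = (1 - n) / (1 + n)       # amplitude reflection coefficient, about -0.37
R = r**2                    # power reflectance per interface, about 0.14
print(n, r, R)
```
The slab has two interfaces (and multiple internal reflections), so this is only a ballpark comparison with the transmission and reflectance computed later in the notebook.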
Let's see what's happening by visualizing the Ez field over the simulation, first with no slab
```
ims = []
# set up a plot
fig, ax = plt.subplots()
ax.axvspan(250, 250, alpha=0.9, color='black') # sensor
ax.set_xlabel('space (mm)')
ax.set_ylabel('Ez')
ez = [0.0] * SIZE
hy = [0.0] * SIZE
# free space
epsR = [1.0] * SIZE
# do time stepping
for qTime in range(maxTime):
# update magnetic field
for mm in range(SIZE-1):
hy[mm] = hy[mm] + (ez[mm + 1] - ez[mm]) / imp0
# update electric field
for mm in range(SIZE):
ez[mm] = ez[mm] + (hy[mm] - hy[mm - 1]) * imp0 / epsR[mm]
# capture a snapshot of the ez field at this timestep to the animation
ims.append((plt.plot(range(SIZE), ez, color='blue', linewidth=1)))
# hardwire a source node */
if qTime < sourceSigma:
ez[0] = math.exp(-(qTime - sourcePeakTime)**2 / sourceSigma)
else:
ez[0] = 0.0
#done with time stepping loop
#build and display the animation
im_ani = animation.ArtistAnimation(fig, ims, interval=50, repeat_delay=5000, blit=True)
HTML(im_ani.to_jshtml())
```
and then with a glass slab
```
ims = []
# set up a plot
fig, ax = plt.subplots()
ax.axvspan(150, 200, alpha=0.1, color='red') # glass slab
ax.axvspan(250, 250, alpha=0.9, color='black') # sensor
ax.set_xlabel('Space (mm)')
ax.set_ylabel('Ez')
ez = [0.0] * SIZE
hy = [0.0] * SIZE
# free space
epsR = [1.0] * SIZE
# add a glass slab
for mm in range(150,200):
epsR[mm] = 4.7
# do time stepping
for qTime in range(maxTime):
# update magnetic field
for mm in range(SIZE-1):
hy[mm] = hy[mm] + (ez[mm + 1] - ez[mm]) / imp0
# update electric field
for mm in range(SIZE):
ez[mm] = ez[mm] + (hy[mm] - hy[mm - 1]) * imp0 / epsR[mm]
# capture a snapshot of the ez field at this timestep to the animation
ims.append((plt.plot(range(SIZE), ez, color='green', linewidth=1)))
# hardwire a source node */
if qTime < sourceSigma:
ez[0] = math.exp(-(qTime - sourcePeakTime)**2 / sourceSigma)
else:
ez[0] = 0.0
#done with time stepping loop
#build and display the animation
im_ani = animation.ArtistAnimation(fig, ims, interval=50, repeat_delay=5000, blit=True)
HTML(im_ani.to_jshtml())
```
Now let's look at this numerically, not graphically. Let's calculate the sum of the electric field that passed through free space and compare it to the sum of the electric field that passed through the glass slab. The ratio is the transmission; the part of the field that did not pass through the slab was reflected, which gives the reflectance.
```
base = 0
transmitted = 0
for qTime in range(maxTime):
base += output[qTime]
transmitted += output2[qTime]
transmission = transmitted/base
print('Transmission =',transmission)
print('Reflectance =', (1 - transmission))
```
What if we run the simulation for one more time step? Remember that the source is turned off after sourceSigma time steps, so this does not inject any more energy into the simulation.
```
newMaxTime = 501 # simulation time
ez = [0.0] * SIZE
hy = [0.0] * SIZE
output3 = [0.0] * newMaxTime
# free space
epsR = [1.0] * SIZE
# do time stepping
for qTime in range(newMaxTime):
# update magnetic field
for mm in range(SIZE-1):
hy[mm] = hy[mm] + (ez[mm + 1] - ez[mm]) / imp0
# update electric field
for mm in range(SIZE):
ez[mm] = ez[mm] + (hy[mm] - hy[mm - 1]) * imp0 / epsR[mm]
# hardwire a source node */
if qTime < sourceSigma:
ez[0] = math.exp(-(qTime - sourcePeakTime)**2 / sourceSigma)
else:
ez[0] = 0.0
output3[qTime] = ez[sensorLocation]
#done with time stepping loop
ez = [0.0] * SIZE
hy = [0.0] * SIZE
output4 = [0.0] * newMaxTime
# add a glass slab
for mm in range(150,200):
epsR[mm] = 4.7
# do time stepping
for qTime in range(newMaxTime):
# update magnetic field
for mm in range(SIZE-1):
hy[mm] = hy[mm] + (ez[mm + 1] - ez[mm]) / imp0
# update electric field
for mm in range(SIZE):
ez[mm] = ez[mm] + (hy[mm] - hy[mm - 1]) * imp0 / epsR[mm]
# hardwire a source node */
if qTime < sourceSigma:
ez[0] = math.exp(-(qTime - sourcePeakTime)**2 / sourceSigma)
else:
ez[0] = 0.0
output4[qTime] = ez[sensorLocation]
#done with time stepping loop
```
Let's calculate transmission and reflectance again - they should be the same.
```
base2 = 0
transmitted2 = 0
for qTime in range(newMaxTime):
base2 += output3[qTime]
transmitted2 += output4[qTime]
transmission2 = transmitted2/base2
print('Transmission2 =',transmission2)
print('Reflectance2 =', (1 - transmission2))
```
Are they the same?
```
print('Transmission =',transmission)
print('Reflectance =', (1 - transmission))
```
Not quite. How different are they?
```
difference = (transmission2 - transmission) / transmission
print(difference)
```
Just exploring Fiona and rasterio: opening TIFF files and such.
```
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
```
pic1 @ /home/thistle/Desktop/psuedo_mask.png
root_folder @ /media/thistle/Passport/gis/nz
shape1 @ /media/thistle/Passport/gis/nz/lds-new-zealand-3layers-SHP/nz-elevation-survey-index/nz-elevation-survey-index.shp
```
%system ls -al ..
%set_env GLASS='glasgow'
```
```
!curl -O -J https://github.com/sarasafavi/remote-sensing-with-python/blob/master/notebooks/aoi.geojson
!ls -l
import rasterio
filepath = "/home/thistle/Desktop/Idaho_Springs/nz_example/remoteNZ/hillshade.tif"
r = rasterio.open(filepath)
r.count
r.bounds
r.read_crs()
dir(r)
image_file = "20190321_174348_0f1a_3B_AnalyticMS.tif"
sat_1 = rasterio.open(image_file)
print(sat_1.count)
print(sat_1.indexes)
[print(each, end=", ") for each in dir(sat_1)];
%lsmagic
%pdef sat_1.read
b,g,r,nir = sat_1.read()
type(b), b.dtype, b.shape
print(sat_1.bounds)
width_in_projected_units = sat_1.bounds.right - sat_1.bounds.left
width_in_projected_units
height_in_projected_units = sat_1.bounds.top -sat_1.bounds.bottom
xres = width_in_projected_units/b.shape[1]
yres = height_in_projected_units/b.shape[0]
# so are pixels square
print(xres==yres)
print(f"xres is {xres}")
print(yres)
#how big is file
import os
os.path.getsize(image_file)
file4="/home/thistle/ArrowRiver_hillshade.tif"
img4= rasterio.open(file4)
show(img4)
from rasterio.plot import show
show(sat_1)
image_file = "20190321_174348_0f1a_3B_AnalyticMS.tif"
# Use Rasterio to open the image.
satdat = rasterio.open(image_file)
from rasterio.plot import show
show(satdat)
[print((band.min(), band.max())) for band in [b,g,r,nir]]
show(nir)
satdat.profile
import fiona
# use fiona to open our AOI GeoJSON
with fiona.open('aoi.geojson') as f:
aoi = [feature["geometry"] for feature in f]
type(aoi.geojson)
file1='/media/thistle/Passport/gis/nz/lds-new-zealand-3layers-SHP/nz-elevation-survey-index/nz-elevation-survey-index.shp'
with fiona.open(file1) as file:
meta = file.meta
meta
meta['crs'].items()
file2 = "/media/thistle/Passport/gis/nz/lidar/site_ArrowRiver/LowerRiver/DSM_CC11_2016_1000_0245.tif"
num2 = rasterio.open(file2)
file3 = "/media/thistle/Passport/gis/nz/lidar/site_ArrowRiver/LowerRiver/DSM_CC11_2016_1000_0350.tif"
num3 = rasterio.open(file3)
from rasterio.plot import show
%pdef show
show(num2)
show(num3)
import json
dir(json)
!ls
with open("aoi.geojson","r") as f:
d = json.load(f)
print(d)
%pdoc json.dump
#Try to get a geometry for all geos in an index file
tiles_file = "/media/thistle/Passport/gis/nz/lds-new-zealand-3layers-SHP/otago-queenstown-lidar-index-tiles-2016/otago-queenstown-lidar-index-tiles-2016.shp"
with fiona.open(tiles_file, "r") as tiles:
aoi = [feature['geometry'] for feature in tiles]
zz = tiles
zz.meta
print((type(aoi), len(aoi)))
aoi[1000]
from rasterio.mask import mask
%pdoc mask
img_file = "/home/thistle/ArrowRiver_hillshade.tif"
with rasterio.open(img_file) as hs:
show (hs)
img_file = "/home/thistle/ArrowRiver_hillshade.tif"
with rasterio.open(img_file) as hs:
z = hs
z.meta['crs']
aoi[1]
with rasterio.open(img_file) as hs:
clipped, tranform = mask(hs, aoi[1:10], crop=True)
lll = list((1,3,33,5))
lll[::2]
sat_1.count
satdat.count
satdat.meta
blue = satdat.read(1)
green = satdat.read(2)
red = satdat.read(3)
infrared = satdat.read(4)
blue.shape == red.shape == green.shape
def scale(arr):
return arr/1000.
plt.imshow(blue)
#plt.show()
show(blue)
red.max(), blue.max(), infrared.max()
fig = plt.imshow(green)
fig.set_cmap("gist_earth")
plt.colorbar()
fig = plt.imshow(red)
fig.set_cmap('inferno')
plt.colorbar()
plt.axis('off')
rgb = np.dstack((red,green,blue))
plt.imshow(rgb)
rgb = np.dstack((scale(red),scale(green),scale(blue)))
plt.imshow(rgb)
# scale values for display purposes
def scale(band):
return band / 10000.0
# Load the bands into numpy arrays
# recall that we previously learned PlanetScope band order is BGRN
blue = scale(satdat.read(1))
green = scale(satdat.read(2))
red = scale(satdat.read(3))
nir = scale(satdat.read(4))
nrg = np.dstack((nir,red,green))
rgb = np.dstack((red, green, blue))
bgr = np.dstack((blue, green, red))
plt.imshow(rgb)
plt.imshow(nrg)
!pdal info /home/thistle/grassdata/las_files/interesting.laz
```
<a href="https://colab.research.google.com/github/jyjoon001/EEE4171/blob/main/aicomm_project.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import torch
import torchvision
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models, transforms
import matplotlib.pyplot as plt
# device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
num_class = 10
# hyper - parameters
num_epochs = 5
learning_rate = 0.0025
batch_size = 128
max_pool_kernel = 2
train_data = torchvision.datasets.FashionMNIST("./data", download=True, transform=
transforms.Compose([transforms.ToTensor()]))
test_data = torchvision.datasets.FashionMNIST("./data", download=True, train=False, transform=
transforms.Compose([transforms.ToTensor()]))
!nvidia-smi
# Define DataLoader
train_loader = torch.utils.data.DataLoader(train_data,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(test_data,
batch_size=batch_size,
shuffle=False)
def image_show(dataset, num):
fig = plt.figure(figsize=(30,30))
for i in range(num):
plt.subplot(1, num, i+1)
plt.imshow(dataset[i][0].squeeze())
plt.title(dataset[i][1])
class ConvNet(nn.Module):
def __init__(self, num_class):
super(ConvNet, self).__init__()
self.layer1a = nn.Sequential(
nn.Conv2d(1, 10, 3, stride=1, padding=1),
nn.BatchNorm2d(10),
nn.ReLU()
)
self.layer1b = nn.Sequential(
nn.Conv2d(10, 16, 3, stride=1, padding=1),
nn.BatchNorm2d(16),
nn.ReLU()
)
self.layer2a = nn.Sequential(
nn.LeakyReLU(),
nn.Conv2d(16, 16, 3, stride=1, padding=1),
nn.BatchNorm2d(16)
)
self.layer2b = nn.Sequential(
nn.LeakyReLU(),
nn.Conv2d(16, 16, 3, stride=1, padding=1),
nn.BatchNorm2d(16)
)
self.layer2c = nn.Sequential(
nn.Conv2d(16, 32, 5, stride=1, padding=2),
nn.BatchNorm2d(32),
nn.ReLU()
)
self.layer3a = nn.Sequential(
nn.Conv2d(32, 64, 7, stride=1, padding=3),
nn.BatchNorm2d(64),
nn.ReLU()
)
self.layer3b = nn.Sequential(
nn.Conv2d(64, 128, 7, stride=1, padding=3),
nn.BatchNorm2d(128),
nn.ReLU(),
nn.AdaptiveAvgPool2d(6),
nn.Flatten()
)
self.fc0 = nn.Sequential(
nn.Linear(6*6*128, 360),
nn.BatchNorm1d(360),
nn.ReLU()
)
self.fc1 = nn.Sequential(
nn.Linear(360, 120),
nn.BatchNorm1d(120),
nn.ReLU()
)
self.fc2 = nn.Sequential(
nn.Linear(120, 84),
nn.BatchNorm1d(84),
nn.ReLU()
)
self.fc3 = nn.Sequential(
nn.Linear(84, num_class),
nn.BatchNorm1d(num_class)
)
self.maxpool = nn.MaxPool2d(max_pool_kernel)
def forward(self, x):
x = self.layer1a(x)
x = self.layer1b(x)
res = x
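# keep the layer1b output as the residual (skip) connection; it is added back just before max-pooling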
x = self.layer2a(x)
x = self.layer2b(x)
x = self.maxpool(x+res)
x = self.layer2c(x)
x = self.layer3a(x)
x = self.layer3b(x)
x = x.reshape(x.size(0),-1)
x = self.fc0(x)
x = self.fc1(x)
x = self.fc2(x)
    x = F.log_softmax(self.fc3(x), dim=1)
return x
model = ConvNet(num_class).to(device)
model
# Set Loss and Optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr = learning_rate)
import time
image, label = next(iter(test_loader))
print(image.size()) # [Batch, Channel, Height, Width]
start = time.time()
best_epoch = 0
best_loss = float('inf')
total_step = len(train_loader)
loss_list = []
for epoch in range(num_epochs):
for i, (image, label) in enumerate(train_loader):
image = image.to(device)
label = label.to(device)
# Forward
output = model(image)
loss = criterion(output, label)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
loss_list.append(loss.item())
if (i+1) % 1000 == 0:
print("Epoch [{}/{}], Step[{}/{}], Loss:{:.4f}".format(epoch+1, num_epochs, i+1, total_step, loss.item()))
end = time.time()
print("Train takes {:.2f}minutes".format((end-start)/60))
torch.save(model.state_dict(),'20161482_model2.pth')
plt.plot(loss_list)
plt.title("EMNIST with CNN")
plt.show()
model_saved = ConvNet(num_class).to(device)
model_saved.load_state_dict(torch.load("20161482_model2.pth"))
model_saved.eval()
with torch.no_grad():
correct = 0
for img, lab in test_loader:
img = img.to(device)
lab = lab.to(device)
out = model_saved(img)
_, pred = torch.max(out.data, 1)
correct += (pred == lab).sum().item()
print("Accuracy of the network on the {} test images: {}%".format(len(test_loader)*batch_size, 100 * correct / (len(test_loader) * batch_size)))
```
|
github_jupyter
|
import torch
import torchvision
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models, transforms
import matplotlib.pyplot as plt
# device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
num_class = 10
# hyper - parameters
num_epochs = 5
learning_rate = 0.0025
batch_size = 128
max_pool_kernel = 2
train_data = torchvision.datasets.FashionMNIST("./data", download=True, transform=
transforms.Compose([transforms.ToTensor()]))
test_data = torchvision.datasets.FashionMNIST("./data", download=True, train=False, transform=
transforms.Compose([transforms.ToTensor()]))
!nvidia-smi
# Define DataLoader
train_loader = torch.utils.data.DataLoader(train_data,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(test_data,
batch_size=batch_size,
shuffle=False)
def image_show(dataset, num):
fig = plt.figure(figsize=(30,30))
for i in range(num):
plt.subplot(1, num, i+1)
plt.imshow(dataset[i][0].squeeze())
plt.title(dataset[i][1])
class ConvNet(nn.Module):
def __init__(self, num_class):
super(ConvNet, self).__init__()
self.layer1a = nn.Sequential(
nn.Conv2d(1, 10, 3, stride=1, padding=1),
nn.BatchNorm2d(10),
nn.ReLU()
)
self.layer1b = nn.Sequential(
nn.Conv2d(10, 16, 3, stride=1, padding=1),
nn.BatchNorm2d(16),
nn.ReLU()
)
self.layer2a = nn.Sequential(
nn.LeakyReLU(),
nn.Conv2d(16, 16, 3, stride=1, padding=1),
nn.BatchNorm2d(16)
)
self.layer2b = nn.Sequential(
nn.LeakyReLU(),
nn.Conv2d(16, 16, 3, stride=1, padding=1),
nn.BatchNorm2d(16)
)
self.layer2c = nn.Sequential(
nn.Conv2d(16, 32, 5, stride=1, padding=2),
nn.BatchNorm2d(32),
nn.ReLU()
)
self.layer3a = nn.Sequential(
nn.Conv2d(32, 64, 7, stride=1, padding=3),
nn.BatchNorm2d(64),
nn.ReLU()
)
self.layer3b = nn.Sequential(
nn.Conv2d(64, 128, 7, stride=1, padding=3),
nn.BatchNorm2d(128),
nn.ReLU(),
nn.AdaptiveAvgPool2d(6),
nn.Flatten()
)
self.fc0 = nn.Sequential(
nn.Linear(6*6*128, 360),
nn.BatchNorm1d(360),
nn.ReLU()
)
self.fc1 = nn.Sequential(
nn.Linear(360, 120),
nn.BatchNorm1d(120),
nn.ReLU()
)
self.fc2 = nn.Sequential(
nn.Linear(120, 84),
nn.BatchNorm1d(84),
nn.ReLU()
)
self.fc3 = nn.Sequential(
nn.Linear(84, num_class),
nn.BatchNorm1d(num_class)
)
self.maxpool = nn.MaxPool2d(max_pool_kernel)
def forward(self, x):
x = self.layer1a(x)
x = self.layer1b(x)
        res = x  # keep the layer1b activation for a skip (residual) connection
        x = self.layer2a(x)
        x = self.layer2b(x)
        x = self.maxpool(x+res)  # add the skip connection back in, then downsample
x = self.layer2c(x)
x = self.layer3a(x)
x = self.layer3b(x)
x = x.reshape(x.size(0),-1)
x = self.fc0(x)
x = self.fc1(x)
x = self.fc2(x)
        x = F.log_softmax(self.fc3(x), dim=1)  # dim must be given; note nn.CrossEntropyLoss already applies log-softmax, so returning raw logits would also work
return x
model = ConvNet(num_class).to(device)
model
# Set Loss and Optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr = learning_rate)
import time
image, label = next(iter(test_loader))
print(image.size()) # [Batch, Channel, Height, Width]
start = time.time()
best_epoch = 0
best_loss = float('inf')
total_step = len(train_loader)
loss_list = []
for epoch in range(num_epochs):
for i, (image, label) in enumerate(train_loader):
image = image.to(device)
label = label.to(device)
# Forward
output = model(image)
loss = criterion(output, label)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
loss_list.append(loss.item())
        if (i+1) % 100 == 0:  # ~469 steps per epoch at batch_size 128, so a 1000-step interval would never print
print("Epoch [{}/{}], Step[{}/{}], Loss:{:.4f}".format(epoch+1, num_epochs, i+1, total_step, loss.item()))
end = time.time()
print("Train takes {:.2f}minutes".format((end-start)/60))
torch.save(model.state_dict(),'20161482_model2.pth')
plt.plot(loss_list)
plt.title("FashionMNIST with CNN")
plt.show()
model_saved = ConvNet(num_class).to(device)
model_saved.load_state_dict(torch.load("20161482_model2.pth"))
model_saved.eval()
with torch.no_grad():
    correct = 0
    total = 0
    for img, lab in test_loader:
        img = img.to(device)
        lab = lab.to(device)
        out = model_saved(img)
        _, pred = torch.max(out.data, 1)
        total += lab.size(0)  # count actual samples; the last batch may be smaller than batch_size
        correct += (pred == lab).sum().item()
print("Accuracy of the network on the {} test images: {}%".format(total, 100 * correct / total))
| 0.922631 | 0.929887 |
# Building the dataset for SGCN
```
import numpy as np
import pandas as pd
import pickle
import networkx as nx
import matplotlib
%matplotlib inline
def get_dist(df,col):
    # per-node rating distribution: count edges for each (node, rating) pair, then normalize each row to sum to 1
    df_cnt = df.groupby([col]+['rating'])['time'].count().unstack(1,fill_value=0)
df_dist = pd.DataFrame(df_cnt.values / df_cnt.sum(1).values.reshape(-1,1),
columns=df_cnt.columns,
index=df_cnt.index)
return df_dist
```
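To make the node features below concrete, `get_dist` returns one row per node whose entries are the fraction of that node's edges carrying each rating value. A minimal sketch on a made-up toy frame (illustrative data only; the column names follow the raw files used below):
```
# Toy illustration (hypothetical data): two users rating on a 1-5 scale.
toy = pd.DataFrame({'user_id': ['u1', 'u1', 'u1', 'u2'],
                    'rating':  [5, 5, 1, 3],
                    'time':    [0, 1, 2, 3]})
get_dist(toy, 'user_id')
# u1 -> 1/3 of its edges at rating 1 and 2/3 at rating 5; u2 -> all of its edges at rating 3
```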
## Amazon → SGCN
### user-product network
#### network
```
amazon_dir = 'amazon_music'
amazon_network = pd.read_csv('raw_data/{0}/{0}_network.csv'.format(amazon_dir),header=None)
amazon_network.columns = ['user_id','product_id','rating','time']
amazon_network['weight'] = amazon_network.rating.map(lambda x:(x-3)/2).round()
amazon_gt = pd.read_csv('raw_data/{0}/{0}_gt.csv'.format(amazon_dir),header=None)
amazon_gt.columns = ['user_id','label']
truncated_amazon_network = amazon_network.loc[amazon_network.weight!=0,['user_id','product_id','weight']]
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
label_encoder.fit(np.hstack((truncated_amazon_network.user_id,
truncated_amazon_network.product_id,
amazon_gt.user_id)))
truncated_amazon_network['id1'] = label_encoder.transform(truncated_amazon_network.user_id)
truncated_amazon_network['id2'] = label_encoder.transform(truncated_amazon_network.product_id)
amazon_gt['node_id'] = label_encoder.transform(amazon_gt.user_id)
```
#### node features
```
user_dist = get_dist(amazon_network,'user_id')
product_dist = get_dist(amazon_network,'product_id')
# user_product_dist = user_dist.append(product_dist)
user_product_dist = pd.concat([user_dist,product_dist],1).fillna(0)
node_features_df = user_product_dist.loc[label_encoder.classes_]
```
#### File output
```
truncated_amazon_network[['id1','id2','weight']].to_csv('input/{0}/{0}_network.csv'.format(amazon_dir),index=None)
amazon_gt[['node_id','label']].to_csv('input/{0}/{0}_gt.csv'.format(amazon_dir),index=None)
np.save(arr=label_encoder.classes_,file='input/{0}/{0}_label_encoder.npy'.format(amazon_dir))
node_features_df.to_csv('input/{0}/{0}_node_feature.csv'.format(amazon_dir),index=None)
```
## epinions
### network
```
epinions_network = pd.read_csv('raw_data/epinions/epinions_network.csv',header=None)
epinions_network.columns = ['id1','id2','rating','time']
epinions_network['weight'] = epinions_network.rating.map(lambda x:-1 if x-3.5 < 0 else 1)
epinions_gt = pd.read_csv('raw_data/epinions/epinions_gt.csv',header=None)
epinions_gt.columns = ['user_id','label']
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
label_encoder.fit(np.hstack((epinions_network.id1,
epinions_network.id2,
epinions_gt.user_id)))
epinions_network['id1_'] = label_encoder.transform(epinions_network.id1)
epinions_network['id2_'] = label_encoder.transform(epinions_network.id2)
epinions_gt['node_id'] = label_encoder.transform(epinions_gt.user_id)
```
### node features
```
node_features_df = pd.concat([get_dist(epinions_network,'id1_'),get_dist(epinions_network,'id2_')],1).fillna(0).sort_index()
```
### File output
```
epinions_network[['id1_','id2_','weight']].to_csv('input/epinions/epinions_network.csv',index=None)
epinions_gt[['node_id','label']].to_csv('input/epinions/epinions_gt.csv',index=None)
np.save(arr=label_encoder.classes_,file='input/epinions/epinions_label_encoder.npy')
node_features_df.to_csv('input/epinions/epinions_node_feature.csv',index=None)
```
## epinions_sub
```
sampled_nodes = np.random.choice()  # incomplete stub: np.random.choice needs the pool of node ids and a sample size, which are not specified here
```
## alpha
```
alpha_network = pd.read_csv('raw_data/alpha/alpha_network.csv',header=None)
alpha_network.columns = ['id1','id2','rating','time']
alpha_network['weight'] = alpha_network.rating.map(lambda x:1 if x>0 else -1)
alpha_gt = pd.read_csv('raw_data/alpha/alpha_gt.csv',header=None)
alpha_gt.columns = ['user_id','label']
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
label_encoder.fit(np.hstack((alpha_network.id1,
alpha_network.id2,
alpha_gt.user_id)))
alpha_network['id1_'] = label_encoder.transform(alpha_network.id1)
alpha_network['id2_'] = label_encoder.transform(alpha_network.id2)
alpha_gt['node_id'] = label_encoder.transform(alpha_gt.user_id)
node_features_df = pd.concat([get_dist(alpha_network,'id1_'),get_dist(alpha_network,'id2_')],1).fillna(0).sort_index()
alpha_network[['id1_','id2_','weight']].to_csv('input/alpha/alpha_network.csv',index=None)
alpha_gt[['node_id','label']].to_csv('input/alpha/alpha_gt.csv',index=None)
np.save(arr=label_encoder.classes_,file='input/alpha/alpha_label_encoder.npy')
node_features_df.to_csv('input/alpha/alpha_node_feature.csv',index=None)
```
## otc
```
otc_network = pd.read_csv('raw_data/otc/otc_network.csv',header=None)
otc_network.columns = ['id1','id2','rating','time']
otc_network['weight'] = otc_network.rating.map(lambda x:1 if x>0 else -1)
otc_gt = pd.read_csv('raw_data/otc/otc_gt.csv',header=None)
otc_gt.columns = ['user_id','label']
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
label_encoder.fit(np.hstack((otc_network.id1,
otc_network.id2,
otc_gt.user_id)))
otc_network['id1_'] = label_encoder.transform(otc_network.id1)
otc_network['id2_'] = label_encoder.transform(otc_network.id2)
otc_gt['node_id'] = label_encoder.transform(otc_gt.user_id)
node_features_df = pd.concat([get_dist(otc_network,'id1_'),get_dist(otc_network,'id2_')],1).fillna(0).sort_index()
otc_network[['id1_','id2_','weight']].to_csv('input/otc/otc_network.csv',index=None)
otc_gt[['node_id','label']].to_csv('input/otc/otc_gt.csv',index=None)
np.save(arr=label_encoder.classes_,file='input/otc/otc_label_encoder.npy')
node_features_df.to_csv('input/otc/otc_node_feature.csv',index=None)
```
## Amazon Extra
```python
file_path = 'http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/reviews_Electronics_5.json.gz'
review_df_raw = pd.read_json(file_path,lines=True)
assert ~review_df_raw.duplicated(['asin','reviewerID']).any()
def preprocessing(df):
df_ = df.copy()
# converting 'reviewTime' column to datetime
df_['reviewTime'] = pd.to_datetime(df_.reviewTime,format='%m %d, %Y')
# vote_sum and helpful rate, helpful_bin
df_['vote_sum'] = df_['helpful'].map(lambda x:x[1])
df_['helpful_rate'] = df_['helpful'].map(lambda x:x[0]/x[1] if x[1]>0 else float('nan'))
df_['helpful_bin'] = pd.cut(df_.helpful_rate,bins=np.arange(0,1.1,0.1),include_lowest=True,labels=range(10))
    # drop inconsistent rows (helpful_rate greater than 1.0)
df_ = df_.loc[~(df_.helpful_rate>1.0)]
return df_
review_df = preprocessing(review_df_raw)
def generate_network_csv(df, from_date):
review_df_from_ = df.loc[df.reviewTime>=from_date]
return review_df_from_[['reviewerID','asin','overall','reviewTime']]
def generate_gt(df,from_date):
reviewer_all_votes = \
df.loc[df.reviewTime>=from_date].groupby('reviewerID',as_index=False)['helpful'].agg(lambda x:list(np.vstack(x).sum(0)))
reviewer_all_votes['vote_sum'] = reviewer_all_votes.helpful.map(lambda x:x[1])
reviewer_all_votes['rate'] = reviewer_all_votes.helpful.map(lambda x:x[0]/x[1])
selected_df = reviewer_all_votes.loc[(reviewer_all_votes.vote_sum>=50) &
((reviewer_all_votes.rate<=0.25) |
(reviewer_all_votes.rate>=0.75))]
selected_df['label'] = selected_df['rate'].map(lambda x: -1 if x <= 0.25 else 1)
return selected_df[['reviewerID','label']].set_index('reviewerID')
amazon_network = generate_network_csv(review_df,pd.Timestamp(2013,1,1))
amazon_gt = generate_gt(review_df,pd.Timestamp(2013,1,1))
```
# Appendix
## Building the Amazon user network
```
amazon_network
# note: this appendix assumes the amazon_music network and ground truth loaded at the top of the notebook
# (columns user_id / product_id / weight), not the reviews_Electronics frames built in the previous section
self_joined = pd.merge(amazon_network,amazon_network,on='product_id',how='right')
self_joined = self_joined.loc[~(self_joined.user_id_x==self_joined.user_id_y)]
self_joined['sign'] = self_joined.weight_x*self_joined.weight_y
user_network = self_joined.loc[self_joined.sign!=0,['user_id_x','user_id_y','sign']]
user_network = user_network.groupby(['user_id_x','user_id_y'],as_index=False)['sign'].mean().round()
user_network = user_network.loc[user_network.sign!=0]
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
label_encoder.fit(np.hstack((user_network.user_id_x,
user_network.user_id_y,
amazon_gt.user_id)))
user_network['id1'] = label_encoder.transform(user_network.user_id_x)
user_network['id2'] = label_encoder.transform(user_network.user_id_y)
for_nx_network = user_network.copy()[['id1','id2','sign']]
for_nx_network.columns = ['source','target','weight']
G = nx.from_pandas_edgelist(for_nx_network,edge_attr=True)
amazon_user_network = nx.to_pandas_edgelist(G)
amazon_gt['node_id'] = label_encoder.transform(amazon_gt.user_id)
amazon_user_network[['source','target','weight']].to_csv('input/amazon/user_network.csv',index=None)
amazon_gt[['node_id','label']].to_csv('input/amazon/user_gt.csv',index=None)
np.save(arr=label_encoder.classes_,file='input/amazon/user_label_encoder.npy')
```
|
github_jupyter
|
import numpy as np
import pandas as pd
import pickle
import networkx as nx
import matplotlib
%matplotlib inline
def get_dist(df,col):
    # per-node rating distribution: count edges for each (node, rating) pair, then normalize each row to sum to 1
    df_cnt = df.groupby([col]+['rating'])['time'].count().unstack(1,fill_value=0)
df_dist = pd.DataFrame(df_cnt.values / df_cnt.sum(1).values.reshape(-1,1),
columns=df_cnt.columns,
index=df_cnt.index)
return df_dist
amazon_dir = 'amazon_music'
amazon_network = pd.read_csv('raw_data/{0}/{0}_network.csv'.format(amazon_dir),header=None)
amazon_network.columns = ['user_id','product_id','rating','time']
amazon_network['weight'] = amazon_network.rating.map(lambda x:(x-3)/2).round()
amazon_gt = pd.read_csv('raw_data/{0}/{0}_gt.csv'.format(amazon_dir),header=None)
amazon_gt.columns = ['user_id','label']
truncated_amazon_network = amazon_network.loc[amazon_network.weight!=0,['user_id','product_id','weight']]
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
label_encoder.fit(np.hstack((truncated_amazon_network.user_id,
truncated_amazon_network.product_id,
amazon_gt.user_id)))
truncated_amazon_network['id1'] = label_encoder.transform(truncated_amazon_network.user_id)
truncated_amazon_network['id2'] = label_encoder.transform(truncated_amazon_network.product_id)
amazon_gt['node_id'] = label_encoder.transform(amazon_gt.user_id)
user_dist = get_dist(amazon_network,'user_id')
product_dist = get_dist(amazon_network,'product_id')
# user_product_dist = user_dist.append(product_dist)
user_product_dist = pd.concat([user_dist,product_dist],1).fillna(0)
node_features_df = user_product_dist.loc[label_encoder.classes_]
truncated_amazon_network[['id1','id2','weight']].to_csv('input/{0}/{0}_network.csv'.format(amazon_dir),index=None)
amazon_gt[['node_id','label']].to_csv('input/{0}/{0}_gt.csv'.format(amazon_dir),index=None)
np.save(arr=label_encoder.classes_,file='input/{0}/{0}_label_encoder.npy'.format(amazon_dir))
node_features_df.to_csv('input/{0}/{0}_node_feature.csv'.format(amazon_dir),index=None)
epinions_network = pd.read_csv('raw_data/epinions/epinions_network.csv',header=None)
epinions_network.columns = ['id1','id2','rating','time']
epinions_network['weight'] = epinions_network.rating.map(lambda x:-1 if x-3.5 < 0 else 1)
epinions_gt = pd.read_csv('raw_data/epinions/epinions_gt.csv',header=None)
epinions_gt.columns = ['user_id','label']
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
label_encoder.fit(np.hstack((epinions_network.id1,
epinions_network.id2,
epinions_gt.user_id)))
epinions_network['id1_'] = label_encoder.transform(epinions_network.id1)
epinions_network['id2_'] = label_encoder.transform(epinions_network.id2)
epinions_gt['node_id'] = label_encoder.transform(epinions_gt.user_id)
node_features_df = pd.concat([get_dist(epinions_network,'id1_'),get_dist(epinions_network,'id2_')],1).fillna(0).sort_index()
epinions_network[['id1_','id2_','weight']].to_csv('input/epinions/epinions_network.csv',index=None)
epinions_gt[['node_id','label']].to_csv('input/epinions/epinions_gt.csv',index=None)
np.save(arr=label_encoder.classes_,file='input/epinions/epinions_label_encoder.npy')
node_features_df.to_csv('input/epinions/epinions_node_feature.csv',index=None)
sampled_nodes = np.random.choice()  # incomplete stub: np.random.choice needs the pool of node ids and a sample size, which are not specified here
alpha_network = pd.read_csv('raw_data/alpha/alpha_network.csv',header=None)
alpha_network.columns = ['id1','id2','rating','time']
alpha_network['weight'] = alpha_network.rating.map(lambda x:1 if x>0 else -1)
alpha_gt = pd.read_csv('raw_data/alpha/alpha_gt.csv',header=None)
alpha_gt.columns = ['user_id','label']
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
label_encoder.fit(np.hstack((alpha_network.id1,
alpha_network.id2,
alpha_gt.user_id)))
alpha_network['id1_'] = label_encoder.transform(alpha_network.id1)
alpha_network['id2_'] = label_encoder.transform(alpha_network.id2)
alpha_gt['node_id'] = label_encoder.transform(alpha_gt.user_id)
node_features_df = pd.concat([get_dist(alpha_network,'id1_'),get_dist(alpha_network,'id2_')],1).fillna(0).sort_index()
alpha_network[['id1_','id2_','weight']].to_csv('input/alpha/alpha_network.csv',index=None)
alpha_gt[['node_id','label']].to_csv('input/alpha/alpha_gt.csv',index=None)
np.save(arr=label_encoder.classes_,file='input/alpha/alpha_label_encoder.npy')
node_features_df.to_csv('input/alpha/alpha_node_feature.csv',index=None)
otc_network = pd.read_csv('raw_data/otc/otc_network.csv',header=None)
otc_network.columns = ['id1','id2','rating','time']
otc_network['weight'] = otc_network.rating.map(lambda x:1 if x>0 else -1)
otc_gt = pd.read_csv('raw_data/otc/otc_gt.csv',header=None)
otc_gt.columns = ['user_id','label']
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
label_encoder.fit(np.hstack((otc_network.id1,
otc_network.id2,
otc_gt.user_id)))
otc_network['id1_'] = label_encoder.transform(otc_network.id1)
otc_network['id2_'] = label_encoder.transform(otc_network.id2)
otc_gt['node_id'] = label_encoder.transform(otc_gt.user_id)
node_features_df = pd.concat([get_dist(otc_network,'id1_'),get_dist(otc_network,'id2_')],1).fillna(0).sort_index()
otc_network[['id1_','id2_','weight']].to_csv('input/otc/otc_network.csv',index=None)
otc_gt[['node_id','label']].to_csv('input/otc/otc_gt.csv',index=None)
np.save(arr=label_encoder.classes_,file='input/otc/otc_label_encoder.npy')
node_features_df.to_csv('input/otc/otc_node_feature.csv',index=None)
file_path = 'http://snap.stanford.edu/data/amazon/productGraph/categoryFiles/reviews_Electronics_5.json.gz'
review_df_raw = pd.read_json(file_path,lines=True)
assert ~review_df_raw.duplicated(['asin','reviewerID']).any()
def preprocessing(df):
df_ = df.copy()
# converting 'reviewTime' column to datetime
df_['reviewTime'] = pd.to_datetime(df_.reviewTime,format='%m %d, %Y')
# vote_sum and helpful rate, helpful_bin
df_['vote_sum'] = df_['helpful'].map(lambda x:x[1])
df_['helpful_rate'] = df_['helpful'].map(lambda x:x[0]/x[1] if x[1]>0 else float('nan'))
df_['helpful_bin'] = pd.cut(df_.helpful_rate,bins=np.arange(0,1.1,0.1),include_lowest=True,labels=range(10))
    # drop inconsistent rows (helpful_rate greater than 1.0)
df_ = df_.loc[~(df_.helpful_rate>1.0)]
return df_
review_df = preprocessing(review_df_raw)
def generate_network_csv(df, from_date):
review_df_from_ = df.loc[df.reviewTime>=from_date]
return review_df_from_[['reviewerID','asin','overall','reviewTime']]
def generate_gt(df,from_date):
reviewer_all_votes = \
df.loc[df.reviewTime>=from_date].groupby('reviewerID',as_index=False)['helpful'].agg(lambda x:list(np.vstack(x).sum(0)))
reviewer_all_votes['vote_sum'] = reviewer_all_votes.helpful.map(lambda x:x[1])
reviewer_all_votes['rate'] = reviewer_all_votes.helpful.map(lambda x:x[0]/x[1])
selected_df = reviewer_all_votes.loc[(reviewer_all_votes.vote_sum>=50) &
((reviewer_all_votes.rate<=0.25) |
(reviewer_all_votes.rate>=0.75))]
selected_df['label'] = selected_df['rate'].map(lambda x: -1 if x <= 0.25 else 1)
return selected_df[['reviewerID','label']].set_index('reviewerID')
amazon_network = generate_network_csv(review_df,pd.Timestamp(2013,1,1))
amazon_gt = generate_gt(review_df,pd.Timestamp(2013,1,1))
amazon_network
# note: this appendix assumes the amazon_music network and ground truth loaded at the top of the notebook
# (columns user_id / product_id / weight), not the reviews_Electronics frames built in the previous section
self_joined = pd.merge(amazon_network,amazon_network,on='product_id',how='right')
self_joined = self_joined.loc[~(self_joined.user_id_x==self_joined.user_id_y)]
self_joined['sign'] = self_joined.weight_x*self_joined.weight_y
user_network = self_joined.loc[self_joined.sign!=0,['user_id_x','user_id_y','sign']]
user_network = user_network.groupby(['user_id_x','user_id_y'],as_index=False)['sign'].mean().round()
user_network = user_network.loc[user_network.sign!=0]
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
label_encoder.fit(np.hstack((user_network.user_id_x,
user_network.user_id_y,
amazon_gt.user_id)))
user_network['id1'] = label_encoder.transform(user_network.user_id_x)
user_network['id2'] = label_encoder.transform(user_network.user_id_y)
for_nx_network = user_network.copy()[['id1','id2','sign']]
for_nx_network.columns = ['source','target','weight']
G = nx.from_pandas_edgelist(for_nx_network,edge_attr=True)
amazon_user_network = nx.to_pandas_edgelist(G)
amazon_gt['node_id'] = label_encoder.transform(amazon_gt.user_id)
amazon_user_network[['source','target','weight']].to_csv('input/amazon/user_network.csv',index=None)
amazon_gt[['node_id','label']].to_csv('input/amazon/user_gt.csv',index=None)
np.save(arr=label_encoder.classes_,file='input/amazon/user_label_encoder.npy')
| 0.246443 | 0.641577 |
# 2D Isostatic gravity inversion - Figures
This [IPython Notebook](http://ipython.org/videos.html#the-ipython-notebook) uses the open-source [Fatiando a Terra](http://fatiando.org/) library.
```
%matplotlib inline
import numpy as np
from scipy.misc import derivative
import scipy as spy
from scipy import interpolate
import matplotlib
#matplotlib.use('TkAgg', force=True)
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
import math
import cPickle as pickle
import datetime
import string as st
from scipy.misc import imread
from __future__ import division
from fatiando import gravmag, mesher, utils, gridder
from fatiando.mesher import Prism, Polygon
from fatiando.gravmag import prism
from fatiando.utils import ang2vec, si2nt, contaminate
from fatiando.gridder import regular, profile
from fatiando.vis import mpl
from numpy.testing import assert_almost_equal
from numpy.testing import assert_array_almost_equal
from pytest import raises
plt.rc('font', size=16)
import functions as fc
```
## Observation coordinates.
```
# Model's limits
ymin = 0.0
ymax = 383000.0
zmin = -1000.0
zmax = 45000.0
xmin = -100000.0
xmax = 100000.0
area = [ymin, ymax, zmax, zmin]
ny = 150 # number of observation points and number of prisms along the profile
# coordinates defining the horizontal boundaries of the
# adjacent columns along the profile
y = np.linspace(ymin, ymax, ny)
# coordinates of the center of the columns forming the
# interpretation model
n = ny - 1
dy = (ymax - ymin)/n
ycmin = ymin + 0.5*dy
ycmax = ymax - 0.5*dy
yc = np.reshape(np.linspace(ycmin, ycmax, n),(n,1))
x = np.zeros_like(yc)
z = np.zeros_like(yc)-150.0
## Edge extension (observation coordinates)
sigma = 2.0
edge = sigma*dy*n
```
## Model parameters
```
# Model densities
# Indices and polygons relationship:
# cc = continental crust layer
# oc = ocean crust layer
# w = water layer
# s = sediment layer
# m = mantle layer
dw = np.array([1030.0])
#ds0 = np.array([2350.0])
#ds1 = np.array([2855.0])
dcc = np.array([2870.0])
doc = np.array([2885.0])
dm = np.array([3240.0])
ds0 = np.array([2425.0])#T07
ds1 = np.array([2835.0])#T07
#dc = dcc
# coordinate defining the horizontal boundaries of the continent-ocean boundary
COT = 350000.0
# array defining the crust density along the profile (continental vs oceanic crust)
dc = np.zeros_like(yc)
aux = yc <= COT
for i in range(len(yc[aux])):
dc[i] = dcc
for i in range(len(yc[aux]),n):
dc[i] = doc
# defining sediments layers density matrix
ds = np.vstack((np.reshape(np.repeat(ds0,n),(1,n)),np.reshape(np.repeat(ds1,n),(1,n))))
# S0 => isostatic compensation surface (Airy's model)
# SR = S0+dS0 => reference Moho (Forward modeling)
S0 = np.array([41000.0])
```
## Observed data
```
gobs = np.reshape(np.loadtxt('../data/pelotas-profile-gz.txt'),(n,1))
```
## Water bottom
```
bathymetry = np.reshape(np.loadtxt('../data/etopo1-pelotas.txt'),(n,1))
tw = 0.0 - bathymetry
```
## Interpreted surfaces
```
toi = np.reshape(np.loadtxt('../data/pelotas-profile-interpreted-toi-surface.txt'),(n,1))
interpreted_basement = np.reshape(np.loadtxt('../data/pelotas-profile-interpreted-basement-surface.txt'),(n,1))
interpreted_moho = np.reshape(np.loadtxt('../data/pelotas-profile-interpreted-moho-surface.txt'),(n,1))
# reference moho surface (SR = S0+dS0)
dS0 = np.array([2200.0])
# 1st layer sediments thickness
ts0 = toi - tw
# 2nd layer sediments thickness
ts1 = interpreted_basement - toi
# thickness sediments vector
ts = np.vstack((np.reshape(ts0,(1,n)),np.reshape(ts1,(1,n))))
# layer mantle thickness
tm = S0 - interpreted_moho
# pelotas profile parameters vector
p_interp = np.vstack((ts1, tm, dS0))
```
## Initial guess surfaces
```
# initial guess basement surface
ini_basement = np.reshape(np.loadtxt('../data/pelotas-profile-initial-basement-surface.txt'),(n,1))
# initial guess moho surface
ini_moho = np.reshape(np.loadtxt('../data/pelotas-profile-initial-moho-surface.txt'),(n,1))
# initial guess reference moho surface (SR = S0+dS0)
ini_dS0 = np.array([1000.0])
ini_RM = S0 + ini_dS0
```
## Known depths
```
# Known values: basement and moho surfaces
base_known = np.loadtxt('../data/pelotas-profile-basement-known-depths.txt')
#base_known = np.loadtxt('../data/pelotas-profile-basement-more-known-depths.txt')
#base_known_new = np.loadtxt('../data/pelotas-profile-basement-new-known-depths.txt')
#base_known = np.loadtxt('../data/pelotas-profile-basement-few-more-known-depths.txt')
#base_known_new = np.loadtxt('../data/pelotas-profile-basement-few-new-known-depths.txt')
#base_known_old = np.loadtxt('../data/pelotas-profile-basement-known-depths.txt')
moho_known = np.loadtxt('../data/pelotas-profile-moho-known-depths.txt')
```
## Initial guess data
```
#g0 = np.reshape(np.loadtxt('../data/pelotas-profile-initial-guess-gravity-data.txt'),(n,1))
g0 = np.reshape(np.loadtxt('../data/pelotas-profile-initial-guess-gravity-data-T07.txt'),(n,1))
```
## Inversion model
```
p = np.reshape(np.loadtxt('../data/pelotas-profile-parameter-vector-alphas_-10(2)_-8(1)_-7(2)_-7(1)_-6(2)-T07.txt'),(2*n+1,1))
g = np.reshape(np.loadtxt('../data/pelotas-profile-predicted-gravity-data-alphas_-10(2)_-8(1)_-7(2)_-7(1)_-6(2)-T07.txt'),(n,1))
```
```
g0 = g.copy()
ini_basement = tw + ts0 + p[0:n]
ini_moho = S0 - p[n:n+n]
p = np.reshape(np.loadtxt('../data/pelotas-profile-parameter-vector-alphas_-10(2)_-8(1)_-7(2)_-7(1)_-6(2)-sgm_17-T07.txt'),(2*n+1,1))
g = np.reshape(np.loadtxt('../data/pelotas-profile-predicted-gravity-data-alphas_-10(2)_-8(1)_-7(2)_-7(1)_-6(2)-sgm_17-T07.txt'),(n,1))
# Inversion results
RM = S0 + p[n+n]
basement = tw + ts0 + p[0:n]
moho = S0 - p[n:n+n]
```
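Restating the reconstruction above with the symbols already defined (this is just the code in equation form, not an extra modelling assumption): the parameter vector stacks the second sediment-layer thicknesses, the mantle thicknesses, and the reference-Moho offset,

$$\mathbf{p} = \begin{bmatrix} \mathbf{t}_{s1} \\ \mathbf{t}_{m} \\ \Delta S_0 \end{bmatrix}, \qquad z_{\mathrm{basement}} = t_w + t_{s0} + t_{s1}, \qquad z_{\mathrm{Moho}} = S_0 - t_m, \qquad S_R = S_0 + \Delta S_0 .$$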
## Lithostatic Stress
```
sgm_interp = 9.81*(10**(-6))*(dw*tw + ds0*ts0 + ds1*ts1 + dc*(S0-tw-ts0-ts1-tm)+dm*tm)
sgm = 9.81*(10**(-6))*(dw*tw + ds0*ts0 + ds1*p[0:n] + dc*(S0-tw-ts0-p[0:n]-p[n:n+n])+dm*p[n:n+n])
```
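For reference, the expressions coded above evaluate the column-wise lithostatic stress (a restatement of the code, with $g = 9.81\ \mathrm{m/s^2}$, thicknesses in metres, densities in $\mathrm{kg/m^3}$, and the $10^{-6}$ factor converting Pa to MPa):

$$\sigma(y_c) = g \times 10^{-6}\Big[\rho_w t_w + \rho_{s0} t_{s0} + \rho_{s1} t_{s1} + \rho_c\,\big(S_0 - t_w - t_{s0} - t_{s1} - t_m\big) + \rho_m t_m\Big].$$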
## Inversion model plot
```
polygons_water = []
for (yi, twi) in zip(yc, tw):
y1 = yi - 0.5*dy
y2 = yi + 0.5*dy
polygons_water.append(Polygon(np.array([[y1, y2, y2, y1],
[0.0, 0.0, twi, twi]]).T,
props={'density': dw - dcc}))
polygons_sediments0 = []
for (yi, twi, s0i) in zip(yc, np.reshape(tw,(n,)), np.reshape(toi,(n,))):
y1 = yi - 0.5*dy
y2 = yi + 0.5*dy
polygons_sediments0.append(Polygon(np.array([[y1, y2, y2, y1],
[twi, twi, s0i, s0i]]).T,
props={'density': ds0 - dcc}))
polygons_sediments1 = []
for (yi, s0i, s1i) in zip(yc, np.reshape(toi,(n,)), np.reshape(basement,(n,))):
y1 = yi - 0.5*dy
y2 = yi + 0.5*dy
polygons_sediments1.append(Polygon(np.array([[y1, y2, y2, y1],
[s0i, s0i, s1i, s1i]]).T,
props={'density': ds1 - dcc}))
polygons_crust = []
for (yi, si, Si, dci) in zip(yc, np.reshape(basement,(n,)), np.reshape(moho,(n,)), dc):
y1 = yi - 0.5*dy
y2 = yi + 0.5*dy
polygons_crust.append(Polygon(np.array([[y1, y2, y2, y1],
[si, si, Si, Si]]).T,
props={'density': dci - dcc}))
polygons_mantle = []
for (yi, Si) in zip(yc, np.reshape(moho,(n,))):
y1 = yi - 0.5*dy
y2 = yi + 0.5*dy
polygons_mantle.append(Polygon(np.array([[y1, y2, y2, y1],
[Si, Si, (S0+p[n+n]), (S0+p[n+n])]]).T,
props={'density': dm - dcc}))
```
```
%matplotlib inline
plt.close('all')
fig = plt.figure(figsize=(12,16))
import matplotlib.gridspec as gridspec
heights = [8, 8, 8, 1]
gs = gridspec.GridSpec(4, 1, height_ratios=heights)
ax1 = plt.subplot(gs[0])
ax2 = plt.subplot(gs[1])
ax3 = plt.subplot(gs[2])
ax4 = plt.subplot(gs[3])
ax1.axhline(y=0.0, xmin=ymin, xmax=ymax, color='k', linestyle='--', linewidth=1)
ax1.plot(0.001*yc, gobs, 'or', mfc='none', markersize=8, label='observed data')
ax1.plot(0.001*yc, g0, '-b', linewidth=2, label='initial guess data')
ax1.plot(0.001*yc, g, '-g', linewidth=2, label='predicted data')
ax1.set_xlim(0.001*ymin, 0.001*ymax)
ax1.set_ylabel('gravity disturbance (mGal)', fontsize=18)
ax1.set_xticklabels(['%g'% (l) for l in ax1.get_xticks()], fontsize=16)
ax1.set_yticklabels(['%g'% (l) for l in ax1.get_yticks()], fontsize=16)
ax1.legend(loc='best', fontsize=16, facecolor='silver')
#ax2.plot(0.001*yc, sgm_interp, 'or', mfc='none', markersize=8, label='preliminary interpretation lithostatic stress')
ax2.plot(0.001*yc, sgm, '-g', linewidth=2, label='predicted lithostatic stress')
ax2.set_xlim(0.001*ymin, 0.001*ymax)
ax2.set_ylim(1120.,1210.)
ax2.set_ylabel('lithostatic stress (MPa)', fontsize=18)
ax2.set_xticklabels(['%g'% (l) for l in ax2.get_xticks()], fontsize=16)
ax2.set_yticklabels(['%g'% (l) for l in ax2.get_yticks()], fontsize=16)
ax2.legend(loc='best', fontsize=16, facecolor='silver')
ax3.axhline(y=0.0, xmin=ymin, xmax=ymax, color='k', linestyle='-', linewidth=1)
aux = yc <= COT
for (pwi) in (polygons_water):
tmpx = [x for x in pwi.x]
tmpx.append(pwi.x[0])
tmpy = [y for y in pwi.y]
tmpy.append(pwi.y[0])
ax3.plot(tmpx, tmpy, linestyle='None')
ax3.fill(tmpx, tmpy, color='lightskyblue')
for (ps0i) in (polygons_sediments0):
tmpx = [x for x in ps0i.x]
tmpx.append(ps0i.x[0])
tmpy = [y for y in ps0i.y]
tmpy.append(ps0i.y[0])
ax3.plot(tmpx, tmpy, linestyle='None')
ax3.fill(tmpx, tmpy, color='tan')
for (ps1i) in (polygons_sediments1):
tmpx = [x for x in ps1i.x]
tmpx.append(ps1i.x[0])
tmpy = [y for y in ps1i.y]
tmpy.append(ps1i.y[0])
ax3.plot(tmpx, tmpy, linestyle='None')
ax3.fill(tmpx, tmpy, color='rosybrown')
for (pci) in (polygons_crust[:len(yc[aux])]):
tmpx = [x for x in pci.x]
tmpx.append(pci.x[0])
tmpy = [y for y in pci.y]
tmpy.append(pci.y[0])
ax3.plot(tmpx, tmpy, linestyle='None')
ax3.fill(tmpx, tmpy, color='orange')
for (pcoi) in (polygons_crust[len(yc[aux]):n]):
tmpx = [x for x in pcoi.x]
tmpx.append(pcoi.x[0])
tmpy = [y for y in pcoi.y]
tmpy.append(pcoi.y[0])
ax3.plot(tmpx, tmpy, linestyle='None')
ax3.fill(tmpx, tmpy, color='olive')
for (pmi) in (polygons_mantle):
tmpx = [x for x in pmi.x]
tmpx.append(pmi.x[0])
tmpy = [y for y in pmi.y]
tmpy.append(pmi.y[0])
ax3.plot(tmpx, tmpy, linestyle='None')
ax3.fill(tmpx, tmpy, color='pink')
ax3.plot(yc, tw, '-k', linewidth=3)
ax3.plot(yc, toi, '-k', linewidth=3)
ax3.plot(yc, interpreted_basement, '-k', linewidth=3, label='previous interpretation surfaces')
ax3.plot(yc, interpreted_moho, '-k', linewidth=3)
ax3.plot(yc, ini_basement, '-.b', linewidth=3, label='initial guess surfaces')
ax3.plot(yc, ini_moho, '-.b', linewidth=3)
ax3.plot(yc, basement, '--w', linewidth=3, label='estimated surfaces')
ax3.plot(yc, moho, '--w', linewidth=3)
ax3.axhline(y=S0+dS0, xmin=ymin, xmax=ymax, color='k', linestyle='-', linewidth=3)
ax3.axhline(y=S0+ini_dS0, xmin=ymin, xmax=ymax, color='b', linestyle='-.', linewidth=3)
ax3.axhline(y=S0+p[n+n], xmin=ymin, xmax=ymax, color='w', linestyle='--', linewidth=3)
ax3.plot(base_known[:,0], base_known[:,1], 'v', color = 'yellow', markersize=15, label='known depths (basement)')
#ax3.plot(base_known_old[:,0], base_known_old[:,1], 'v', color = 'yellow', markersize=15, label='known depths (basement)')
#ax3.plot(base_known_new[:,0], base_known_new[:,1], 'v', color = 'magenta', markersize=15, label='more known depths (basement)')
ax3.plot(moho_known[:,0], moho_known[:,1], 'D', color = 'lime', markersize=15, label='known depths (moho)')
#ax3.set_ylim((S0+p[n+n]), zmin)
ax3.set_ylim((50500.0), zmin)
ax3.set_xlim(ymin, ymax)
ax3.set_xlabel('y (km)', fontsize=18)
ax3.set_ylabel('z (km)', fontsize=18)
ax3.set_xticklabels(['%g'% (0.001*l) for l in ax3.get_xticks()], fontsize=16)
ax3.set_yticklabels(['%g'% (0.001*l) for l in ax3.get_yticks()], fontsize=16)
ax3.legend(loc='lower right', fontsize=16, facecolor='silver')
X, Y = fig.get_dpi()*fig.get_size_inches()
plt.title('Density contrast (kg/m$^{3}$)', fontsize=20)
ax4.axis('off')
layers_list1 = ['water', 'sediment', 'SDR', 'continental', 'oceanic', 'mantle']
layers_list2 = ['', '', '', 'crust', 'crust', '']
colors_list = ['lightskyblue', 'tan', 'rosybrown', 'orange', 'olive', 'pink']
#density_list = ['-1840', '-520', '-15', '0', '15', '370']#original
#density_list = ['-1840', '-270', '-170', '0', '15', '370']#T08
density_list = ['-1840', '-445', '-35', '0', '15', '370']#T07
#density_list = ['-1840', '-445', '-120', '0', '15', '370']#T06
#density_list = ['-1840', '-420', '-170', '0', '15', '370']#T05
ncols = len(colors_list)
nrows = 1
h = Y / nrows
w = X / (ncols + 1)
i=ncols-1
for color, density, layers1, layers2 in zip(colors_list, density_list, layers_list1, layers_list2):
col = i // nrows
row = i % nrows
x = X - (col*w) - w
yi_line = Y
yf_line = Y - Y*0.15
yi_text1 = Y - Y*0.2
yi_text2 = Y - Y*0.28
yi_text3 = Y - Y*0.08
i-=1
poly = Polygon(np.array([[x, x+w*0.75, x+w*0.75, x], [yi_line, yi_line, yf_line, yf_line]]).T)
tmpx = [x for x in poly.x]
tmpx.append(poly.x[0])
tmpy = [y for y in poly.y]
tmpy.append(poly.y[0])
ax4.plot(tmpx, tmpy, linestyle='-', color='k', linewidth=1)
ax4.fill(tmpx, tmpy, color=color)
ax4.text(x+w*0.375, yi_text1, layers1, fontsize=(w*0.16), horizontalalignment='center', verticalalignment='top')
ax4.text(x+w*0.375, yi_text2, layers2, fontsize=(w*0.16), horizontalalignment='center', verticalalignment='top')
ax4.text(x+w*0.375, yi_text3, density, color = 'k', fontsize=(w*0.16), horizontalalignment='center', verticalalignment='center')
plt.tight_layout()
#plt.savefig('../manuscript/figures/pelotas-profile-grafics-estimated-model-alphas_2_1_2_1_2-sgm_19-dpi300.png', dpi=300, bbox_inches='tight')
plt.savefig('../manuscript/figures/pelotas-profile-grafics-estimated-model-alphas_2_1_2_1_2-sgm_17-T07-dpi300.png', dpi=300, bbox_inches='tight')
plt.show()
```
|
github_jupyter
|
%matplotlib inline
import numpy as np
from scipy.misc import derivative
import scipy as spy
from scipy import interpolate
import matplotlib
#matplotlib.use('TkAgg', force=True)
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
import math
import cPickle as pickle
import datetime
import string as st
from scipy.misc import imread
from __future__ import division
from fatiando import gravmag, mesher, utils, gridder
from fatiando.mesher import Prism, Polygon
from fatiando.gravmag import prism
from fatiando.utils import ang2vec, si2nt, contaminate
from fatiando.gridder import regular, profile
from fatiando.vis import mpl
from numpy.testing import assert_almost_equal
from numpy.testing import assert_array_almost_equal
from pytest import raises
plt.rc('font', size=16)
import functions as fc
# Model's limits
ymin = 0.0
ymax = 383000.0
zmin = -1000.0
zmax = 45000.0
xmin = -100000.0
xmax = 100000.0
area = [ymin, ymax, zmax, zmin]
ny = 150 # number of observation points and number of prisms along the profile
# coordinates defining the horizontal boundaries of the
# adjacent columns along the profile
y = np.linspace(ymin, ymax, ny)
# coordinates of the center of the columns forming the
# interpretation model
n = ny - 1
dy = (ymax - ymin)/n
ycmin = ymin + 0.5*dy
ycmax = ymax - 0.5*dy
yc = np.reshape(np.linspace(ycmin, ycmax, n),(n,1))
x = np.zeros_like(yc)
z = np.zeros_like(yc)-150.0
## Edge extension (observation coordinates)
sigma = 2.0
edge = sigma*dy*n
# Model densities
# Indices and polygons relationship:
# cc = continental crust layer
# oc = ocean crust layer
# w = water layer
# s = sediment layer
# m = mantle layer
dw = np.array([1030.0])
#ds0 = np.array([2350.0])
#ds1 = np.array([2855.0])
dcc = np.array([2870.0])
doc = np.array([2885.0])
dm = np.array([3240.0])
ds0 = np.array([2425.0])#T07
ds1 = np.array([2835.0])#T07
#dc = dcc
# coordinate defining the horizontal boundaries of the continent-ocean boundary
COT = 350000.0
# array defining the crust density along the profile (continental vs oceanic crust)
dc = np.zeros_like(yc)
aux = yc <= COT
for i in range(len(yc[aux])):
dc[i] = dcc
for i in range(len(yc[aux]),n):
dc[i] = doc
# defining sediments layers density matrix
ds = np.vstack((np.reshape(np.repeat(ds0,n),(1,n)),np.reshape(np.repeat(ds1,n),(1,n))))
# S0 => isostatic compensation surface (Airy's model)
# SR = S0+dS0 => reference Moho (Forward modeling)
S0 = np.array([41000.0])
gobs = np.reshape(np.loadtxt('../data/pelotas-profile-gz.txt'),(n,1))
bathymetry = np.reshape(np.loadtxt('../data/etopo1-pelotas.txt'),(n,1))
tw = 0.0 - bathymetry
toi = np.reshape(np.loadtxt('../data/pelotas-profile-interpreted-toi-surface.txt'),(n,1))
interpreted_basement = np.reshape(np.loadtxt('../data/pelotas-profile-interpreted-basement-surface.txt'),(n,1))
interpreted_moho = np.reshape(np.loadtxt('../data/pelotas-profile-interpreted-moho-surface.txt'),(n,1))
# reference moho surface (SR = S0+dS0)
dS0 = np.array([2200.0])
# 1st layer sediments thickness
ts0 = toi - tw
# 2nd layer sediments thickness
ts1 = interpreted_basement - toi
# thickness sediments vector
ts = np.vstack((np.reshape(ts0,(1,n)),np.reshape(ts1,(1,n))))
# layer mantle thickness
tm = S0 - interpreted_moho
# pelotas profile parameters vector
p_interp = np.vstack((ts1, tm, dS0))
# initial guess basement surface
ini_basement = np.reshape(np.loadtxt('../data/pelotas-profile-initial-basement-surface.txt'),(n,1))
# initial guess moho surface
ini_moho = np.reshape(np.loadtxt('../data/pelotas-profile-initial-moho-surface.txt'),(n,1))
# initial guess reference moho surface (SR = S0+dS0)
ini_dS0 = np.array([1000.0])
ini_RM = S0 + ini_dS0
# Known values: basement and moho surfaces
base_known = np.loadtxt('../data/pelotas-profile-basement-known-depths.txt')
#base_known = np.loadtxt('../data/pelotas-profile-basement-more-known-depths.txt')
#base_known_new = np.loadtxt('../data/pelotas-profile-basement-new-known-depths.txt')
#base_known = np.loadtxt('../data/pelotas-profile-basement-few-more-known-depths.txt')
#base_known_new = np.loadtxt('../data/pelotas-profile-basement-few-new-known-depths.txt')
#base_known_old = np.loadtxt('../data/pelotas-profile-basement-known-depths.txt')
moho_known = np.loadtxt('../data/pelotas-profile-moho-known-depths.txt')
#g0 = np.reshape(np.loadtxt('../data/pelotas-profile-initial-guess-gravity-data.txt'),(n,1))
g0 = np.reshape(np.loadtxt('../data/pelotas-profile-initial-guess-gravity-data-T07.txt'),(n,1))
p = np.reshape(np.loadtxt('../data/pelotas-profile-parameter-vector-alphas_-10(2)_-8(1)_-7(2)_-7(1)_-6(2)-T07.txt'),(2*n+1,1))
g = np.reshape(np.loadtxt('../data/pelotas-profile-predicted-gravity-data-alphas_-10(2)_-8(1)_-7(2)_-7(1)_-6(2)-T07.txt'),(n,1))
g0 = g.copy()
ini_basement = tw + ts0 + p[0:n]
ini_moho = S0 - p[n:n+n]
p = np.reshape(np.loadtxt('../data/pelotas-profile-parameter-vector-alphas_-10(2)_-8(1)_-7(2)_-7(1)_-6(2)-sgm_17-T07.txt'),(2*n+1,1))
g = np.reshape(np.loadtxt('../data/pelotas-profile-predicted-gravity-data-alphas_-10(2)_-8(1)_-7(2)_-7(1)_-6(2)-sgm_17-T07.txt'),(n,1))
# Inversion results
RM = S0 + p[n+n]
basement = tw + ts0 + p[0:n]
moho = S0 - p[n:n+n]
sgm_interp = 9.81*(10**(-6))*(dw*tw + ds0*ts0 + ds1*ts1 + dc*(S0-tw-ts0-ts1-tm)+dm*tm)
sgm = 9.81*(10**(-6))*(dw*tw + ds0*ts0 + ds1*p[0:n] + dc*(S0-tw-ts0-p[0:n]-p[n:n+n])+dm*p[n:n+n])
polygons_water = []
for (yi, twi) in zip(yc, tw):
y1 = yi - 0.5*dy
y2 = yi + 0.5*dy
polygons_water.append(Polygon(np.array([[y1, y2, y2, y1],
[0.0, 0.0, twi, twi]]).T,
props={'density': dw - dcc}))
polygons_sediments0 = []
for (yi, twi, s0i) in zip(yc, np.reshape(tw,(n,)), np.reshape(toi,(n,))):
y1 = yi - 0.5*dy
y2 = yi + 0.5*dy
polygons_sediments0.append(Polygon(np.array([[y1, y2, y2, y1],
[twi, twi, s0i, s0i]]).T,
props={'density': ds0 - dcc}))
polygons_sediments1 = []
for (yi, s0i, s1i) in zip(yc, np.reshape(toi,(n,)), np.reshape(basement,(n,))):
y1 = yi - 0.5*dy
y2 = yi + 0.5*dy
polygons_sediments1.append(Polygon(np.array([[y1, y2, y2, y1],
[s0i, s0i, s1i, s1i]]).T,
props={'density': ds1 - dcc}))
polygons_crust = []
for (yi, si, Si, dci) in zip(yc, np.reshape(basement,(n,)), np.reshape(moho,(n,)), dc):
y1 = yi - 0.5*dy
y2 = yi + 0.5*dy
polygons_crust.append(Polygon(np.array([[y1, y2, y2, y1],
[si, si, Si, Si]]).T,
props={'density': dci - dcc}))
polygons_mantle = []
for (yi, Si) in zip(yc, np.reshape(moho,(n,))):
y1 = yi - 0.5*dy
y2 = yi + 0.5*dy
polygons_mantle.append(Polygon(np.array([[y1, y2, y2, y1],
[Si, Si, (S0+p[n+n]), (S0+p[n+n])]]).T,
props={'density': dm - dcc}))
%matplotlib inline
plt.close('all')
fig = plt.figure(figsize=(12,16))
import matplotlib.gridspec as gridspec
heights = [8, 8, 8, 1]
gs = gridspec.GridSpec(4, 1, height_ratios=heights)
ax1 = plt.subplot(gs[0])
ax2 = plt.subplot(gs[1])
ax3 = plt.subplot(gs[2])
ax4 = plt.subplot(gs[3])
ax1.axhline(y=0.0, xmin=ymin, xmax=ymax, color='k', linestyle='--', linewidth=1)
ax1.plot(0.001*yc, gobs, 'or', mfc='none', markersize=8, label='observed data')
ax1.plot(0.001*yc, g0, '-b', linewidth=2, label='initial guess data')
ax1.plot(0.001*yc, g, '-g', linewidth=2, label='predicted data')
ax1.set_xlim(0.001*ymin, 0.001*ymax)
ax1.set_ylabel('gravity disturbance (mGal)', fontsize=18)
ax1.set_xticklabels(['%g'% (l) for l in ax1.get_xticks()], fontsize=16)
ax1.set_yticklabels(['%g'% (l) for l in ax1.get_yticks()], fontsize=16)
ax1.legend(loc='best', fontsize=16, facecolor='silver')
#ax2.plot(0.001*yc, sgm_interp, 'or', mfc='none', markersize=8, label='preliminary interpretation lithostatic stress')
ax2.plot(0.001*yc, sgm, '-g', linewidth=2, label='predicted lithostatic stress')
ax2.set_xlim(0.001*ymin, 0.001*ymax)
ax2.set_ylim(1120.,1210.)
ax2.set_ylabel('lithostatic stress (MPa)', fontsize=18)
ax2.set_xticklabels(['%g'% (l) for l in ax2.get_xticks()], fontsize=16)
ax2.set_yticklabels(['%g'% (l) for l in ax2.get_yticks()], fontsize=16)
ax2.legend(loc='best', fontsize=16, facecolor='silver')
ax3.axhline(y=0.0, xmin=ymin, xmax=ymax, color='k', linestyle='-', linewidth=1)
aux = yc <= COT
for (pwi) in (polygons_water):
tmpx = [x for x in pwi.x]
tmpx.append(pwi.x[0])
tmpy = [y for y in pwi.y]
tmpy.append(pwi.y[0])
ax3.plot(tmpx, tmpy, linestyle='None')
ax3.fill(tmpx, tmpy, color='lightskyblue')
for (ps0i) in (polygons_sediments0):
tmpx = [x for x in ps0i.x]
tmpx.append(ps0i.x[0])
tmpy = [y for y in ps0i.y]
tmpy.append(ps0i.y[0])
ax3.plot(tmpx, tmpy, linestyle='None')
ax3.fill(tmpx, tmpy, color='tan')
for (ps1i) in (polygons_sediments1):
tmpx = [x for x in ps1i.x]
tmpx.append(ps1i.x[0])
tmpy = [y for y in ps1i.y]
tmpy.append(ps1i.y[0])
ax3.plot(tmpx, tmpy, linestyle='None')
ax3.fill(tmpx, tmpy, color='rosybrown')
for (pci) in (polygons_crust[:len(yc[aux])]):
tmpx = [x for x in pci.x]
tmpx.append(pci.x[0])
tmpy = [y for y in pci.y]
tmpy.append(pci.y[0])
ax3.plot(tmpx, tmpy, linestyle='None')
ax3.fill(tmpx, tmpy, color='orange')
for (pcoi) in (polygons_crust[len(yc[aux]):n]):
tmpx = [x for x in pcoi.x]
tmpx.append(pcoi.x[0])
tmpy = [y for y in pcoi.y]
tmpy.append(pcoi.y[0])
ax3.plot(tmpx, tmpy, linestyle='None')
ax3.fill(tmpx, tmpy, color='olive')
for (pmi) in (polygons_mantle):
tmpx = [x for x in pmi.x]
tmpx.append(pmi.x[0])
tmpy = [y for y in pmi.y]
tmpy.append(pmi.y[0])
ax3.plot(tmpx, tmpy, linestyle='None')
ax3.fill(tmpx, tmpy, color='pink')
ax3.plot(yc, tw, '-k', linewidth=3)
ax3.plot(yc, toi, '-k', linewidth=3)
ax3.plot(yc, interpreted_basement, '-k', linewidth=3, label='previous interpretation surfaces')
ax3.plot(yc, interpreted_moho, '-k', linewidth=3)
ax3.plot(yc, ini_basement, '-.b', linewidth=3, label='initial guess surfaces')
ax3.plot(yc, ini_moho, '-.b', linewidth=3)
ax3.plot(yc, basement, '--w', linewidth=3, label='estimated surfaces')
ax3.plot(yc, moho, '--w', linewidth=3)
ax3.axhline(y=S0+dS0, xmin=ymin, xmax=ymax, color='k', linestyle='-', linewidth=3)
ax3.axhline(y=S0+ini_dS0, xmin=ymin, xmax=ymax, color='b', linestyle='-.', linewidth=3)
ax3.axhline(y=S0+p[n+n], xmin=ymin, xmax=ymax, color='w', linestyle='--', linewidth=3)
ax3.plot(base_known[:,0], base_known[:,1], 'v', color = 'yellow', markersize=15, label='known depths (basement)')
#ax3.plot(base_known_old[:,0], base_known_old[:,1], 'v', color = 'yellow', markersize=15, label='known depths (basement)')
#ax3.plot(base_known_new[:,0], base_known_new[:,1], 'v', color = 'magenta', markersize=15, label='more known depths (basement)')
ax3.plot(moho_known[:,0], moho_known[:,1], 'D', color = 'lime', markersize=15, label='known depths (moho)')
#ax3.set_ylim((S0+p[n+n]), zmin)
ax3.set_ylim((50500.0), zmin)
ax3.set_xlim(ymin, ymax)
ax3.set_xlabel('y (km)', fontsize=18)
ax3.set_ylabel('z (km)', fontsize=18)
ax3.set_xticklabels(['%g'% (0.001*l) for l in ax3.get_xticks()], fontsize=16)
ax3.set_yticklabels(['%g'% (0.001*l) for l in ax3.get_yticks()], fontsize=16)
ax3.legend(loc='lower right', fontsize=16, facecolor='silver')
X, Y = fig.get_dpi()*fig.get_size_inches()
plt.title('Density contrast (kg/m$^{3}$)', fontsize=20)
ax4.axis('off')
layers_list1 = ['water', 'sediment', 'SDR', 'continental', 'oceanic', 'mantle']
layers_list2 = ['', '', '', 'crust', 'crust', '']
colors_list = ['lightskyblue', 'tan', 'rosybrown', 'orange', 'olive', 'pink']
#density_list = ['-1840', '-520', '-15', '0', '15', '370']#original
#density_list = ['-1840', '-270', '-170', '0', '15', '370']#T08
density_list = ['-1840', '-445', '-35', '0', '15', '370']#T07
#density_list = ['-1840', '-445', '-120', '0', '15', '370']#T06
#density_list = ['-1840', '-420', '-170', '0', '15', '370']#T05
ncols = len(colors_list)
nrows = 1
h = Y / nrows
w = X / (ncols + 1)
i=ncols-1
for color, density, layers1, layers2 in zip(colors_list, density_list, layers_list1, layers_list2):
col = i // nrows
row = i % nrows
x = X - (col*w) - w
yi_line = Y
yf_line = Y - Y*0.15
yi_text1 = Y - Y*0.2
yi_text2 = Y - Y*0.28
yi_text3 = Y - Y*0.08
i-=1
poly = Polygon(np.array([[x, x+w*0.75, x+w*0.75, x], [yi_line, yi_line, yf_line, yf_line]]).T)
tmpx = [x for x in poly.x]
tmpx.append(poly.x[0])
tmpy = [y for y in poly.y]
tmpy.append(poly.y[0])
ax4.plot(tmpx, tmpy, linestyle='-', color='k', linewidth=1)
ax4.fill(tmpx, tmpy, color=color)
ax4.text(x+w*0.375, yi_text1, layers1, fontsize=(w*0.16), horizontalalignment='center', verticalalignment='top')
ax4.text(x+w*0.375, yi_text2, layers2, fontsize=(w*0.16), horizontalalignment='center', verticalalignment='top')
ax4.text(x+w*0.375, yi_text3, density, color = 'k', fontsize=(w*0.16), horizontalalignment='center', verticalalignment='center')
plt.tight_layout()
#plt.savefig('../manuscript/figures/pelotas-profile-grafics-estimated-model-alphas_2_1_2_1_2-sgm_19-dpi300.png', dpi=300, bbox_inches='tight')
plt.savefig('../manuscript/figures/pelotas-profile-grafics-estimated-model-alphas_2_1_2_1_2-sgm_17-T07-dpi300.png', dpi=300, bbox_inches='tight')
plt.show()
| 0.486819 | 0.864825 |
# Data Dive 2: Loading and Summarizing Data
### Overtime: Scraping the Web for Unique Data
#### [Web scraping](https://en.wikipedia.org/wiki/Web_scraping) is the process of extracting data from html code on the internet.
Resources on web scraping:
* [Digital Ocean Tutorial](https://www.digitalocean.com/community/tutorials/how-to-scrape-web-pages-with-beautiful-soup-and-python-3) (requests, Beautiful Soup)
* [DataCamp Tutorial](https://www.datacamp.com/community/tutorials/web-scraping-using-python) (urllib, Beautiful Soup)
* [Hitchhiker's Guide to Python](https://docs.python-guide.org/scenarios/scrape/) (requests, lxml)
**Important Note**: This is for demonstration purposes only, and only harvests content from individual pages. When building a scraper to harvest large amounts of data from multiple pages, be mindful of [legal](https://www.fastcompany.com/40456140/bots-are-scraping-your-public-data-for-cash-amid-murky-laws-and-ethics-linkedin-hiq) and [ethical](https://towardsdatascience.com/ethics-in-web-scraping-b96b18136f01) issues in web scraping.
## Today's Exercise
Say we want to learn more about where Google's offices are located. Helpfully, they provide a list of all of their campuses globally at [google.com/about/locations](https://www.google.com/about/locations). However, copying this list by hand for data analysis would be tedious and time-consuming. Let's take a look at how web scraping can make this process easier.
First, let's import all of the packages we'll need for today's exercise. There are a wide variety of packages that can be helpful, but today we'll be using *requests* and *Beautiful Soup* to pull the contents of these websites.
```
import re
import pandas as pd
from bs4 import BeautifulSoup as soup
import requests
```
First, we use the *requests* package to get the content of the google site we'd like to extract office location information from:
```
url = 'https://www.google.com/about/locations/'
site_source = requests.get(url)
```
This is going to give us an enormous amount of content - everything we would get if we looked directly at the source code in the browser.
```
print(site_source.text)
```
#### We could parse this ourselves, but fortunately scraping packages make this much easier
We'll use Beautiful Soup's built-in functionality to extract info on the individual offices.
First, we parse the full site content.
```
site_content = soup(site_source.content, "html.parser")
type(site_content)
```
Next, we pick out the office elements
```
offices = site_content.select(".office-info")
len(offices)
site_content.select(".office-info")
```
Now that we've isolated the office elements, let's extract the location name and address for each.
```
countries = []
for o in offices:
office = o.select(".office-name")[0].string.strip()
address = o.select(".office-address")[0].string.strip()
address_list = re.split(r'\n|\,', address)
country = address_list[-1].strip()
if country not in countries:
countries.append(country)
print(country)
print()
countries
```
#### If we look carefully at our extracted elements, we'll see we have some issues:
1. All elements appear twice.
2. The zip codes - which we're interested in - are part of broader strings.
These are trivial to handle, we'll just need to pass over the data carefully to handle both.
```
us_offices = []
for o in offices:
office = o.select(".office-name")[0].string.strip()
address = o.select(".office-address")[0].string.strip()
is_US = re.search(r'(United States)', address)
if is_US:
print(office)
zip_code = re.search(r'(\d{5})', address)
if zip_code:
print(zip_code.group())
if [office, zip_code.group()] not in us_offices:
us_offices.append([office, zip_code.group()])
print()
```
Now that we've extracted a list of offices and zip codes, we can load them into a data frame.
```
office_df = pd.DataFrame(us_offices, columns=['Office', 'Zip Code'])
office_df
```
## Exercise: In What Countries Does Google Maintain Offices?
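One possible sketch for this exercise, simply reusing the `countries` list collected above:
```
# Minimal answer sketch: how many distinct countries, and which ones.
print(len(countries), "countries")
for c in sorted(countries):
    print(c)
```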
## Adding County Information
The Department of Housing and Urban Development makes a *crosswalk* of zip codes to counties available [here](https://www.huduser.gov/portal/datasets/usps_crosswalk.html). We can load these into pandas and clean them up to find the county for each office.
```
zip_df = pd.read_csv('https://grantmlong.com/data/ZIP_COUNTY_122016.csv')
zip_df.head(5)
```
#### A good rule of thumb: if two numbers cannot be added together to produce a logical result, they should be stored as strings.
We can recast the zip and county ids as strings - with leading zeros - to make this dataframe easier to handle. To do this we can use the [.astype()](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.astype.html) and [.zfill()](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.zfill.html) methods.
```
zip_df['Zip Code'] = zip_df['ZIP'].astype(str).str.zfill(5)
zip_df['County Number'] = zip_df['COUNTY'].astype(str).str.zfill(5)
zip_df.sort_values(by='COUNTY').head(5)
```
Of course we don't need all of these columns, but we do need to attach the ***County Number*** column to our Google office data in order to learn more about the data. The [.merge()](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html) method allows us to do this easily in one line.
```
office_df = office_df.merge(zip_df[['Zip Code', 'County Number']],
how='left')
office_df.sort_values(by='Office')
office_df.shape
office_df.sort_values(by='County Number')
```
### Merge Office Data with Census Data
First, we'll need to load the data extract we produced in the first data dive. We'll also need to make sure that the *County Number* - the variable we need to join our data on - is appropriately formatted as a string.
```
census_df = pd.read_csv('https://grantmlong.com/data/census_counties_backup.csv')
census_df['County Number'] = census_df['County Number'].astype(str).str.zfill(5)
census_df.head(5)
```
Next, we'll use the [.merge()](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html) method to attach the two data sets.
```
full_df = census_df.merge(office_df, how='left')
full_df.loc[full_df['Office'].notnull(), ].head(5)
```
#### With our full data set, we can now begin to look at some interesting numbers, like the median rent in counties where Google has an office and where it doesn't.
```
print(full_df['Median Rent'].mean())
print(full_df.loc[full_df['Office'].notnull(), 'Median Rent'].mean())
print(full_df.loc[full_df['Office'].isnull(), 'Median Rent'].mean())
(full_df.loc[full_df['Office'].notnull(),]
.sort_values(by='Median Rent',
ascending=False)
.head(50))
```
## Exercise: What Other Interesting Findings Can We Identify?
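As one possible starting point (using only columns we already built above in `office_df`), we could ask which counties host more than one Google office:
```
# Sketch: counties with the most distinct Google offices.
(office_df.groupby('County Number')['Office']
 .nunique()
 .sort_values(ascending=False)
 .head(10))
```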
|
github_jupyter
|
import re
import pandas as pd
from bs4 import BeautifulSoup as soup
import requests
url = 'https://www.google.com/about/locations/'
site_source = requests.get(url)
print(site_source.text)
site_content = soup(site_source.content, "html.parser")
type(site_content)
offices = site_content.select(".office-info")
len(offices)
site_content.select(".office-info")
countries = []
for o in offices:
office = o.select(".office-name")[0].string.strip()
address = o.select(".office-address")[0].string.strip()
address_list = re.split(r'\n|\,', address)
country = address_list[-1].strip()
if country not in countries:
countries.append(country)
print(country)
print()
countries
us_offices = []
for o in offices:
office = o.select(".office-name")[0].string.strip()
address = o.select(".office-address")[0].string.strip()
is_US = re.search(r'(United States)', address)
if is_US:
print(office)
zip_code = re.search(r'(\d{5})', address)
if zip_code:
print(zip_code.group())
if [office, zip_code.group()] not in us_offices:
us_offices.append([office, zip_code.group()])
print()
office_df = pd.DataFrame(us_offices, columns=['Office', 'Zip Code'])
office_df
zip_df = pd.read_csv('https://grantmlong.com/data/ZIP_COUNTY_122016.csv')
zip_df.head(5)
zip_df['Zip Code'] = zip_df['ZIP'].astype(str).str.zfill(5)
zip_df['County Number'] = zip_df['COUNTY'].astype(str).str.zfill(5)
zip_df.sort_values(by='COUNTY').head(5)
office_df = office_df.merge(zip_df[['Zip Code', 'County Number']],
how='left')
office_df.sort_values(by='Office')
office_df.shape
office_df.sort_values(by='County Number')
census_df = pd.read_csv('https://grantmlong.com/data/census_counties_backup.csv')
census_df['County Number'] = census_df['County Number'].astype(str).str.zfill(5)
census_df.head(5)
full_df = census_df.merge(office_df, how='left')
full_df.loc[full_df['Office'].notnull(), ].head(5)
print(full_df['Median Rent'].mean())
print(full_df.loc[full_df['Office'].notnull(), 'Median Rent'].mean())
print(full_df.loc[full_df['Office'].isnull(), 'Median Rent'].mean())
(full_df.loc[full_df['Office'].notnull(),]
.sort_values(by='Median Rent',
ascending=False)
.head(50))
| 0.158207 | 0.978302 |
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/ImageCollection/landsat_filtering.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/ImageCollection/landsat_filtering.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/ImageCollection/landsat_filtering.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as geemap
except:
import geemap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
```
Map = geemap.Map(center=[40,-100], zoom=4)
Map
```
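As a quick illustration of the basemap note above, here is a minimal sketch; the `'HYBRID'` name is assumed to be one of the basemap keys bundled with geemap, so adjust it if your version differs.
```
# Add an extra basemap layer by name (assumes 'HYBRID' is available in this geemap version)
Map.add_basemap('HYBRID')
```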
## Add Earth Engine Python script
```
# Add Earth Engine dataset
collection = ee.ImageCollection('LANDSAT/LC8_L1T_TOA')
path = collection.filter(ee.Filter.eq('WRS_PATH', 44))
row = path.filter(ee.Filter.eq('WRS_ROW', 34))
images = row.filterDate('2016-01-01', '2016-12-31')
print(images.size().getInfo())
# images.map(lambda image: image.getInfo())
lng = -122.3578
lat = 37.7726
median = images.median()
Map.setCenter(lng, lat, 12)
vis = {'bands': ['B5', 'B4', 'B3'], 'max': 0.3}
Map.addLayer(median, vis)
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
```
# Dependencies and Setup
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from scipy import stats
# Hide warning messages in notebook
import warnings
warnings.filterwarnings('ignore')
# File to Load (Remember to Change These)
mouse_data_path= 'data/mouse_drug_data.csv'
clinical_data_path = 'data/clinicaltrial_data.csv'
# Read the Mouse and Drug Data and the Clinical Trial Data
mouse_drug_df = pd.read_csv(mouse_data_path)
clinical_trial_df = pd.read_csv(clinical_data_path)
# Combine the data into a single dataset
mouse_clinical_combine = pd.merge(clinical_trial_df, mouse_drug_df,how='outer', on="Mouse ID")
# Display the data table for preview
mouse_clinical_combine.head()
```
## Tumor Response to Treatment
```
# Store the Mean Tumor Volume Data Grouped by Drug and Timepoint
combine_group_mean = mouse_clinical_combine.groupby(["Drug","Timepoint"]).mean()
combine_group_mean.reset_index(level = None, inplace = True)
# Convert to DataFrame
tumor_response_mean_df = pd.DataFrame(combine_group_mean)
# Preview DataFrame
tumor_response_mean_df.head()
# Store the Standard Error of Tumor Volumes Grouped by Drug and Timepoint
combine_group_sem = mouse_clinical_combine.groupby(["Drug","Timepoint"]).sem()
combine_group_sem.reset_index(level = None, inplace = True)
# Convert to DataFrame
tumor_response_sem_tumorvol_df = pd.DataFrame(combine_group_sem)
# Preview DataFrame
tumor_response_sem_tumorvol_df.head()
# Minor Data Munging to Re-Format the Data Frames
pivot_table = tumor_response_mean_df.pivot(index ="Timepoint", columns = 'Drug', values = "Tumor Volume (mm3)")
pivot_table.reset_index(level = None, inplace = True)
# Preview that Reformatting worked
pivot_table.head()
table_fourdrugs = pivot_table[["Timepoint", "Capomulin", "Infubinol", "Ketapril", "Placebo"]]
table_fourdrugs.head()
# Generate the Plot (with Error Bars)
plt.figure(figsize = (10,5))
#fig=table_fourdrugs.plot(kind='scatter', x='Timepoint',y='Capomulin', linestyle='--', color='red', marker='o',yerr = Capomulin_error);
plt.errorbar(x=table_fourdrugs['Timepoint'],y=table_fourdrugs['Capomulin'], yerr=None, linestyle="--", fmt='o')
plt.errorbar(x=table_fourdrugs['Timepoint'],y=table_fourdrugs['Infubinol'], yerr=None, linestyle="--", fmt='o')
plt.errorbar(x=table_fourdrugs['Timepoint'],y=table_fourdrugs['Ketapril'], yerr=None, linestyle="--",fmt='o')
plt.errorbar(x=table_fourdrugs['Timepoint'],y=table_fourdrugs['Placebo'], yerr=None, linestyle="--", fmt='o')
plt.ylabel('Tumor Volume(mm3)')
plt.xlabel('Time (Days)')
plt.title('Tumor Response to Treatment')
plt.grid()
plt.legend(table_fourdrugs)
#Save plot
plt.savefig('TumorResponse.png')
plt.show()
```

## Metastatic Response to Treatment
```
# Store the Mean Met. Site Data Grouped by Drug and Timepoint
combine_group_mean_met= mouse_clinical_combine.groupby(["Drug","Timepoint"]).mean()
# Convert to DataFrame
met_response_mean_df = pd.DataFrame(combine_group_mean_met["Metastatic Sites"])
# Preview DataFrame
met_response_mean_df.head()
# Store the Standard Error associated with Met. Sites Grouped by Drug and Timepoint
combine_group_met_sem = mouse_clinical_combine.groupby(["Drug","Timepoint"]).sem()
# Convert to DataFrame
met_response_sem_df = pd.DataFrame(combine_group_met_sem["Metastatic Sites"])
# Preview DataFrame
met_response_sem_df.head()
# Minor Data Munging to Re-Format the Data Frames
met_response_mean_df.reset_index(level = None, inplace = True)
met_response_mean_df2 = pd.DataFrame(combine_group_mean)
pivot_table_met = met_response_mean_df2.pivot(index ="Timepoint", columns = 'Drug', values = "Metastatic Sites")
pivot_table_met.reset_index(level = None, inplace = True)
# Preview that Reformatting worked
pivot_table_met.head()
met_table_fourdrugs = pivot_table_met[["Timepoint","Capomulin", "Infubinol", "Ketapril", "Placebo"]]
met_table_fourdrugs.head()
# Generate the Plot (with Error Bars)
plt.figure(figsize = (10, 5))
plt.errorbar(x=met_table_fourdrugs['Timepoint'],y=met_table_fourdrugs['Capomulin'], yerr=None, linestyle="--", fmt='o')
plt.errorbar(x=met_table_fourdrugs['Timepoint'],y=met_table_fourdrugs['Infubinol'], yerr=None, linestyle="--", fmt='o')
plt.errorbar(x=met_table_fourdrugs['Timepoint'],y=met_table_fourdrugs['Ketapril'], yerr=None, linestyle="--",fmt='o')
plt.errorbar(x=met_table_fourdrugs['Timepoint'],y=met_table_fourdrugs['Placebo'], yerr=None, linestyle="--", fmt='o')
plt.ylabel("Met Sites")
plt.xlabel('Time (Days)')
plt.title('Metastatic Response to Treatment')
plt.grid()
plt.legend(met_table_fourdrugs)
# Save the Figure
plt.savefig("MetSiteResponse.png")
plt.show()
```

## Survival Rates
```
# Store the Count of Mice Grouped by Drug and Timepoint (We can pass any metric)
micecount=mouse_clinical_combine.groupby(["Drug","Timepoint"]).count()
# Convert to DataFrame
micecount_df=pd.DataFrame(micecount["Mouse ID"])
micecount_df.reset_index(inplace=True)
#Display dataframe
micecount_df.head()
# Minor Data Munging to Re-Format the Data Frames
pivot_table_mice = micecount_df.pivot(index ="Timepoint", columns = 'Drug', values = "Mouse ID")
pivot_table_mice.reset_index(level = None, inplace = True)
# Preview the Data Frame
pivot_table_mice.head()
mice_table_fourdrugs = pivot_table_mice[["Timepoint", "Capomulin", "Infubinol", "Ketapril", "Placebo"]]
mice_table_fourdrugs.head()
#Calculations for the survival rate
survival_fourdrugs_df = mice_table_fourdrugs.astype(float)
survival_fourdrugs_df["Capomulin_percent"]=survival_fourdrugs_df["Capomulin"]/survival_fourdrugs_df["Capomulin"].iloc[0] * 100
survival_fourdrugs_df["Infubinol_percent"]=survival_fourdrugs_df["Infubinol"]/survival_fourdrugs_df["Infubinol"].iloc[0] * 100
survival_fourdrugs_df["Ketapril_percent"]=survival_fourdrugs_df["Ketapril"]/survival_fourdrugs_df["Ketapril"].iloc[0] * 100
survival_fourdrugs_df["Placebo_percent"]=survival_fourdrugs_df["Placebo"]/survival_fourdrugs_df["Placebo"].iloc[0] * 100
survival_fourdrugs_df
# Generate the Plot (Accounting for percentages)
plt.figure(figsize = (10, 5))
plt.errorbar(x=survival_fourdrugs_df ['Timepoint'],y=survival_fourdrugs_df['Capomulin_percent'], linestyle="--", fmt='o')
plt.errorbar(x=survival_fourdrugs_df['Timepoint'],y=survival_fourdrugs_df['Infubinol_percent'], linestyle="--", fmt='o')
plt.errorbar(x=survival_fourdrugs_df['Timepoint'],y=survival_fourdrugs_df['Ketapril_percent'], linestyle="--",fmt='o')
plt.errorbar(x=survival_fourdrugs_df['Timepoint'],y=survival_fourdrugs_df['Placebo_percent'], linestyle="--", fmt='o')
plt.ylabel("Survival Rate (%)")
plt.xlabel('Time (Days)')
plt.title('Survival During Treatment')
plt.grid()
plt.legend(mice_table_fourdrugs)
# Save the Figure
plt.savefig("SurvivalResponse.png")
plt.show()
```

## Summary Bar Graph
```
# Calculate the percent for Capomulin drug
Capomulin_percent=(table_fourdrugs["Capomulin"].iloc[9]-table_fourdrugs["Capomulin"].iloc[0])/table_fourdrugs["Capomulin"].iloc[0]*100
# Display the data to confirm
Capomulin_percent
# Calculate the percent changes for Infubinol drug
Infubinol_percent=(table_fourdrugs["Infubinol"].iloc[9]-table_fourdrugs["Infubinol"].iloc[0])/table_fourdrugs["Infubinol"].iloc[0]*100
# Display the data to confirm
Infubinol_percent
# Calculate the percent changes for Ketapril drug
Ketapril_percent=(table_fourdrugs["Ketapril"].iloc[9]-table_fourdrugs["Ketapril"].iloc[0])/table_fourdrugs["Ketapril"].iloc[0]*100
# Display the data to confirm
Ketapril_percent
# Calculate the percent changes for Placebo drug
Placebo_percent=(table_fourdrugs["Placebo"].iloc[9]-table_fourdrugs["Placebo"].iloc[0])/table_fourdrugs["Placebo"].iloc[0]*100
# Display the data to confirm
Placebo_percent
# Store all relevant percent changes in a dictionary and convert to a Series
percent_tuple = {'Capomulin': Capomulin_percent, 'Infubinol': Infubinol_percent, 'Ketapril': Ketapril_percent, 'Placebo': Placebo_percent}
percentchange_tumorvolume = pd.Series(percent_tuple)
percentchange_tumorvolume
```
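The four per-drug calculations above follow the same pattern, so a vectorized sketch can compute the percent change for every drug at once. This assumes the `pivot_table` DataFrame built earlier (a `Timepoint` column plus one column per drug) and is an alternative, not part of the original assignment scaffold.
```
# Percent change between the first and last timepoint for every drug in one step
drug_cols = pivot_table.columns.drop('Timepoint')
percent_change_all = ((pivot_table[drug_cols].iloc[-1] - pivot_table[drug_cols].iloc[0])
                      / pivot_table[drug_cols].iloc[0] * 100)
percent_change_all
```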

```
#Index the 4 drugs
testdrugs=percentchange_tumorvolume.keys()
testdrugs
summary_bar = plt.subplot()
x_axis = np.arange(0, len(testdrugs))
# Splice the data between passing and failing drugs
# Orient widths. Add labels, tick marks, etc.
tick_locations = []
for x in x_axis:
tick_locations.append(x + 0.5)
plt.xticks(tick_locations, testdrugs)
colors = []
for value in percentchange_tumorvolume:
if value >= 0 :
colors.append('r')
else:
colors.append('g')
#Plot
percent_change = summary_bar.bar(x_axis, percentchange_tumorvolume, color=colors, align="edge")
plt.title("Tumor Change Over 45 Days Treatment")
plt.ylabel("% Tumor Volume Change")
plt.xlim(-0.25, len(testdrugs))
plt.ylim(-30, max(percentchange_tumorvolume) + 20)
plt.grid()
# Save the Figure
plt.savefig("MeanTumorChange.png")
plt.show()
```
```
(ql:quickload '(:alexandria :png :iterate :flexi-streams :lparallel :cl-cpus))
(setf lparallel:*kernel* (lparallel:make-kernel (cpus:get-number-of-processors)))
(defun julia-count (z c &key (max 255))
(do ((i 0 (1+ i))
(p z (+ (expt p 2) c)))
((or (= i max)
(> (abs p) 2))
i)))
(defclass julia-widget (jupyter-widgets:grid-box)
((image
:reader julia-image
:initform (make-instance 'jupyter-widgets:image
:width 640 :height 640
:layout (make-instance 'jupyter-widgets:layout :grid-area "image")))
(frame
:reader julia-frame
:initform (png:make-image 640 640 1 8))
(x
:reader julia-x
:initform (make-instance 'jupyter-widgets:float-text
:value 0 :description "x"
:layout (make-instance 'jupyter-widgets:layout :grid-area "x")))
(y
:reader julia-y
:initform (make-instance 'jupyter-widgets:float-text
:value 0 :description "y"
:layout (make-instance 'jupyter-widgets:layout :grid-area "y")))
(size
:reader julia-size
:initform (make-instance 'jupyter-widgets:float-text
:value 4 :description "size"
:layout (make-instance 'jupyter-widgets:layout :grid-area "size")))
(ca
:reader julia-ca
:initform (make-instance 'jupyter-widgets:float-text
:value -0.8 :step 0.001 :description "ca"
:layout (make-instance 'jupyter-widgets:layout :grid-area "ca")))
(cb
:reader julia-cb
:initform (make-instance 'jupyter-widgets:float-text
:value 0.156 :step 0.001 :description "cb"
:layout (make-instance 'jupyter-widgets:layout :grid-area "cb")))
(progress
:reader julia-progress
:initform (make-instance 'jupyter-widgets:int-progress
:description "Progress"
:max 640
:layout (make-instance 'jupyter-widgets:layout :grid-area "progress")))
(task-channel
:reader julia-task-channel
:initform (lparallel:make-channel)))
(:metaclass jupyter-widgets:trait-metaclass)
(:default-initargs
:layout (make-instance 'jupyter-widgets:layout
:grid-gap ".25em"
:grid-template-columns "1fr min-content"
:grid-template-rows "min-content min-content min-content min-content min-content min-content 1fr"
:grid-template-areas "\"image x\" \"image y\" \"image size\" \"image ca\" \"image cb\" \"image progress\" \"image .\"")))
(defun calculate-row (frame y c width height xmin xmax ymin ymax)
(let ((zy (+ ymin (* (coerce (/ y height) 'float) (- ymax ymin)))))
(dotimes (x width)
(setf (aref frame y x 0)
(- 255 (julia-count (complex (+ xmin (* (coerce (/ x width) 'float) (- xmax xmin))) zy) c))))))
(defun update (instance)
(bordeaux-threads:make-thread
(lambda ()
(with-slots (task-channel image frame x y size ca cb progress) instance
(let* ((c (complex (jupyter-widgets:widget-value ca)
(jupyter-widgets:widget-value cb)))
(x-value (jupyter-widgets:widget-value x))
(y-value (jupyter-widgets:widget-value y))
(size-value (jupyter-widgets:widget-value size))
(width (jupyter-widgets:widget-width image))
(height (jupyter-widgets:widget-height image))
(xmin (- x-value (/ size-value 2)))
(xmax (+ x-value (/ size-value 2)))
(ymin (- y-value (/ size-value 2)))
(ymax (+ y-value (/ size-value 2))))
(dotimes (y height)
(lparallel:submit-task task-channel #'calculate-row frame y c width height xmin xmax ymin ymax))
(dotimes (y height)
(lparallel:receive-result task-channel)
(setf (jupyter-widgets:widget-value progress) y))
(setf (jupyter-widgets:widget-value image)
(flexi-streams:with-output-to-sequence (o)
(png:encode frame o))))))))
(defmethod initialize-instance :after ((instance julia-widget) &rest initargs &key &allow-other-keys)
(declare (ignore initargs))
(jupyter-widgets:observe (julia-x instance) :value
(lambda (inst type name old-value new-value source)
(declare (ignore inst type name old-value new-value source))
(update instance)))
(jupyter-widgets:observe (julia-y instance) :value
(lambda (inst type name old-value new-value source)
(declare (ignore inst type name old-value new-value source))
(update instance)))
(jupyter-widgets:observe (julia-size instance) :value
(lambda (inst type name old-value new-value source)
(declare (ignore inst type name old-value new-value source))
(update instance)))
(jupyter-widgets:observe (julia-ca instance) :value
(lambda (inst type name old-value new-value source)
(declare (ignore inst type name old-value new-value source))
(update instance)))
(jupyter-widgets:observe (julia-cb instance) :value
(lambda (inst type name old-value new-value source)
(declare (ignore inst type name old-value new-value source))
(update instance)))
(setf (jupyter-widgets:widget-children instance)
(list (julia-image instance)
(julia-x instance)
(julia-y instance)
(julia-size instance)
(julia-ca instance)
(julia-cb instance)
(julia-progress instance)))
(update instance))
(make-instance 'julia-widget)
```
# Binary Trees
```
from graphviz import Digraph
import IPython
class Tree:
def __init__(self, val, left=None, right=None):
self.val = val
self.left = left
self.right = right
def depth_first_preorder(tree):
if tree is not None:
print(tree.val)
depth_first_preorder(tree.left)
depth_first_preorder(tree.right)
def depth_first_inorder(tree):
if tree is not None:
depth_first_inorder(tree.left)
print(tree.val)
depth_first_inorder(tree.right)
def depth_first_postorder(tree):
if tree is not None:
depth_first_postorder(tree.left)
depth_first_postorder(tree.right)
print(tree.val)
def _tree_to_dot(tree, g):
if tree.left is not None:
g.edge(str(tree.val), str(tree.left.val))
_tree_to_dot(tree.left, g)
if tree.right is not None:
g.edge(str(tree.val), str(tree.right.val))
_tree_to_dot(tree.right, g)
def tree_to_dot(tree):
g = Digraph()
if tree is not None:
_tree_to_dot(tree, g)
return g
tree = Tree(1, Tree(2, Tree(3), Tree(4)), Tree(5, Tree(6), Tree(7)))
depth_first_preorder(tree)
print()
depth_first_inorder(tree)
print()
depth_first_postorder(tree)
tree_to_dot(tree)
def breadth_first(tree):
if tree is not None:
q = [tree]
while len(q) > 0:
cur = q.pop(0)
print(cur.val, end=' ')
if cur.left is not None:
q.append(cur.left)
if cur.right is not None:
q.append(cur.right)
print()
def breadth_first_with_level(tree):
if tree is not None:
q = [tree, None]
while len(q) > 0:
cur = q.pop(0)
if cur is None:
print()
if len(q) > 0:
q.append(None)
continue
print(cur.val, end=' ')
if cur.left is not None:
q.append(cur.left)
if cur.right is not None:
q.append(cur.right)
breadth_first(tree)
print()
breadth_first_with_level(tree)
```
## Binary Search Tree
**BST**: a binary tree with an ordering invariant on node values.
For any node `N`:
* all values in the sub-tree `N.left` are smaller than `N`'s value
* all values in the sub-tree `N.right` are bigger than `N`'s value
```
def bst_search(n, x):
if n is None:
return False
if x == n.val:
return True
if x < n.val:
return bst_search(n.left, x)
return bst_search(n.right, x)
def bst_search_iter(n, x):
while n is not None:
if x == n.val:
return True
if x < n.val:
n = n.left
else:
n = n.right
return False
def is_bst(tree, lower=float('-inf'), upper=float('inf')):
    # An empty tree is a valid BST
    if tree is None:
        return True
    if tree.val < lower or tree.val > upper:
        return False
    return is_bst(tree.left, lower, tree.val) and is_bst(tree.right, tree.val, upper)
```
### Complexity?
#### Ideal Case
In the ideal case, search costs $O(log~n)$.
What's the ideal case?
**Why?**
What's the breadth-first order of such a tree?
Do you see the relation with binary search on a sorted array?
```
tree1 = Tree(4, Tree(2, Tree(1), Tree(3)), Tree(6, Tree(5), Tree(7)))
IPython.display.display(tree_to_dot(tree1))
tree2 = Tree(2, Tree(1), Tree(4, Tree(3), Tree(6, Tree(5), Tree(7))))
IPython.display.display(tree_to_dot(tree2))
```
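One way to see the relation with binary search on a sorted array: an in-order traversal of the balanced BST above visits the values in sorted order, and the root is exactly the middle element a binary search would probe first. A quick check with the traversal defined earlier:
```
# Prints 1 through 7, one per line, in sorted order
depth_first_inorder(tree1)
```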
### Well balanced trees
* For any node, the number of nodes in its left subtree is roughly the same as the number of nodes in its right subtree
* This makes search optimal, as we split the search space uniformly
In case of perfect balance, we have the following recursive formula for the worst case:
$$T(n) = T\left(\frac{n}{2}\right) + O(1)$$
If we apply the master theorem here, we end up with complexity $O(log~n)$ as expected.
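For intuition, we can also unroll the recurrence directly (writing $c$ for the constant hidden in the $O(1)$ term):
$$T(n) = T\left(\frac{n}{2}\right) + c = T\left(\frac{n}{4}\right) + 2c = \dots = T(1) + c \log_2 n = O(\log n)$$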
In practice, trees are usually balanced on their height; there are two main forms of such trees:
* AVL: each node stores the heights (or the difference of heights) of its subtrees. The operations keep the difference of heights in the range $[-1,1]$.
* Red/Black: simulates 2-4 trees with a binary tree structure extended with a color flag.
Both techniques use rotations to rebalance trees before it's too late.
```
def left_rotate(tree):
new_root = tree.right
tree.right = new_root.left
new_root.left = tree
return new_root
print('Before left rotation:')
T = Tree(2, Tree(1), Tree(4, Tree(3), Tree(6, Tree(5), Tree(7))))
IPython.display.display(tree_to_dot(T))
T = left_rotate(T)
print('After:')
IPython.display.display(tree_to_dot(T))
```
### AVL
An AVL is a height-balanced binary search tree. Balance is maintained by keeping extra information in each node about its current _unbalancing_ state: usually we store the difference of heights between the left and right subtrees. The absolute value indicates how unbalanced the node is, and the sign tells which sub-tree is deeper than the other.
AVL operations maintain this difference in the range $[-1, 1]$; any time a node leaves this range, we apply a rotation to restore the balance.
We have two simple rotations (left and right) and two double rotations. The double rotations are needed when applying a simple rotation would just invert the unbalancing, as in the next example.
```
UBTree = Tree(2, Tree(1), Tree(6, Tree(4, Tree(3), Tree(5)), Tree(7)))
IPython.display.display(tree_to_dot(UBTree))
UBTree = left_rotate(UBTree)
IPython.display.display(tree_to_dot(UBTree))
```
In the previous example, we can see that node `2` has a balance factor (the famous difference) of $-2$ (its left sub-tree has a height of $0$ and its right a height of $2$); this is usually solved with a left rotation, but if we look at the root of the right sub-tree (the `6`), its balance factor is $1$. This change of sign indicates the necessity of the double rotation.
The rotations are used during insertion or deletion of a node in the tree. Those operations enforce the AVL properties, so we can assume that if during the process we find an unbalanced node, the rest of the sub-tree is correctly balanced (otherwise it would have triggered rebalancing earlier). The conditions to apply rotations are as follows:
* If the node's balance factor is either $-2$ or $2$
* For $-2$:
  * if the balance factor of the right child is $0$ or $-1$, apply a **left rotation**
  * if the balance factor of the right child is $1$, apply a **right rotation on the right child, then a left rotation on the node**
* The case for $2$ is the exact mirror image, with signs and rotation directions flipped
#### Inserting
The insertion operation is a recursive algorithm which maintains balance on the path back up (from the modified leaf to the root, i.e. when coming back from the recursion). The key idea is to return to the calling parent the change of height (either 0 or 1) so that the parent can update its balance factor and apply rotations accordingly.
It is interesting to do a case study of what happens to the height after an insertion in one of the sub-trees (an insertion that changes the height of this sub-tree):
* If we insert the new node in the right sub-tree:
  * If the balance factor of the node is $1$: the change _rebalances_ the node and we no longer have a change of height
  * If the factor is $0$: we have a new factor of $-1$ and we gain a level
  * If the factor is $-1$: we have a new factor of $-2$ and we need a rotation
    * we follow the previous rules to choose the appropriate rotation
We can build the insertion in the left sub-tree symmetrically.
If we apply the same form of case study to the impact of the rotation, we can note that the change of height due to the insertion is **always** absorbed by the rotation; thus, during the insertion of a node, at most one rotation is needed to maintain the balance of the tree. We can also observe that after a rotation in those cases, the balance factor of the node is $0$, so after insertion and a possible rotation, the height has changed only if the updated balance factor is different from $0$.
And the complexity? Since the original tree is an AVL and is thus balanced, we know that the length of the path from the root to the deepest node where the insertion takes place is in $O(log~N)$, and thus the time complexity is $O(log~N)$.
```
insert x in AVL tree T:
Pre-condition: T is a valid AVL
if T is empty:
replace T with a new node containing x
return 1
else:
if x == T.key: x is present, nothing else to do return 0
if x < T.key:
insert x in T.left
if recursive call returns 0, then return 0
else:
add 1 to balance factor
if unbalance choose and apply rotation
return 1 if balance factor is not 0
else:
insert x in T.right
if recursive call returns 0, then return 0
else:
add -1 to balance factor
if unbalance choose and apply rotation
return 1 if balance factor is not 0
```
```
class AVL:
def __init__(self, key):
self.key = key
self.left = None
self.right = None
self.balance = 0
def name(self):
return "{} ({})".format(self.key, self.balance)
def _tree_to_dot_AVL(tree, g):
if tree.left == tree.right:
return
if tree.left is not None:
g.edge(tree.name(), tree.left.name())
_tree_to_dot_AVL(tree.left, g)
else:
g.node("empty" + tree.name(), style="invis")
g.edge(tree.name(), "empty" + tree.name(), style="invis")
if tree.right is not None:
g.edge(tree.name(), tree.right.name())
_tree_to_dot_AVL(tree.right, g)
else:
g.node("empty" + tree.name(), style="invis")
g.edge(tree.name(), "empty" + tree.name(), style="invis")
def tree_to_dot_AVL(tree):
g = Digraph()
if tree is not None:
g.node(tree.name())
_tree_to_dot_AVL(tree, g)
return g
def _lr(tree):
new_root = tree.right
tree.right = new_root.left
new_root.left = tree
return new_root
def left_rotation(tree):
new_root = _lr(tree)
new_root.left.balance = -1 - new_root.balance
new_root.balance = - new_root.left.balance
return new_root
def _rr(tree):
new_root = tree.left
tree.left = new_root.right
new_root.right = tree
return new_root
def right_rotation(tree):
new_root = _rr(tree)
new_root.right.balance = 1 - new_root.balance
new_root.balance = - new_root.right.balance
return new_root
def right_left_rotation(tree):
tree.right = _rr(tree.right)
new_root = _lr(tree)
new_root.left.balance = (new_root.balance * (new_root.balance - 1)) // 2
new_root.right.balance = - (new_root.balance * (new_root.balance + 1)) // 2
new_root.balance = 0
return new_root
def left_right_rotation(tree):
tree.left = _lr(tree.left)
new_root = _rr(tree)
new_root.left.balance = (new_root.balance * (new_root.balance - 1)) // 2
new_root.right.balance = - (new_root.balance * (new_root.balance + 1)) // 2
new_root.balance = 0
return new_root
def AVL_insert(x, T):
if T is None:
return AVL(x), 1
if x == T.key:
return T, 0
if x < T.key:
T.left, r = AVL_insert(x, T.left)
if r == 0:
return T, 0
T.balance += 1
if T.balance == 2:
if T.left.balance == -1:
T = left_right_rotation(T)
else:
T = right_rotation(T)
else:
T.right, r = AVL_insert(x, T.right)
if r == 0:
return T, 0
T.balance -= 1
if T.balance == -2:
if T.right.balance == 1:
T = right_left_rotation(T)
else:
T = left_rotation(T)
return T, (T.balance != 0)
t = None
from random import sample
for i in sample(range(1, 100), k=7):
print("insert", i, "in the tree")
t, _ = AVL_insert(i, t)
IPython.display.display(tree_to_dot_AVL(t))
```
# aitextgen — Train a GPT-2 (or GPT Neo) Text-Generating Model w/ GPU
by [Max Woolf](https://minimaxir.com)
*Last updated: May 16th, 2021 (aitextgen v0.5.2)*
Retrain an advanced text generating neural network on any text dataset **for free on a GPU using Colaboratory** using `aitextgen`!
For more about `aitextgen`, you can visit [this GitHub repository](https://github.com/minimaxir/aitextgen) or [read the documentation](https://docs.aitextgen.io/).
To get started:
1. Copy this notebook to your Google Drive to keep it and save your changes. (File -> Save a Copy in Drive)
2. Run the cells below:
```
!pip install -q aitextgen
import aitextgen
import datetime
import gc
import logging
import os
import requests
import torch
session_url = 'http://172.28.0.2:9000/api/sessions'
notebook_name = requests.get(session_url).json()[0]['name']
run_datetime = datetime.datetime.now().strftime('%Y-%m-%d-%H-%M-%S')
run_id = notebook_name + '_run_' + run_datetime
log_format = '%(asctime)s — %(levelname)s — %(name)s — %(message)s'
date_format = '%d/%m/%Y %H:%M:%S'
log_level = logging.DEBUG
logging.basicConfig(format=log_format, datefmt=date_format, level=log_level)
```
## GPU
Colaboratory uses an Nvidia P4, an Nvidia T4, an Nvidia P100, or an Nvidia V100. For finetuning GPT-2 124M, any of these GPUs will be fine, but for text generation, a T4 or a P100 is ideal since they have more VRAM. **If you receive a T4 or a V100 GPU, you can enable `fp16=True` during training for faster/more memory-efficient training.**
You can verify which GPU is active by running the cell below. If you want to try for a different GPU, go to **Runtime -> Factory Reset Runtime**.
```
!nvidia-smi
```
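If you prefer checking from Python instead of reading the `nvidia-smi` output, here is a minimal sketch using the `torch` import from the setup cell:
```
# Reports the name of the assigned CUDA device (e.g. a T4 or P100, depending on the runtime)
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
else:
    print('No GPU assigned to this runtime')
```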
## Mounting Google Drive
The best way to get input text to-be-trained into the Colaboratory VM, and to get the trained model *out* of Colaboratory, is to route it through Google Drive *first*.
Running this cell (which will only work in Colaboratory) will mount your personal Google Drive in the VM, which later cells can use to get data in/out. (it will ask for an auth code; that auth is not saved anywhere)
```
aitextgen.colab.mount_gdrive()
gdrive_rootdir = '/content/drive/My Drive'
```
## Load a Trained Model
If you already have a trained model from this notebook, running the next cell will copy the `pytorch_model.bin` and the `config.json` file from the specified folder in Google Drive into the Colaboratory VM. (If no `from_folder` is specified, it assumes the two files are located at the root level of your Google Drive)
```
#load_model = None
load_model = 'aitextgen-CCS-124M-articles-7200_run_2022-05-13-15-52-04'
if load_model is not None:
model_load_dir = gdrive_rootdir + '/aitextgen/models/' + load_model
ai = aitextgen.aitextgen(model_folder=model_load_dir, to_gpu=True)
```
The cell above loads the retrained model plus the metadata necessary to generate text.
## Loading GPT-2 or GPT Neo
If you're retraining a model on new text, you need to download and load the GPT-2 model into the GPU.
There are several sizes of GPT-2:
* `124M` (default): the "small" model, 500MB on disk.
* `355M`: the "medium" model, 1.5GB on disk.
* `774M`: the "large" model, 3GB on disk.
You can also finetune a GPT Neo model instead, which is more suitable for longer texts and the base model has more recent data:
* `125M`: Analogous to the GPT-2 124M model.
* `350M`: Analogous to the GPT-2 355M model.
The next cell downloads the model and saves it in the Colaboratory VM. If the model has already been downloaded, running this cell will reload it.
```
model='124M'
#model='355M'
#model='774M'
#model='gpt-neo-125M'
#model='gpt-neo-350M'
if load_model is None:
if model == '124M' or model == '355M' or model == '774M':
ai = aitextgen.aitextgen(tf_gpt2=model, to_gpu=True)
else:
ai = aitextgen.aitextgen(model='EleutherAI/' + model, to_gpu=True)
```
## Uploading a Text File to be Trained to Colaboratory
In the Colaboratory Notebook sidebar on the left of the screen, select *Files*. From there you can upload files:

Upload **any smaller text file** (for example, [a text file of Shakespeare plays](https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt)) and update the file name in the cell below, then run the cell.
```
data_rootdir = gdrive_rootdir + '/aitextgen/training_data'
datasets = ['meta_reports_combined']
dataset_splits = [1]
dataset_iterations = [500]
file_basename = 'dataset_cache'
file_ext = '.tar.gz'
from_cache = True
```
If your text file is large (>10MB), it is recommended to upload that file to Google Drive first, then copy that file from Google Drive to the Colaboratory VM.
Additionally, you may want to consider [compressing the dataset to a cache first](https://docs.aitextgen.io/dataset/) on your local computer, then uploading the resulting `dataset_cache.tar.gz` and setting the `file_name` in the previous cell to that.
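For reference, here is a minimal local sketch of building such a cache, based on the aitextgen dataset documentation; the input file name is just a placeholder.
```
from aitextgen.TokenDataset import TokenDataset

# Tokenizes the text once and writes a reusable dataset_cache.tar.gz (file name is an example)
TokenDataset('my_training_text.txt', line_by_line=False, save_cache=True)
```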
## Finetune GPT-2
The next cell will start the actual finetuning of GPT-2 in aitextgen. It runs for `num_steps`, and a progress bar will appear to show training progress, current loss (the lower the better the model), and average loss (to give a sense on loss trajectory).
The model will be saved every `save_every` steps in `trained_model` by default, and when training completes. If you mounted your Google Drive, the model will _also_ be saved there in a unique folder.
The training might time out after 4ish hours; if you did not mount to Google Drive, make sure you end training and save the results so you don't lose them! (if this happens frequently, you may want to consider using [Colab Pro](https://colab.research.google.com/signup))
Important parameters for `train()`:
- **`line_by_line`**: Set this to `True` if the input text file is a single-column CSV, with one record per row. aitextgen will automatically process it optimally.
- **`from_cache`**: If you compressed your dataset locally (as noted in the previous section) and are using that cache file, set this to `True`.
- **`num_steps`**: Number of steps to train the model for.
- **`generate_every`**: Interval of steps to generate example text from the model; good for qualitatively validating training.
- **`save_every`**: Interval of steps to save the model: the model will be saved in the VM to `/trained_model`.
- **`save_gdrive`**: Set this to `True` to copy the model to a unique folder in your Google Drive, if you have mounted it in the earlier cells
- **`fp16`**: Enables half-precision training for faster/more memory-efficient training. Only works on a T4 or V100 GPU.
Here are other important parameters for `train()` that are useful but you likely do not need to change.
- **`learning_rate`**: Learning rate of the model training.
- **`batch_size`**: Batch size of the model training; setting it too high will cause the GPU to go OOM. (if using `fp16`, you can increase the batch size more safely)
```
model_save_dir = gdrive_rootdir + '/aitextgen/models/' + run_id
save_every = 200
for i in range(len(datasets)):
file_basepath = data_rootdir + '/' + datasets[i] + '/' + file_basename
for j in range(dataset_splits[i]):
if dataset_splits[i] > 1:
current_file = file_basepath + '.' + str(j) + file_ext
else:
current_file = file_basepath + file_ext
ai.train(current_file,
line_by_line=False,
from_cache=from_cache,
num_steps=dataset_iterations[i],
generate_every=dataset_iterations[i],
save_every=save_every,
save_gdrive=False,
run_id=run_id,
output_dir=model_save_dir,
learning_rate=1e-3,
fp16=False,
batch_size=1)
# R.B.: required to prevent memory leaks in Colab
gc.collect()
```
You're done! Feel free to go to the **Generate Text From The Trained Model** section to generate text based on your retrained model.
## Generate Text From The Trained Model
After you've trained the model or loaded a retrained model from checkpoint, you can now generate text.
**If you just trained a model**, you'll get much faster generation performance if you reload it; the next cell will reload the model you just trained from its save folder.
```
if len(datasets) > 0:
ai = aitextgen.aitextgen(model_folder=model_save_dir, to_gpu=True)
```
`generate()` without any parameters generates a single text from the loaded model to the console.
If you're creating an API based on your model and need to pass the generated text elsewhere, you can do `text = ai.generate_one()`
You can also pass in a `prompt` to the generate function to force the text to start with a given character sequence and generate text from there (good if you add an indicator when the text starts).
You can also generate multiple texts at a time by specifying `n`. You can pass a `batch_size` to generate multiple samples in parallel, giving a massive speedup (in Colaboratory, set a maximum of 50 for `batch_size` to avoid going OOM).
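For example, a minimal interactive sketch (the prompt is taken from the list used later in this notebook, and the parameter values are illustrative):
```
# Generate 3 samples in parallel, all starting from the same prompt
ai.generate(n=3,
            batch_size=3,
            prompt='Digital Forensics Analysis Report\n',
            max_length=256,
            temperature=0.7)
```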
Other optional-but-helpful parameters for `ai.generate()` and friends:
* **`min_length`**: The minimum length of the generated text: if the text is shorter than this value after cleanup, aitextgen will generate another one.
* **`max_length`**: Number of tokens to generate (default 256, you can generate up to 1024 tokens with GPT-2 and 2048 with GPT Neo)
* **`temperature`**: The higher the temperature, the crazier the text (default 0.7, recommended to keep between 0.7 and 1.0)
* **`top_k`**: Limits the generated guesses to the top *k* guesses (default 0 which disables the behavior; if the generated output is super crazy, you may want to set `top_k=40`)
* **`top_p`**: Nucleus sampling: limits the generated guesses to a cumulative probability. (gets good results on a dataset with `top_p=0.9`)
For bulk generation, you can generate a large number of texts to files and sort through the samples locally on your computer. The next cell will generate one file per prompt, each with `num_outputs` texts and whatever other parameters you would pass to `generate()`. The files can then be downloaded from the Files sidebar!
You can rerun the cells as many times as you want for even more generated texts!
```
prompts = ['Digital Forensics Analysis Report\n',
'This report is ',
'The contents of ',
'Conclusion\n',
'It is recommended that ',
'In the opinion of the expert, ',
'File \'Exploit_Office\' contains ',
'File \'Exploit_Office\' does not contain ',
'Website \'Webmail SquirrelMail\' contains ',
'Website \'Webmail SquirrelMail\' does not contain ',
'Bill Due to past contains a link \'https://genom.mefst.hr/webmail/src/login.php\' to a website \'Webmail SquirrelMail\'.',
'New Dogecoin Crypto Sale contains a link \'http://webmail.forumofthemall.hr/mail/loging.php\' to a website \'Webmail SquirrelMail Popular Forum\'.',
'New OneCoin Crypto Sale contains a link \'http://',
'Note of eviction contains ',
'Note of eviction contains attachment ',
'Note of eviction contains attachment \'Exploit_Office\'. Attachment is quarantined on \'Mail server EP\'.',
'Log entry found: ',
'Log entry found: Firewall (Type: Firewall) ',
'Log entry found: Firewall (Type: Firewall) detected. [Allowed network traffic protocol ',
'Log entry found: Firewall (Type: Firewall) blocked. [Blocked network traffic protocol ',
'Log entry found: Firewall (Type: Firewall) detected. [Allowed network traffic protocol \'smtp:25\' from \'server74.aws.com\' to \'Mail server EP\'. Rule \'Internet_to_Mail_Server\'.]',
'Log entry found: Firewall (Type: Firewall) detected. [Allowed network traffic protocol \'https:443\' from \'Proxy server\' to \'server74.aws.com\'. Rule \'Proxy_to_Internet, https:443\'.]',
'Log entry found: Firewall (Type: Firewall) blocked. [Blocked network traffic protocol \'https:443\' from \'PCSZT03\' to \'Firewall TSO Enterprise\'.]',
'Log analysis on ',
'Log analysis on \'Firewall TSO Enterprise\' for period 1.1.2022. 0:00:00 - 4.2.2022. 13:50:44 finished. Report is ready.']
output_dir = gdrive_rootdir + '/aitextgen/outputs/' + run_id
output_basepath = output_dir + '/' + run_id + '_output'
output_ext = '.txt'
if not os.path.exists(output_dir):
os.makedirs(output_dir)
num_outputs = 5
max_length = 1000
temperature = 1.0
top_p = 0.9
for i in range(len(prompts)):
if len(prompts) > 1:
current_output = output_basepath + '.' + str(i) + output_ext
else:
current_output = output_basepath + output_ext
ai.generate_to_file(n=num_outputs,
batch_size=1,
prompt=prompts[i],
max_length=max_length,
temperature=temperature,
top_p=top_p,
destination_path=current_output)
```
# LICENSE
MIT License
Copyright (c) 2020-2021 Max Woolf
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
[this doc on github](https://github.com/dotnet/interactive/tree/master/samples/notebooks/csharp/Samples)
# .NET interactive report
Project report for the [.NET Interactive repo](https://github.com/dotnet/interactive)
## Setup
Importing packages and setting up the connection
```
#i "nuget:https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet5/nuget/v3/index.json"
#i "nuget:https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-tools/nuget/v3/index.json"
#r "nuget:NodaTime, 2.4.8"
#r "nuget:Octokit, 0.47.0"
#r "nuget: XPlot.Plotly.Interactive, 4.0.1"
using static Microsoft.DotNet.Interactive.Formatting.PocketViewTags;
using Microsoft.DotNet.Interactive.Formatting;
using Octokit;
using NodaTime;
using NodaTime.Extensions;
using XPlot.Plotly;
var organization = "dotnet";
var repositoryName = "interactive";
var options = new ApiOptions();
var gitHubClient = new GitHubClient(new ProductHeaderValue("notebook"));
```
[Generate a user token](https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line) to avoid the public [API](https://github.com/octokit/octokit.net/blob/master/docs/getting-started.md) throttling policies applied to anonymous users
```
var tokenAuth = new Credentials("your token");
gitHubClient.Credentials = tokenAuth;
var today = SystemClock.Instance.InUtc().GetCurrentDate();
var startOfTheMonth = today.With(DateAdjusters.StartOfMonth);
var startOfPreviousMonth = today.With(DateAdjusters.StartOfMonth) - Period.FromMonths(1);
var startOfTheYear = new LocalDate(today.Year, 1, 1).AtMidnight();
var currentYearIssuesRequest = new RepositoryIssueRequest {
State = ItemStateFilter.All,
Since = startOfTheYear.ToDateTimeUnspecified()
};
var pullRequestRequest = new PullRequestRequest {
State = ItemStateFilter.All
};
```
Perform GitHub queries
```
#!time
var branches = await gitHubClient.Repository.Branch.GetAll(organization, repositoryName);
var pullRequests = await gitHubClient.Repository.PullRequest.GetAllForRepository(organization, repositoryName, pullRequestRequest);
var forks = await gitHubClient.Repository.Forks.GetAll(organization, repositoryName);
var currentYearIssues = await gitHubClient.Issue.GetAllForRepository(organization, repositoryName, currentYearIssuesRequest);
```
Branch data
Pull request data
```
var pullRequestCreatedThisMonth = pullRequests.Where(pr => pr.CreatedAt > startOfTheMonth.ToDateTimeUnspecified());
var pullRequestClosedThisMonth =pullRequests.Where(pr => (pr.MergedAt != null && pr.MergedAt > startOfTheMonth.ToDateTimeUnspecified()));
var contributorsCount = pullRequestClosedThisMonth.GroupBy(pr => pr.User.Login);
var pullRequestLifespan = pullRequests.GroupBy(pr =>
{
var lifeSpan = (pr.ClosedAt ?? today.ToDateTimeUnspecified()) - pr.CreatedAt;
return Math.Max(0, Math.Ceiling(lifeSpan.TotalDays));
})
.Where(g => g.Key > 0)
.OrderBy(g => g.Key)
.ToDictionary(g => g.Key, g => g.Count());
```
Fork data
```
var forkCreatedThisMonth = forks.Where(fork => fork.CreatedAt >= startOfTheMonth.ToDateTimeUnspecified());
var forkCreatedPreviousMonth = forks.Where(fork => (fork.CreatedAt >= startOfPreviousMonth.ToDateTimeUnspecified()) && (fork.CreatedAt < startOfTheMonth.ToDateTimeUnspecified()));
var forkCreatedByMonth = forks.GroupBy(fork => new DateTime(fork.CreatedAt.Year, fork.CreatedAt.Month, 1));
var forkUpdateByMonth = forks.GroupBy(f => new DateTime(f.UpdatedAt.Year, f.UpdatedAt.Month, 1) ).Select(g => new {Date = g.Key, Count = g.Count()}).OrderBy(g => g.Date).ToArray();
var total = 0;
var forkCountByMonth = forkCreatedByMonth.OrderBy(g => g.Key).Select(g => new {Date = g.Key, Count = total += g.Count()}).ToArray();
```
Issues data
```
bool IsBug(Issue issue){
return issue.Labels.FirstOrDefault(l => l.Name == "bug")!= null;
}
bool TargetsArea(Issue issue){
return issue.Labels.FirstOrDefault(l => l.Name.StartsWith("Area-"))!= null;
}
string GetArea(Issue issue){
return issue.Labels.FirstOrDefault(l => l.Name.StartsWith("Area-"))?.Name;
}
var openIssues = currentYearIssues.Where(IsBug).Where(issue => issue.State == "open");
var closedIssues = currentYearIssues.Where(IsBug).Where(issue => issue.State == "closed");
var oldestIssues = openIssues.OrderBy(issue => today.ToDateTimeUnspecified() - issue.CreatedAt).Take(20);
var createdCurrentMonth = currentYearIssues.Where(IsBug).Where(issue => issue.CreatedAt >= startOfTheMonth.ToDateTimeUnspecified());
var createdPreviousMonth = currentYearIssues.Where(IsBug).Where(issue => (issue.CreatedAt >= startOfPreviousMonth.ToDateTimeUnspecified()) && (issue.CreatedAt < startOfTheMonth.ToDateTimeUnspecified()));
var openFromPreviousMonth = openIssues.Where(issue => (issue.CreatedAt > startOfPreviousMonth.ToDateTimeUnspecified()) && (issue.CreatedAt < startOfTheMonth.ToDateTimeUnspecified()));
var createdByMonth = currentYearIssues.Where(IsBug).GroupBy(issue => new DateTime(issue.CreatedAt.Year, issue.CreatedAt.Month, 1)).OrderBy(g=>g.Key).ToDictionary(g => g.Key, g => g.Count());
var closedByMonth = closedIssues.GroupBy(issue => new DateTime((int) issue.ClosedAt?.Year, (int) issue.ClosedAt?.Month, 1)).OrderBy(g=>g.Key).ToDictionary(g => g.Key, g => g.Count());
var openIssueAge = openIssues.GroupBy(issue => new DateTime(issue.CreatedAt.Year, issue.CreatedAt.Month, issue.CreatedAt.Day)).ToDictionary(g => g.Key, g => g.Max(issue =>Math.Max(0, Math.Ceiling( (today.ToDateTimeUnspecified() - issue.CreatedAt).TotalDays))));
var openByMonth = new Dictionary<DateTime, int>();
var minDate = createdByMonth.Min(g => g.Key);
var maxCreatedAtDate = createdByMonth.Max(g => g.Key);
var maxClosedAtDate = closedByMonth.Max(g => g.Key);
var maxDate = maxCreatedAtDate > maxClosedAtDate ?maxCreatedAtDate : maxClosedAtDate;
var cursor = minDate;
var runningTotal = 0;
var issuesCreatedThisMonthByArea = currentYearIssues.Where(issue => issue.CreatedAt >= startOfTheMonth.ToDateTimeUnspecified()).Where(issue => IsBug(issue) && TargetsArea(issue)).GroupBy(issue => GetArea(issue)).ToDictionary(g => g.Key, g => g.Count());
var openIssueByArea = currentYearIssues.Where(issue => issue.State == "open").Where(issue => IsBug(issue) && TargetsArea(issue)).GroupBy(issue => GetArea(issue)).ToDictionary(g => g.Key, g => g.Count());
while (cursor <= maxDate )
{
createdByMonth.TryGetValue(cursor, out var openCount);
closedByMonth.TryGetValue(cursor, out var closedCount);
runningTotal += (openCount - closedCount);
openByMonth[cursor] = runningTotal;
cursor = cursor.AddMonths(1);
}
var issueLifespan = currentYearIssues.Where(IsBug).GroupBy(issue =>
{
var lifeSpan = (issue.ClosedAt ?? today.ToDateTimeUnspecified()) - issue.CreatedAt;
return Math.Max(0, Math.Round(Math.Ceiling(lifeSpan.TotalDays),0));
})
.Where(g => g.Key > 0)
.OrderBy(g => g.Key)
.ToDictionary(g => g.Key, g => g.Count());
display(new {
less_then_one_sprint = issueLifespan.Where(i=> i.Key < 21).Select(i => i.Value).Sum(),
less_then_two_sprint = issueLifespan.Where(i=> i.Key >= 21 && i.Key < 42).Select(i => i.Value).Sum(),
more_then_two_sprint = issueLifespan.Where(i=> i.Key >= 42).Select(i => i.Value).Sum()
});
```
# Activity dashboard
```
var createdByMonthSeries = new Scattergl{
name = "Created",
x = createdByMonth.Select(g => g.Key),
y = createdByMonth.Select(g => g.Value),
};
var openByMonthSeries = new Scattergl{
name = "Open",
x = openByMonth.Select(g => g.Key),
y = openByMonth.Select(g => g.Value),
};
var closedByMonthSeries = new Scattergl{
name = "Closed",
x = closedByMonth.Select(g => g.Key),
y = closedByMonth.Select(g => g.Value),
};
var issueChart = Chart.Plot(new[] {createdByMonthSeries, closedByMonthSeries, openByMonthSeries});
issueChart.WithTitle("Bugs by month");
display(issueChart);
var issueLifespanOnWeekSeries = new Bar
{
name = "One week old",
y = issueLifespan.Where(issue => issue.Key < 7).OrderBy(issue => issue.Key).Select(issue => issue.Value),
x = issueLifespan.Where(issue => issue.Key < 7).OrderBy(issue => issue.Key).Select(issue => issue.Key) ,
marker = new Marker{
color = "green"
}
};
var issueLifespanOneSprintSeries = new Bar
{
name = "One Sprint old",
y = issueLifespan.Where(issue => issue.Key >= 7 && issue.Key < 21).OrderBy(issue => issue.Key).Select(issue => issue.Value),
x = issueLifespan.Where(issue => issue.Key >= 7 && issue.Key < 21).OrderBy(issue => issue.Key).Select(issue => issue.Key) ,
marker = new Marker{
color = "yellow"
}
};
var issueLifespanOldSeries = new Bar
{
name = "More then a Sprint",
y = issueLifespan.Where(issue => issue.Key >= 21).OrderBy(issue => issue.Key).Select(issue => issue.Value),
x = issueLifespan.Where(issue => issue.Key >= 21).OrderBy(issue => issue.Key).Select(issue => issue.Key) ,
marker = new Marker{
color = "red"
}
};
var issueLifespanChart = Chart.Plot(new[] {issueLifespanOnWeekSeries, issueLifespanOneSprintSeries, issueLifespanOldSeries});
issueLifespanChart.WithLayout(new Layout.Layout
{
title = "Bugs by life span",
xaxis = new Xaxis {
title = "Number of days a bug stays open",
showgrid = false,
zeroline = false
},
yaxis = new Yaxis {
showgrid = true,
zeroline = false
}
});
display(issueLifespanChart);
var openIssuesAgeSeriesWeek = new Bar
{
name = "Closed in a week",
y = openIssueAge.Where(issue => issue.Value < 7).OrderBy(issue => issue.Key).Select(issue => issue.Value),
x = openIssueAge.Where(issue => issue.Value < 7).OrderBy(issue => issue.Key).Select(issue => issue.Key) ,
marker = new Marker{
color = "green"
}
};
var openIssuesAgeSeriesSprint = new Bar
{
name = "Closed within a sprint",
y = openIssueAge.Where(issue => issue.Value >= 7 && issue.Value < 21).OrderBy(issue => issue.Key).Select(issue => issue.Value),
x = openIssueAge.Where(issue => issue.Value >= 7 && issue.Value < 21).OrderBy(issue => issue.Key).Select(issue => issue.Key) ,
marker = new Marker{
color = "yellow"
}
};
var openIssuesAgeSeriesLong = new Bar
{
name = "Long standing",
y = openIssueAge.Where(issue => issue.Value >= 21).OrderBy(issue => issue.Key).Select(issue => issue.Value),
x = openIssueAge.Where(issue => issue.Value >= 21).OrderBy(issue => issue.Key).Select(issue => issue.Key) ,
marker = new Marker{
color = "red"
}
};
var openIssuesAgeChart = Chart.Plot(new[] {openIssuesAgeSeriesWeek, openIssuesAgeSeriesSprint, openIssuesAgeSeriesLong});
openIssuesAgeChart.WithLayout(new Layout.Layout
{
title = "Open bugs age",
yaxis = new Yaxis {
title = "Number of days a bug stays open",
showgrid = true,
zeroline = false
}
});
display(openIssuesAgeChart);
var createdThisMonthAreaSeries = new Pie {
values = issuesCreatedThisMonthByArea.Select(e => e.Value),
labels = issuesCreatedThisMonthByArea.Select(e => e.Key),
};
var createdArea = Chart.Plot(new[] {createdThisMonthAreaSeries});
createdArea.WithLayout(new Layout.Layout
{
title = "Bugs created this month by Area",
});
display(createdArea);
var openAreaSeries = new Pie {
values = openIssueByArea.Select(e => e.Value),
labels = openIssueByArea.Select(e => e.Key),
};
var openArea = Chart.Plot(new[] {openAreaSeries});
openArea.WithLayout(new Layout.Layout
{
title = "Open bugs by Area",
});
display(openArea);
var prColors = pullRequestLifespan.OrderBy(pr => pr.Key).Select(pr => pr.Key < 7 ? "green" : pr.Key < 21 ? "yellow" : "red");
var prLifespanOneWeekSeries = new Bar
{
name = "One week",
y = pullRequestLifespan.Where(issue => issue.Key < 7).OrderBy(pr => pr.Key).Select(pr => pr.Value),
x = pullRequestLifespan.Where(issue => issue.Key < 7).OrderBy(pr => pr.Key).Select(pr => pr.Key) ,
marker = new Marker{
color = "green"
}
};
var prLifespanOneSprintSeries = new Bar
{
name = "One Sprint",
y = pullRequestLifespan.Where(issue => issue.Key >= 7 && issue.Key < 21).OrderBy(pr => pr.Key).Select(pr => pr.Value),
x = pullRequestLifespan.Where(issue => issue.Key >= 7 && issue.Key < 21).OrderBy(pr => pr.Key).Select(pr => pr.Key) ,
marker = new Marker{
color = "yellow"
}
};
var prLifespanMoreThanASprintSeries = new Bar
{
name = "More than a Sprint",
y = pullRequestLifespan.Where(issue => issue.Key >= 21).OrderBy(pr => pr.Key).Select(pr => pr.Value),
x = pullRequestLifespan.Where(issue => issue.Key >= 21).OrderBy(pr => pr.Key).Select(pr => pr.Key) ,
marker = new Marker{
color = "red"
}
};
var prLifespanChart = Chart.Plot(new[] {prLifespanOneWeekSeries, prLifespanOneSprintSeries, prLifespanMoreThanASprintSeries});
prLifespanChart.WithLayout(new Layout.Layout
{
title = "Pull Request by life span",
xaxis = new Xaxis {
title = "Number of days a PR stays open",
showgrid = false,
zeroline = false
},
yaxis = new Yaxis {
title = "Number of PR",
showgrid = true,
zeroline = false
}
});
display(prLifespanChart);
var forkCreationSeries = new Scattergl
{
name = "created by month",
y = forkCreatedByMonth.Select(g => g.Count() ).ToArray(),
x = forkCreatedByMonth.Select(g => g.Key ).ToArray()
};
var forkTotalSeries = new Scattergl
{
name = "running total",
y = forkCountByMonth.Select(g => g.Count ).ToArray(),
x = forkCountByMonth.Select(g => g.Date ).ToArray()
};
var forkUpdateSeries = new Scattergl
{
name = "last update by month",
y = forkUpdateByMonth.Select(g => g.Count ).ToArray(),
x = forkUpdateByMonth.Select(g => g.Date ).ToArray()
};
var chart = Chart.Plot(new[] {forkCreationSeries,forkTotalSeries,forkUpdateSeries});
chart.WithTitle("Fork activity");
display(chart);
```
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import sklearn
from scipy import stats, optimize
from sklearn.preprocessing import Imputer, StandardScaler, MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.linear_model import Ridge, LassoLars, BayesianRidge, ARDRegression, Lars
from sklearn.linear_model import RANSACRegressor, ElasticNet
from sklearn.linear_model import PassiveAggressiveRegressor, Perceptron
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectFromModel, SelectKBest, f_regression
from sklearn.linear_model import LassoCV, RidgeCV
from sklearn.svm import LinearSVR
from sklearn.base import clone
from itertools import combinations
from sklearn.metrics import explained_variance_score, r2_score, median_absolute_error, mean_squared_error
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import StratifiedKFold
from sklearn.feature_selection import RFECV
print('The scikit-learn version is {}.'.format(sklearn.__version__))
print('The pandas version is {}.'.format(pd.__version__))
print('The numpy version is {}.'.format(np.__version__))
goal_features = ['murders', 'murdPerPop', 'rapes', 'rapesPerPop', 'robberies','robbbPerPop',
'assaults', 'assaultPerPop', 'burglaries', 'burglPerPop', 'larcenies', 'larcPerPop',
'autoTheft', 'autoTheftPerPop', 'arsons', 'arsonsPerPop', 'violentPerPop', 'nonViolPerPop']
non_predictive_features = ['communityname', 'state', 'countyCode', 'communityCode', 'fold']
df = pd.read_csv('../datasets/UnnormalizedCrimeData.csv');
df = df.replace('?',np.NAN)
features = [x for x in df.columns if x not in goal_features and x not in non_predictive_features]
len(features)
def drop_rows_with_null_goal_feature(old_df, feature):
new_df = old_df.dropna(subset=[feature])
return new_df
```
# Distribution of output for each crime
Let us look at the distribution of each of the target crime counts. The idea is to identify and remove outliers so that the regression models achieve better accuracy.
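As a minimal sketch of the trimming idea used in the modelling cells below (the 98th-percentile cutoff mirrors the value used there; `murders` is just the example feature):
```
# Sketch: trim extreme outliers before fitting, mirroring the 0.98 quantile
# cutoff used in the modelling cells below.
murders_df = drop_rows_with_null_goal_feature(df, 'murders')
murders_df[['murders']] = murders_df[['murders']].apply(pd.to_numeric)
trimmed = murders_df[murders_df['murders'] <= murders_df['murders'].quantile(0.98)]
print(len(murders_df), len(trimmed))
```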
```
goal_feature = 'murders'
goal_df = drop_rows_with_null_goal_feature(df, goal_feature)
print(goal_df['murders'].describe())
# Create a figure instance
fig = plt.figure(1, figsize=(9, 6))
# Create an axes instance
ax = fig.add_subplot(111)
# Create the boxplot
bp = ax.boxplot(goal_df['murders'])
#plt.boxplot(goal_df['murders'], sym='k.', showfliers=True, showmeans=True, showcaps=True, showbox=True)
plt.show()
goal_feature = 'rapes'
goal_df = drop_rows_with_null_goal_feature(df, goal_feature)
goal_df[[goal_feature]] = goal_df[[goal_feature]].apply(pd.to_numeric)
print(goal_df[goal_feature].describe())
# Create a figure instance
fig = plt.figure(1, figsize=(9, 6))
# Create an axes instance
ax = fig.add_subplot(111)
# Create the boxplot
bp = ax.boxplot(goal_df[goal_feature])
#plt.boxplot(goal_df['murders'], sym='k.', showfliers=True, showmeans=True, showcaps=True, showbox=True)
plt.show()
goal_feature = 'robberies'
goal_df = drop_rows_with_null_goal_feature(df, goal_feature)
goal_df[[goal_feature]] = goal_df[[goal_feature]].apply(pd.to_numeric)
print(goal_df[goal_feature].describe())
# Create a figure instance
fig = plt.figure(1, figsize=(9, 6))
# Create an axes instance
ax = fig.add_subplot(111)
# Create the boxplot
bp = ax.boxplot(goal_df[goal_feature])
#plt.boxplot(goal_df['murders'], sym='k.', showfliers=True, showmeans=True, showcaps=True, showbox=True)
plt.show()
goal_feature = 'assaults'
goal_df = drop_rows_with_null_goal_feature(df, goal_feature)
goal_df[[goal_feature]] = goal_df[[goal_feature]].apply(pd.to_numeric)
print(goal_df[goal_feature].describe())
# Create a figure instance
fig = plt.figure(1, figsize=(9, 6))
# Create an axes instance
ax = fig.add_subplot(111)
# Create the boxplot
bp = ax.boxplot(goal_df[goal_feature])
#plt.boxplot(goal_df['murders'], sym='k.', showfliers=True, showmeans=True, showcaps=True, showbox=True)
plt.show()
goal_feature = 'burglaries'
goal_df = drop_rows_with_null_goal_feature(df, goal_feature)
goal_df[[goal_feature]] = goal_df[[goal_feature]].apply(pd.to_numeric)
print(goal_df[goal_feature].describe())
# Create a figure instance
fig = plt.figure(1, figsize=(9, 6))
# Create an axes instance
ax = fig.add_subplot(111)
# Create the boxplot
bp = ax.boxplot(goal_df[goal_feature])
#plt.boxplot(goal_df['murders'], sym='k.', showfliers=True, showmeans=True, showcaps=True, showbox=True)
plt.show()
clf = Pipeline([
('feature_selection', SelectKBest(k=96, score_func=f_regression)),
('regression', (Ridge()))
])
goal_feature = 'murders'
goal_df = drop_rows_with_null_goal_feature(df, goal_feature)
goal_df[[goal_feature]] = goal_df[[goal_feature]].apply(pd.to_numeric)
goal_df = goal_df[goal_df.murders <= goal_df.murders.quantile(.98)]
print(len(goal_df))
#print goal_df.describe()
imr = Imputer(missing_values='NaN', strategy='mean', axis=0)
imr = imr.fit(goal_df[features])
imputed_data = imr.transform(goal_df[features]);
df_X_train, df_X_test, df_y_train, df_y_test = \
train_test_split(imputed_data, goal_df[goal_feature], test_size=0.3)
mse_cv = cross_val_score(estimator = clf, X=df_X_train, y=df_y_train, scoring='neg_mean_squared_error')
r2_cv = cross_val_score(estimator=clf, X=df_X_train, y=df_y_train, scoring='r2')
print "Cross Validation Score MSE and R_2 are {0} and {1}".format(mse_cv.mean(), r2_cv.mean())
clf.fit(df_X_train, df_y_train)
mse_train = mean_squared_error(df_y_train, clf.predict(df_X_train))
r2_train = r2_score(df_y_train, clf.predict(df_X_train))
print "Training MSE error & R_2 SCore are {0} and {1} ".format(mse_train, r2_train)
mse = mean_squared_error(df_y_test, clf.predict(df_X_test))
r2_sc = r2_score(df_y_test, clf.predict(df_X_test))
print "Test MSE error & R_2 SCore are {0} and {1} ".format(mse, r2_sc)
clf = Pipeline([
('feature_selection', SelectKBest(k=100, score_func=f_regression)),
('regression', GradientBoostingRegressor())
])
goal_feature = 'rapes'
goal_df = drop_rows_with_null_goal_feature(df, goal_feature)
goal_df[[goal_feature]] = goal_df[[goal_feature]].apply(pd.to_numeric)
# NOTE: this filter trims on the 'murders' column (copied from the murders cell above), rather than on the current goal feature ('rapes').
goal_df = goal_df[goal_df.murders <= goal_df.murders.quantile(.98)]
print(len(goal_df))
#print goal_df.describe()
imr = Imputer(missing_values='NaN', strategy='mean', axis=0)
imr = imr.fit(goal_df[features])
imputed_data = imr.transform(goal_df[features]);
df_X_train, df_X_test, df_y_train, df_y_test = \
train_test_split(imputed_data, goal_df[goal_feature], test_size=0.3)
mse_cv = cross_val_score(estimator = clf, X=df_X_train, y=df_y_train, scoring='neg_mean_squared_error')
r2_cv = cross_val_score(estimator=clf, X=df_X_train, y=df_y_train, scoring='r2')
print "Cross Validation Score MSE and R_2 are {0} and {1}".format(mse_cv.mean(), r2_cv.mean())
clf.fit(df_X_train, df_y_train)
mse_train = mean_squared_error(df_y_train, clf.predict(df_X_train))
r2_train = r2_score(df_y_train, clf.predict(df_X_train))
print "Training MSE error & R_2 SCore are {0} and {1} ".format(mse_train, r2_train)
mse = mean_squared_error(df_y_test, clf.predict(df_X_test))
r2_sc = r2_score(df_y_test, clf.predict(df_X_test))
print "Test MSE error & R_2 SCore are {0} and {1} ".format(mse, r2_sc)
clf = Pipeline([
('feature_selection', SelectKBest(k=116, score_func=f_regression)),
('regression', LinearRegression())
])
goal_feature = 'assaults'
goal_df = drop_rows_with_null_goal_feature(df, goal_feature)
goal_df[[goal_feature]] = goal_df[[goal_feature]].apply(pd.to_numeric)
#goal_df = goal_df[goal_df.murders <= goal_df.murders.quantile(0.70)]
print(len(goal_df))
#print goal_df.describe()
imr = Imputer(missing_values='NaN', strategy='mean', axis=0)
imr = imr.fit(goal_df[features])
imputed_data = imr.transform(goal_df[features]);
df_X_train, df_X_test, df_y_train, df_y_test = \
train_test_split(imputed_data, goal_df[goal_feature], test_size=0.2)
mse_cv = cross_val_score(estimator = clf, X=df_X_train, y=df_y_train, scoring='neg_mean_squared_error')
r2_cv = cross_val_score(estimator=clf, X=df_X_train, y=df_y_train, scoring='r2')
print "Cross Validation Score MSE and R_2 are {0} and {1}".format(mse_cv.mean(), r2_cv.mean())
clf.fit(df_X_train, df_y_train)
mse_train = mean_squared_error(df_y_train, clf.predict(df_X_train))
r2_train = r2_score(df_y_train, clf.predict(df_X_train))
print(df_y_train)
print(clf.predict(df_X_train))
print("Training MSE error & R_2 Score are {0} and {1} ".format(mse_train, r2_train))
mse = mean_squared_error(df_y_test, clf.predict(df_X_test))
r2_sc = r2_score(df_y_test, clf.predict(df_X_test))
print("Test MSE error & R_2 Score are {0} and {1} ".format(mse, r2_sc))
clf.predict(df_X_test)
```
# BNN on Pynq
This notebook covers how to use Binary Neural Networks on Pynq.
It shows an example of image recognition with a binarized neural network inspired by VGG-16, featuring 6 convolutional layers, 3 max-pooling layers and 3 fully connected layers.
## 1. Instantiate a Classifier
Creating a classifier will automatically download the correct bitstream onto the device and load the weights trained on the specified dataset. By default there are three sets of weights to choose from - this example uses the StreetView house number set.
```
from pynq import Overlay
import cffi
ROOT_BNN = "/opt/python3.6/lib/python3.6/site-packages/bnn/"
ol = Overlay(ROOT_BNN+"/bitstreams/cnv-pynq-pynq.bit")
ol.download()
ffi = cffi.FFI()
ROOT_BNN = "/opt/python3.6/lib/python3.6/site-packages/bnn/"
ffi.cdef("""
void load_parameters(const char* path);
unsigned int inference(const char* path, unsigned int results[64], int number_class, float *usecPerImage);
unsigned int* inference_multiple(const char* path, int number_class, int *image_number, float *usecPerImage, unsigned int enable_detail);
void free_results(unsigned int * result);
void deinit();
"""
)
NN_hw=ffi.dlopen(ROOT_BNN+"/libraries/python_hw-cnv-pynq.so")
NN_sw=ffi.dlopen(ROOT_BNN+"/libraries/python_sw-cnv-pynq.so")
ol.bitstream.timestamp
```
## 2. Download the network parameters
The network parameters are loaded into the programmable-logic memory, using the weights trained on the Street View house number dataset.
```
NN_hw.load_parameters(bytes(ROOT_BNN+"/params/streetview", encoding='utf-8'))
```
## 3. Open the image to be classified
The image to be classified is loaded, shown and resized to meet the BNN requirements (scaled to 32x32 pixels).
```
from PIL import Image
import numpy as np
im = Image.open('/home/xilinx/jupyter_notebooks/bnn/6.png')
# We resize the downloaded image to be 32x32 pixels as expected from the BNN
im
im.thumbnail((32, 32), Image.ANTIALIAS)
background = Image.new('RGBA', (32, 32), (0, 0, 0, 0))
background.paste(
im, (int((32 - im.size[0]) / 2), int((32 - im.size[1]) / 2))
)
# We write the image into the format used in the Cifar-10 dataset for code compatibility
im = (np.array(background))
r = im[:,:,0].flatten()
g = im[:,:,1].flatten()
b = im[:,:,2].flatten()
label = [1]
out = np.array(list(label) + list(r) + list(g) + list(b),np.uint8)
out.tofile("/home/xilinx/out.bin")
background
```
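As a quick sanity check (a sketch only, not part of the original flow): the record written above follows the CIFAR-10 binary layout of 1 label byte followed by 1024 red, 1024 green and 1024 blue bytes, i.e. 3073 bytes in total.
```
# Sketch: verify the CIFAR-10-style record written above.
# 1 label byte + 3 * 32 * 32 channel bytes = 3073 bytes.
record = np.fromfile("/home/xilinx/out.bin", dtype=np.uint8)
assert record.size == 1 + 3 * 32 * 32
print("label byte:", record[0], "total bytes:", record.size)
```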
## 4. Loading the class descriptions
We load the class descriptions, which link the class index returned by the BNN to the actual meaning of that class.
```
import os
with open (os.path.join(ROOT_BNN+"/params/streetview/", "classes.txt")) as f:
classes = [c.strip() for c in f.readlines()]
```
## 5. Launching the BNN in hardware
The image is passed to the PL and inference is performed.
NOTE: To execute the next block, you will need to have installed wurlitzer (`pip3.6 install wurlitzer`).
```
from wurlitzer import pipes
with pipes() as (stdout, stderr):
class_out=NN_hw.inference(b"/home/xilinx/out.bin",ffi.NULL,10, ffi.NULL)
print(stdout.read())
print("Identified number: {0}".format(classes[class_out]))
```
## 6. Launching the BNN in software
Inference on the same image is performed in software on the ARM core.
```
NN_sw.load_parameters(bytes(ROOT_BNN+"/params/streetview", encoding='utf-8'))
with pipes() as (stdout, stderr):
class_out=NN_sw.inference(b"/home/xilinx/out.bin",ffi.NULL,10, ffi.NULL)
print(stdout.read())
print("Identified number: {0}".format(classes[class_out]))
```
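The `inference` signature declared earlier also takes a `float*` that receives the microseconds-per-image figure. Below is a hedged sketch (not part of the original notebook, and assuming the library fills this field when the results array is NULL) of how the hardware and software runtimes could be compared.
```
# Sketch: read back the time-per-image reported by each back end.
# Assumes both parameter sets are already loaded as in the cells above.
usec_hw = ffi.new("float *")
usec_sw = ffi.new("float *")
NN_hw.inference(b"/home/xilinx/out.bin", ffi.NULL, 10, usec_hw)
NN_sw.inference(b"/home/xilinx/out.bin", ffi.NULL, 10, usec_sw)
print("HW: {0:.1f} us/image, SW: {1:.1f} us/image".format(usec_hw[0], usec_sw[0]))
```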
```
!pip install pandas
import sympy as sym
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
sym.init_printing()
```
## Correlation
The correlation between the signals $f(t)$ and $g(t)$ is an operation that indicates how similar the two signals are to each other.
\begin{equation}
(f \; \circ \; g)(\tau) = h(\tau) = \int_{-\infty}^{\infty} f(t) \cdot g(t + \tau) \; dt
\end{equation}
Note that correlation and convolution have a similar structure.
\begin{equation}
f(t) * g(t) = \int_{-\infty}^{\infty} f(\tau) \cdot g(t - \tau) \; d\tau
\end{equation}
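To make that structural similarity concrete, here is a small discrete-time sketch added for illustration (the arrays `f` and `g` are made-up example values, not part of the original material): NumPy's `np.correlate` slides one signal over the other without time reversal, while `np.convolve` reverses the second signal first.
```python
import numpy as np

# Two short discrete example signals (hypothetical values)
f = np.array([1.0, 2.0, 3.0])
g = np.array([0.0, 1.0, 0.5])

# Correlation: g is slid across f without being flipped
corr = np.correlate(f, g, mode="full")

# Convolution: g is time-reversed before sliding
conv = np.convolve(f, g, mode="full")

print("correlation:", corr)
print("convolution:", conv)
# Convolution equals correlation with the second signal reversed
print(np.allclose(conv, np.correlate(f, g[::-1], mode="full")))
```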
## Periodic signals
The signal $y(t)$ is periodic if it satisfies the condition $y(t+nT)=y(t)$ for every integer $n$. In that case, $T$ is the period of the signal.

The sine signal is the purest oscillation that can be expressed mathematically. It arises when considering the projection of a uniform circular motion.
## Fourier series
If a set of pure oscillations is combined appropriately, as linear combinations of signals shifted and scaled in time and amplitude, any periodic signal could be recreated. This idea gives rise to the Fourier series.
\begin{equation}
y(t) = \sum_{n=0}^{\infty} C_n \cdot cos(n \omega_0 t - \phi_n)
\end{equation}
The signal $y(t)$ equals a combination of infinitely many cosine signals, each with an amplitude $C_n$, a frequency $n \omega_0$ and a phase shift $\phi_n$.
It can also be expressed as:
\begin{equation}
y(t) = \sum_{n=0}^{\infty} A_n \cdot cos(n \omega_0 t) + B_n \cdot sin(n \omega_0 t)
\end{equation}
The series is fully defined once the appropriate values of $A_n$ and $B_n$ are found for every value of $n$.
Note that:
- $A_n$ should be larger if $y(t)$ "looks" more like a cosine.
- $B_n$ should be larger if $y(t)$ "looks" more like a sine.
\begin{equation}
y(t) = \sum_{n=0}^{\infty} A_n \cdot cos(n \omega_0 t) + B_n \cdot sin(n \omega_0 t)
\end{equation}
\begin{equation}
(f \; \circ \; g)(\tau) = \int_{-\infty}^{\infty} f(t) \cdot g(t + \tau) \; dt
\end{equation}
\begin{equation}
(y \; \circ \; sin_n)(\tau) = \int_{-\infty}^{\infty} y(t) \cdot sin(n \omega_0(t + \tau)) \; dt
\end{equation}
Considering:
- $\tau=0$, so that no phase shifts are included.
- the signal $y(t)$ is periodic with period $T$.
\begin{equation}
(y \; \circ \; sin_n)(0) = \frac{1}{T} \int_{0}^{T} y(t) \cdot sin(n \omega_0 t) \; dt
\end{equation}
This expression can be interpreted as the similarity between the signal $y(t)$ and the $sin$ signal of frequency $n \omega_0$, averaged over one period, with no phase shift of the sine.
Returning to the initial idea
\begin{equation}
y(t) = \sum_{n=0}^{\infty} A_n \cdot cos(n \omega_0 t) + B_n \cdot sin(n \omega_0 t)
\end{equation}
where
\begin{equation}
A_n = \frac{1}{T} \int_{0}^{T} y(t) \cdot cos(n \omega_0 t) \; dt
\end{equation}
\begin{equation}
B_n = \frac{1}{T} \int_{0}^{T} y(t) \cdot sin(n \omega_0 t) \; dt
\end{equation}
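As an illustrative sketch (added here; the names `T_ex`, `w0_ex` and `y_ex` are hypothetical and independent of the variables defined later in this notebook), these coefficients can be computed symbolically with `sympy` for an example signal:
```python
import sympy as sym

t = sym.symbols('t', real=True)
T_ex = 1                                          # assumed period for this illustration
w0_ex = 2 * sym.pi / T_ex
y_ex = sym.sin(w0_ex * t) + sym.Rational(1, 2)    # example periodic signal

def fourier_coeffs(y_expr, n):
    # A_n and B_n for harmonic n, following the formulas above
    A_n = sym.integrate(y_expr * sym.cos(n * w0_ex * t), (t, 0, T_ex)) / T_ex
    B_n = sym.integrate(y_expr * sym.sin(n * w0_ex * t), (t, 0, T_ex)) / T_ex
    return sym.simplify(A_n), sym.simplify(B_n)

for n in range(3):
    print(n, fourier_coeffs(y_ex, n))
```
For this particular example signal, only the $n=0$ and $n=1$ coefficients come out non-zero.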
Students are encouraged to find the relationship between the series above and the following alternative way of representing the Fourier series.
\begin{equation}
y(t) = \sum_{n=-\infty}^{\infty} C_n \cdot e^{j n \omega_0 t}
\end{equation}
where
\begin{equation}
C_n = \frac{1}{T} \int_{0}^{T} y(t) \cdot e^{-j n \omega_0 t} \; dt
\end{equation}
The values $C_n$ are the spectrum of the periodic signal $y(t)$ and are a representation in the frequency domain.
**Example # 1**
The signal $y(t) = sin(2 \pi t)$ is itself a pure oscillation with period $T=1$ (in the code cell below, a constant offset of $0.5$ is added to it).
```
# Define y as the sine of t, with a constant offset added
t = sym.symbols('t', real=True)
#T = sym.symbols('T', real=True)
T = 1
nw = sym.symbols('n', real=True)
delta = sym.DiracDelta(nw)
w0 = 2 * sym.pi / T
y = 1*sym.sin(w0*t) + 0.5
# y = sym.sin(w0*t)
# y = (t-0.5)*(t-0.5)
y
```
Although the Fourier series summation includes infinitely many terms, only the components up to **n_max** will be taken.
```
n_max = 5
y_ser = 0
C = 0
ns = range(-n_max,n_max+1)
espectro = pd.DataFrame(index = ns,
columns= ['C','C_np','C_real','C_imag','C_mag','C_ang'])
for n in espectro.index:
C_n = (1/T)*sym.integrate(y*sym.exp(-1j*n*w0*t), (t,0,T)).evalf()
C = C + C_n*delta.subs(nw,nw-n)
y_ser = y_ser + C_n*sym.exp(1j*n*w0*t)
espectro['C'][n]=C_n
C_r = float(sym.re(C_n))
C_i = float(sym.im(C_n))
espectro['C_real'][n] = C_r
espectro['C_imag'][n] = C_i
espectro['C_np'][n] = complex(C_r + 1j*C_i)
espectro['C_mag'][n] = np.absolute(espectro['C_np'][n])
espectro['C_ang'][n] = np.angle(espectro['C_np'][n])
espectro
```
The signal reconstructed with **n_max** components
```
y_ser
plt.rcParams['figure.figsize'] = 7, 2
g1 = sym.plot(y, (t,0,1), ylabel=r'Amp',show=False,line_color='blue',legend=True, label = 'y(t) original')
g2 = sym.plot(sym.re(y_ser), (t,-1,2), ylabel=r'Amp',show=False,line_color='red',legend=True, label = 'y(t) reconstruida')
g1.extend(g2)
g1.show()
C
plt.rcParams['figure.figsize'] = 7, 2
plt.stem(espectro.index,espectro['C_mag'])
```
**Exercise due 02-October-2020**
Use the following functions to define one period of a periodic signal with period $T=1$:
\begin{equation}
y_1(t) = \begin{cases}
-1 & 0 \leq t < 0.5 \\
1 & 0.5 \leq t < 1
\end{cases}
\end{equation}
\begin{equation}
y_2(t) = t
\end{equation}
\begin{equation}
y_3(t) = 3 sin(2 \pi t)
\end{equation}
Vary the number of components used to reconstruct each function, and analyze the resulting reconstruction and the values of $C_n$.
# Chapter 2: Our First Model
```
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data
import torch.nn.functional as F
import torchvision
from torchvision import transforms
from PIL import Image, ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES=True
```
## Setting up DataLoaders
We'll use the built-in `torchvision.datasets.ImageFolder` dataset to quickly set up some dataloaders for the downloaded cat and fish images.
`check_image` is a quick little function that is passed to the `is_valid_file` parameter in the ImageFolder and will do a sanity check to make sure PIL can actually open the file. We're going to use this in lieu of cleaning up the downloaded dataset.
```
def check_image(path):
try:
im = Image.open(path)
return True
except:
return False
```
Set up the transforms for every image:
* Resize to 64x64
* Convert to tensor
* Normalize using ImageNet mean & std
```
img_transforms = transforms.Compose([
transforms.Resize((64,64)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225] )
])
train_data_path = "./train/"
train_data = torchvision.datasets.ImageFolder(root=train_data_path,transform=img_transforms, is_valid_file=check_image)
val_data_path = "./val/"
val_data = torchvision.datasets.ImageFolder(root=val_data_path,transform=img_transforms, is_valid_file=check_image)
test_data_path = "./test/"
test_data = torchvision.datasets.ImageFolder(root=test_data_path,transform=img_transforms, is_valid_file=check_image)
batch_size=64
train_data_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size)
val_data_loader = torch.utils.data.DataLoader(val_data, batch_size=batch_size)
test_data_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size)
```
## Our First Model, SimpleNet
SimpleNet is a very simple combination of three Linear layers with ReLU activations between them. Note that as we don't do a `softmax()` in our `forward()`, we will need to make sure we do it in our training function during the validation phase.
```
class SimpleNet(nn.Module):
def __init__(self):
super(SimpleNet, self).__init__()
self.fc1 = nn.Linear(12288, 84)
self.fc2 = nn.Linear(84, 50)
self.fc3 = nn.Linear(50,2)
def forward(self, x):
x = x.view(-1, 12288)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
simplenet = SimpleNet()
```
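The 12288 input features in `fc1` come from flattening a 3 × 64 × 64 image (3 · 64 · 64 = 12288). As an optional sanity check (added here for illustration, not part of the original chapter), we can pass a dummy batch through the untrained network and confirm the output shape:
```python
# One fake 3-channel 64x64 image through the untrained network;
# the output should have shape [1, 2] (one raw score per class).
with torch.no_grad():
    print(simplenet(torch.zeros(1, 3, 64, 64)).shape)
```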
## Create an optimizer
Here, we're just using Adam as our optimizer with a learning rate of 0.001.
```
optimizer = optim.Adam(simplenet.parameters(), lr=0.001)
```
## Copy the model to GPU
Copy the model to the GPU if available.
```
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
simplenet.to(device)
```
## Training
Trains the model, copying batches to the GPU if required, calculating losses, optimizing the network, and performing validation for each epoch.
```
def train(model, optimizer, loss_fn, train_loader, val_loader, epochs=20, device="cpu"):
for epoch in range(1, epochs+1):
training_loss = 0.0
valid_loss = 0.0
model.train()
for batch in train_loader:
optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
output = model(inputs)
loss = loss_fn(output, targets)
loss.backward()
optimizer.step()
training_loss += loss.data.item() * inputs.size(0)
training_loss /= len(train_loader.dataset)
model.eval()
num_correct = 0
num_examples = 0
for batch in val_loader:
inputs, targets = batch
inputs = inputs.to(device)
output = model(inputs)
targets = targets.to(device)
loss = loss_fn(output,targets)
valid_loss += loss.data.item() * inputs.size(0)
correct = torch.eq(torch.max(F.softmax(output, dim=1), dim=1)[1], targets)
num_correct += torch.sum(correct).item()
num_examples += correct.shape[0]
valid_loss /= len(val_loader.dataset)
print('Epoch: {}, Training Loss: {:.2f}, Validation Loss: {:.2f}, accuracy = {:.2f}'.format(epoch, training_loss,
valid_loss, num_correct / num_examples))
train(simplenet, optimizer,torch.nn.CrossEntropyLoss(), train_data_loader,val_data_loader, epochs=5, device=device)
```
## Making predictions
Labels are in alphanumeric order, so `cat` will be 0, `fish` will be 1. We'll need to transform the image and also make sure that the resulting tensor is copied to the appropriate device before applying our model to it.
```
labels = ['cat','fish']
img = Image.open("./val/fish/100_1422.JPG")
img = img_transforms(img).to(device)
img = torch.unsqueeze(img, 0)
simplenet.eval()
prediction = F.softmax(simplenet(img), dim=1)
prediction = prediction.argmax()
print(labels[prediction])
```
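The claim that `cat` maps to 0 and `fish` to 1 can be verified directly, since `ImageFolder` stores the mapping it derived from the folder names. This small check is an added illustration, not part of the original chapter:
```python
# ImageFolder builds its label mapping from the sorted folder names
print(train_data.class_to_idx)   # expected: {'cat': 0, 'fish': 1}
```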
## Saving Models
We can either save the entire model using `save` or just the parameters using `state_dict`. Using the latter is normally preferable, as it allows you to reuse parameters even if the model's structure changes (or apply parameters from one model to another).
```
torch.save(simplenet, "/tmp/simplenet")
simplenet = torch.load("/tmp/simplenet")
torch.save(simplenet.state_dict(), "/tmp/simplenet")
simplenet = SimpleNet()
simplenet_state_dict = torch.load("/tmp/simplenet")
simplenet.load_state_dict(simplenet_state_dict)
```
```
# Install PyTorch, TorchVision and matplotlib
!pip install torch==1.5.0+cpu torchvision -f https://download.pytorch.org/whl/torch_stable.html
!pip install matplotlib
import matplotlib.pyplot as plt
import numpy as np
import torch
import torchvision
from torchvision import transforms
import torch.nn as nn
import torch.nn.functional as F
```
## Build a Neural Network
### Build Your Own Neural Network
#### Load data
```
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))])
train_set = torchvision.datasets.MNIST('.', train=True, transform=transform, download=True)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=4, shuffle=True, num_workers=4)
test_set = torchvision.datasets.MNIST('.', train=False, transform=transform, download=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=4, shuffle=False, num_workers=4)
```
Display data
```
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
images, labels = next(iter(test_loader)) # First group of test examples
imshow(torchvision.utils.make_grid(images))
print('Labels:', labels)
```
### Create the Net
```
class MyConvNet(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 2, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(2, 6, 5)
self.fc1 = nn.Linear(96, 32)
self.fc2 = nn.Linear(32, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 96)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
net = MyConvNet()
```
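The `96` in `fc1` comes from tracing the spatial sizes through the layers: a 28×28 MNIST image becomes 24×24 after the first 5×5 convolution, 12×12 after pooling, 8×8 after the second convolution, and 4×4 after the final pooling, with 6 channels, so 6 · 4 · 4 = 96. As an optional sanity check (added here for illustration), we can pass a dummy batch through the convolutional part and confirm that shape:
```python
# Trace a fake MNIST image through the convolution/pooling stack
with torch.no_grad():
    dummy = torch.zeros(1, 1, 28, 28)          # one fake 28x28 grayscale image
    out = net.pool(F.relu(net.conv1(dummy)))   # -> [1, 2, 12, 12]
    out = net.pool(F.relu(net.conv2(out)))     # -> [1, 6, 4, 4]
    print(out.shape)                           # 6 * 4 * 4 = 96 features after flattening
```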
### Train
```
import torch.optim as optim
# Define loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(3): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(train_loader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward
outputs = net(inputs)
loss = criterion(outputs, labels)
# backward (differentiate)
loss.backward()
# optimize (update)
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 3000 == 2999: # print every 3000 mini-batches
print(f'Epoch: {epoch + 1}, Iteration: {i + 1}, loss: {running_loss / 3000}')
running_loss = 0.0
# Test accuracy
correct = 0
total = 0
with torch.no_grad():
for data in test_loader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs.data, 1) # The label with the maximum probability is predicted
total += labels.size(0)
correct += (predicted == labels).sum().item()
print(f'Accuracy of the network on the test images: {(100 * correct / total)} %')
```
### Display Sample Test Results
```
images, labels = next(iter(test_loader)) # First group of test examples
imshow(torchvision.utils.make_grid(images))
print('Labels:', labels)
outputs = net(images)
_, predicted = torch.max(outputs.data, 1) # The label with the maximum probability is predicted
print('Predicted:', predicted)
```
## Save Model
```
torch.save(net.state_dict(), 'minst-classifier.pt')
```
# Discussion 1: Writing Good Notebooks
## Notebook Conventions
### Imports
At the top of the notebook, I usually import the following:
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
More specific imports can just be done later on when you need them.
```
from scipy.integrate import odeint
```
### Explaining Yourself
In this problem, we need to add $a$ and $b$
$$
x = a + b
$$
In the cell below, I'm defining a function to compute $x$
```
def func(a, b):
return a + b
print(func(3, 4))
```
### Cells as Units
Take a look at the homework solutions on Canvas.
## Python Conventions
### Documenting Functions
In addition to markdown, try to document your functions (numpydoc convention) https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt
```
def func(a, b, c=0):
"""Multiplies the sum of a and b by c
Parameters
----------
a : float
The first number
b : float
The second number
c : float, optional, default=0
The multiplier
Returns
-------
x : float
The sum of a and b multiplied by c
"""
return c * (a + b)
help(func)
```
### Using Loops to the Fullest
`zip` can be used to iterate over multiple things at once:
```
initial_conditions = [1, 2, 3]
colors = ['green', 'red', 'blue']
for ic, color in zip(initial_conditions, colors):
print("ic is {}, color is {}".format(ic, color))
```
`enumerate` gives you an indexing variable
```
x = [4, 2, 5, 7]
for index, element in enumerate(x):
print("index {}, {}".format(index, element))
```
`zip` and `enumerate` can be used together. Note that `enumerate` yields a tuple, so to unpack the contents that are zipped, use parentheses.
```
x = [4, 2, 5, 7]
colors = ['r', 'g', 'b']
for index, (element, color) in enumerate(zip(x, colors)):
print("index {}, {} ({})".format(index, element, color))
```
If you don't unpack the zipped contents, you can still access them later:
```
x = [4, 2, 5, 7]
colors = ['r', 'g', 'b']
for index, zipped in enumerate(zip(x, colors)):
# unpacking
element, color = zipped
print("index {}, {} ({})".format(index, element, color))
# indexing
print("index {}, {} ({})".format(index, zipped[0], zipped[1]))
```
## Plotting Tips
### Settings in Loops
`zip` can be useful in plotting loops to specify properties of lines
```
x = np.linspace(0, 10, 1000)
offsets = [3, 4]
linestyles = ['-', '--']
colors = ['g', 'c']
for offset, linestyle, color in zip(offsets, linestyles, colors):
y = np.cos(x) + offset
plt.plot(x, y, color=color, linestyle=linestyle)
```
### Using matplotlib.rc
http://matplotlib.org/examples/color/color_cycle_demo.html
If you're generating multiple plots, you might want to set the order of linestyle, color, linewidth, etc. for *all* plots. By default, multiple `plt.plot` calls just change the color.
```
from cycler import cycler
plt.rc('axes', prop_cycle=(cycler('color', ['r', 'g', 'b', 'y']) +
cycler('linewidth', np.linspace(1, 10, 4))))
for i in range(3):
plt.plot(np.random.randn(10))
```
Each figure restarts the cycle.
```
for i in range(4):
plt.plot(np.random.randn(10))
```
## Miscellaneous
Related to questions asked during the session
### Equations
One LaTeX environment that can be useful for showing derivations is `aligned`:
$$
\begin{aligned}
x &= a + b \\
&= (1) + (3)
\end{aligned}
$$
### Iterating over a slice
You can use `enumerate` and such on slices, but note the mismatch between the index given to you and the index of the element.
```
x = np.arange(10)
for i, element in enumerate(x[-5:]):
i_of_x = np.where(x == element)[0][0]
print("i from enumerate: {}, index in x: {}".format(i, i_of_x))
```
# Anatomy of a NIfTI
---
In the last lesson, we introduced the NIfTI. NIfTI is one of the most ubiquitous file formats for storing neuroimaging data. We'll cover a few details to get started working with them. If you're interested in learning more about NIfTI images, we highly recommend [this blog post about the NIfTI format](http://brainder.org/2012/09/23/the-nifti-file-format/).
## Reading NIfTI Images
[NiBabel](http://nipy.org/nibabel/) is a Python package for reading and writing neuroimaging data. To learn more about how NiBabel handles NIfTIs, check out the [Working with NIfTI images](http://nipy.org/nibabel/nifti_images.html) page of the NiBabel documentation.
```
import nibabel as nib
```
First, use the `load()` function to create a NiBabel image object from a NIfTI file. We'll load in an example T1w image from the zip file we just downloaded.
```
t1_img = nib.load("../data/dicom_examples/0219191_mystudy-0219-1114/nii/dcm_anat_ses-01_T1w_20190219111436_5.nii.gz")
```
Loading in a NIfTI file with `nibabel` gives us a special type of data object which encodes all the information in the file. Each bit of information is called an **attribute** in Python's terminology. To see all of these attributes, type `t1_img.` and <kbd>Tab</kbd>.
There are three main attributes that we'll discuss today:
### 1. [Header](http://nipy.org/nibabel/nibabel_images.html#the-image-header): contains metadata about the image, such as image dimensions, data type, etc.
```
t1_hdr = t1_img.header
print(t1_hdr)
```
`t1_hdr` is a Python **dictionary**. Dictionaries are containers that hold pairs of objects - keys and values. Let's take a look at all of the keys.
Similar to `t1_img` in which attributes can be accessed by typing `t1_img.` and hitting <kbd>Tab</kbd>, you can do the same with `t1_hdr`. In particular, we'll be using a **method** belonging to `t1_hdr` that will allow you to view the keys associated with it.
```
t1_hdr.keys()
```
Notice that **methods** require you to include `()` at the end of them whereas **attributes** do not.
The key difference between a method and an attribute is:
- Attributes are *stored values* kept within an object
- Methods are *processes* that we can run using the object. Usually a method takes attributes, performs an operation on them, then returns it for you to use.
When you type in `t1_img.` followed by <kbd>Tab</kbd>, you'll see that attributes are highlighted in <span style="color:orange"> orange </span> and methods highlighted in <span style="color:blue"> blue </span>.
The output above is a list of **keys** you can use from `t1_hdr` to access **values**. We can access the value stored by a given key by typing:
```python
t1_hdr['<key_name>']
```
<b>EXERCISE:</b> Extract the value of <code>pixdim</code> from <code>t1_hdr</code>
```
t1_hdr['pixdim']
```
In addition to metadata embedded in the NIfTI header, the T1w image also has a corresponding JSON file with additional scan acquisition details. Using the JSON file to store this information is a concept added by BIDS (which we'll cover in the next lesson) to log the important bits of information that traditionally get excluded from the NIfTI header.
Let's take a look at it below:
```
import json
with open("../data/dicom_examples/0219191_mystudy-0219-1114/nii/dcm_anat_ses-01_T1w_20190219111436_5.json", "r") as f:
t1_metadata = json.load(f)
t1_metadata
```
The additional metadata are also in the form of a Python <b>dictionary</b>.
<b>EXERCISE:</b> Extract the value of <code>SliceThickness</code> from <code>t1_metadata</code> similar to how you did previously for <code>t1_hdr</code>.
```
t1_metadata["SliceThickness"]
```
### 2. Data
As you've seen above, the header contains useful information about the properties (metadata) associated with the MR data we've loaded in. Now we'll move on to loading the actual *image data itself*. We can achieve this by using the *method* called `t1_img.get_fdata()`.
```
t1_data = t1_img.get_fdata()
t1_data
```
What type of data is this exactly? We can determine this by calling the `type()` function on `t1_data`.
```
type(t1_data)
```
The data is a multidimensional **array** representing the image data. In Python, an array is used to store lists of numerical data into something like a table.
<b>EXERCISE:</b> Let's check out some *attributes* of the array. How can we see the number of dimensions in the <code>t1_data</code> array? What about the how big each dimension is (shape)? Once again, all of the attributes of the array can be seen by typing `t1_data.` and <kbd>Tab</kbd>.
```
t1_data.ndim
```
`t1_data` contains 3 dimensions. You can think of the data as a 3D version of a picture (more accurately, a volume).
<img src="../fig/numpy_arrays.png" alt="Drawing" align="middle" width="600px"/>
While typical 2D pictures are made out of squares called **pixels**, a 3D MR image is made up of 3D cubes called **voxels**.
<img src="http://www.sprawls.org/mripmt/MRI10/MR10-2.jpg" alt="Drawing" align="middle" width="500px"/>
```
t1_data.shape
```
The 3 numbers given here represent the number of values *along a respective dimension (x,y,z)*. This brain was scanned in 192 slices with a resolution of 256 x 256 voxels per slice. That means there are:
$$x * y * z = value$$
$$ 192 * 256 * 256 = 12582912$$ voxels in total!
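As a quick check of that arithmetic (a small added snippet, not part of the original lesson), NumPy can multiply the dimensions for us:
```python
import numpy as np

# Total number of voxels is the product of the array dimensions
print(np.prod(t1_data.shape))   # expected: 12582912
```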
Let's see the type of data inside of the array.
```
t1_data.dtype
```
This tells us that each element in the array (or voxel) is a floating-point number.
The data type of an image controls the range of possible intensities. As the number of possible values increases, the amount of space the image takes up in memory also increases.
```
import numpy as np
print(np.min(t1_data))
print(np.max(t1_data))
```
For our data, the range of intensity values goes from 0 (black) up to larger positive values (whiter).
How do we examine the value of a particular voxel? We can inspect it by selecting an **index** as follows:
~~~python
data[x,y,z]
~~~
So for example we can inspect a voxel at coordinates (10,20,3) by doing the following:
```
t1_data[9, 19, 2]
```
**NOTE**: Python uses **zero-based indexing**. The first item in the array is item 0. The second item is item 1, the third is item 2, etc.
This yields a single value representing the intensity of the signal at a particular voxel! Next we'll see how to not just pull one voxel but a slice or an *array* of voxels for visualization and analysis!
## Working With Image Data
Slicing does exactly what it seems to imply. Given our 3D volume, we pull out a 2D slice of our data. Here's an example of slicing from left to right (**sagittal slicing**):
<img src="https://upload.wikimedia.org/wikipedia/commons/5/56/Parasagittal_MRI_of_human_head_in_patient_with_benign_familial_macrocephaly_prior_to_brain_injury_%28ANIMATED%29.gif"/>
This gif is a series of 2D images or **slices** moving from left to right.
Let's pull the 50th slice in the x axis.
```
x_slice = t1_data[49, :, :]
```
This is similar to the indexing we did before to pull out a single voxel. However, instead of providing a value for each axis, the `:` indicates that we want to grab *all* values from that particular axis.
<b>EXERCISE:</b> Now try selecting the 80th slice from the y axis.
```
y_slice = t1_data[:, 79, :]
```
<b>EXERCISE:</b> Finally try grabbing the 100th slice from the z axis.
```
z_slice = t1_data[:, :, 99]
```
We've been slicing and dicing brain images but we have no idea what they look like! In the next section we'll show you how you can visualize brain slices!
## Visualizing
We previously inspected the signal intensity of the voxel at coordinates (10,20,3). Let's see what our data looks like when we slice it at this location. We've already indexed the data at each x, y, and z axis. Let's use `matplotlib`.
```
import matplotlib.pyplot as plt
%matplotlib inline
slices = [x_slice, y_slice, z_slice]
fig, axes = plt.subplots(1, len(slices))
for i, slice in enumerate(slices):
axes[i].imshow(slice.T, cmap="gray", origin="lower")
```
Now, we're going to step away from discussing our data and talk about the final important attribute of a NIfTI.
### 3. [Affine](http://nipy.org/nibabel/coordinate_systems.html): tells the position of the image array data in a *reference space*
The final important piece of metadata associated with an image file is the **affine matrix**. Below is the affine matrix for our data.
```
t1_affine = t1_img.affine
t1_affine
```
To explain this concept, recall that we referred to coordinates in our data as (x,y,z) coordinates such that:
- x is the first dimension of `t1_data`
- y is the second dimension of `t1_data`
- z is the third dimension of `t1_data`
Although this tells us how to access our data in terms of voxels in a 3D volume, it doesn't tell us much about the actual dimensions of our data (centimetres, right or left, up or down, back or front). The affine matrix allows us to translate between *voxel coordinates (x,y,z)* and *world space coordinates* in (left/right, bottom/top, back/front). An important thing to note is that the actual order of:
- left/right
- bottom/top
- back/front
depends on how you've constructed the affine matrix, but for the data we're dealing with it always refers to:
- Right
- Anterior
- Superior
Applying the affine matrix (`t1_affine`) is done by using a *linear map* (matrix multiplication) on the voxel coordinates (defined in `t1_data`).
<img src="../fig/coordinate_systems.png" alt="Drawing" align="middle" width="500px"/>
The concept of an affine matrix may seem confusing at first but an example might help gain an intuition:
Suppose we have two voxels located at the following coordinates:
$$(15,2,90)$$
$$(64,100,2)$$
And we wanted to know what the distances between these two voxels are in terms of real world distances (millimetres). This information cannot be derived from using voxel coordinates so we turn to the **affine matrix**.
Now, the affine matrix we'll be using happens to be encoded in **RAS**. That means once we apply the matrix our coordinates are as follows:
$$(\text{Right},\text{Anterior},\text{Superior})$$
So increasing a coordinate value in the first dimension corresponds to moving to the right of the person being scanned.
Applying our affine matrix yields the following coordinates:
$$(90.23,0.2,2.15)$$
$$(10.25,30.5,9.2)$$
This means that:
- Voxel 1 is $90.23-10.25= 79.98$ in the R axis. Positive values mean move right
- Voxel 1 is $0.2-30.5= -30.3$ in the A axis. Negative values mean move posterior
- Voxel 1 is $2.15-9.2= -7.05$ in the S axis. Negative values mean move inferior (a short code sketch of this voxel-to-world conversion is shown below)
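Below is a minimal sketch of how such a voxel-to-world conversion can be done with NiBabel's `apply_affine` helper. The voxel coordinates are the illustrative ones from above; the world coordinates you actually get depend on the affine of your own image, so they will not match the example numbers exactly:
```python
from nibabel.affines import apply_affine

# Convert the two illustrative voxel coordinates into world (RAS) coordinates
voxel_coords = [(15, 2, 90), (64, 100, 2)]
world_coords = apply_affine(t1_affine, voxel_coords)
print(world_coords)

# Millimetre offsets between the two voxels along the R, A and S axes
print(world_coords[0] - world_coords[1])
```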
---
This covers the basics of how NIfTI data and metadata are stored and organized in the context of Python. In the next segment we'll talk a bit about an increasingly important component of MR data analysis - data organization. This is a key component to reproducible analysis and so we'll spend a bit of time here.
|
github_jupyter
|
import nibabel as nib
t1_img = nib.load("../data/dicom_examples/0219191_mystudy-0219-1114/nii/dcm_anat_ses-01_T1w_20190219111436_5.nii.gz")
t1_hdr = t1_img.header
print(t1_hdr)
t1_hdr.keys()
t1_hdr['<key_name>']
t1_hdr['pixdim']
import json
with open("../data/dicom_examples/0219191_mystudy-0219-1114/nii/dcm_anat_ses-01_T1w_20190219111436_5.json", "r") as f:
t1_metadata = json.load(f)
t1_metadata
t1_metadata["SliceThickness"]
t1_data = t1_img.get_fdata()
t1_data
type(t1_data)
t1_data.ndim
t1_data.shape
t1_data.dtype
import numpy as np
print(np.min(t1_data))
print(np.max(t1_data))
t1_data[9, 19, 2]
x_slice = t1_data[49, :, :]
y_slice = t1_data[:, 79, :]
z_slice = t1_data[:, :, 99]
import matplotlib.pyplot as plt
%matplotlib inline
slices = [x_slice, y_slice, z_slice]
fig, axes = plt.subplots(1, len(slices))
for i, slice in enumerate(slices):
axes[i].imshow(slice.T, cmap="gray", origin="lower")
t1_affine = t1_img.affine
t1_affine
| 0.146606 | 0.987616 |
```
!git clone https://github.com/suvarnak/code-colt.git
!ls code-colt/data/cats_and_dogs/
!pip install keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
# kaggle dataset image size
img_width, img_height = 150, 150
train_data_dir = 'code-colt/data/cats_and_dogs//train'
validation_data_dir = 'code-colt/data/cats_and_dogs//valid'
nb_train_samples = 2000
nb_validation_samples = 800
epochs = 50
batch_size = 16
if K.image_data_format() == 'channels_first':
input_shape = (3, img_width, img_height)
else:
input_shape = (img_width, img_height, 3)
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(2))
model.add(Activation('softmax'))
from keras.utils.np_utils import to_categorical
categorical_labels = to_categorical([0,1], num_classes=2)
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
# ImageDataGenerator is used for the augmentation below
from keras.preprocessing.image import ImageDataGenerator
# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
rescale=1. / 255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
validation_data_dir,
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode='categorical')
hist = model.fit_generator(
train_generator,
steps_per_epoch=nb_train_samples // batch_size,
epochs=epochs,
validation_data=validation_generator,
validation_steps=nb_validation_samples // batch_size)
model.summary()
#model.save_weights('first_try.h5')
model.save('model-cats_dogs_model2.h5')
from matplotlib import pyplot as plt
# visualizing losses and accuracy
train_loss=hist.history['loss']
val_loss=hist.history['val_loss']
train_acc=hist.history['acc']
val_acc=hist.history['val_acc']
xc=range(0,50)
plt.figure(1,figsize=(7,5))
plt.plot(xc,train_loss)
plt.plot(xc,val_loss)
plt.xlabel('num of Epochs')
plt.ylabel('loss')
plt.title('train_loss vs val_loss')
plt.grid(True)
plt.legend(['train','val'])
#print plt.style.available # use bmh, classic,ggplot for big pictures
plt.style.use(['classic'])
plt.figure(2,figsize=(7,5))
plt.plot(xc,train_acc)
plt.plot(xc,val_acc)
plt.xlabel('num of Epochs')
plt.ylabel('accuracy')
plt.title('train_acc vs val_acc')
plt.grid(True)
plt.legend(['train','val'],loc=4)
#print plt.style.available # use bmh, classic,ggplot for big pictures
plt.style.use(['classic'])
test_datagen = ImageDataGenerator(rescale=1./255)
test_data_dir = 'code-colt/data/cats_and_dogs/test/'
test_generator = test_datagen.flow_from_directory(
test_data_dir,
target_size=(150, 150),
color_mode="rgb",
        shuffle = False,  # keep test order fixed so predictions line up with filenames
class_mode='categorical',
batch_size=1)
filenames = test_generator.filenames
nb_samples = len(filenames)
#predict = model.predict_generator(test_generator,steps = nb_samples)
#predict
print(model.metrics_names)
loss,acc= model.evaluate_generator(test_generator)
print('Test loss:', loss)
print('Test accuracy:', acc)
from google.colab import files
files.download('model-cats_dogs_model2.h5')
```
### Importing Libraries
```
import numpy as np
import tensorflow as tf
from IPython.display import HTML, display
import tabulate
import seaborn as sns
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras import backend as K
from tensorflow.keras import losses
from tensorflow.keras.utils import plot_model
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
import pandas as pd
import csv
print("Version:", tf.__version__)
```
### Import the Data
- string_to_ascii: Converts a string to an array of ASCII numbers.
- import_data: Used to import txt domain data
- returns: ASCII data, labels of that data
- data_path: Path of the training data
- labels: Whether the domain names are from a malicious source or not (1 : yes)
- header: No of lines above the actual data
- lateral_skip: No of columns to skip
- no_of_entries: How much data to use for training
- csv_txt: Input format flag (0: txt, 1: csv)
```
def string_to_ascii(string):
ascii_arr = np.zeros(len(string))
for i in range(len(string)):
ascii_arr[i] = ord(string[i])
return ascii_arr
def import_data(data_path, labels, header, lateral_skip, no_of_entries, csv_txt):
if csv_txt == 0:
data = open(data_path, "r")
data = list(data.readlines())
else:
data = open(data_path, 'rt')
reader = csv.reader(data, delimiter =',', quoting = csv.QUOTE_NONE)
data = list(reader)
data = list(np.asarray(data[:no_of_entries + header])[:, 1])
ret_data = np.zeros((no_of_entries,256))
for i in range(header, no_of_entries + header):
ret_data[i - header, 0 : len(data[i].strip('\"'))] = string_to_ascii(data[i].strip('\"'))
labels = np.ones((no_of_entries, 1)) * labels
return ret_data, labels
```
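As a quick illustration of what `string_to_ascii` produces (an added example using a made-up domain string):
```python
# Each character becomes its ASCII code, e.g. 'a' -> 97 and '.' -> 46
print(string_to_ascii("abc.com"))   # expected: [ 97.  98.  99.  46.  99. 111. 109.]
```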
### Creating the Testing, Training and Validation Data
```
number_of_samples = 1000
ret_data_mal, labels_mal = import_data('C:/Chanakya/Projects/coredns_dns_ml_firewall/Data/domain.txt', 1, 1, 0, int(number_of_samples/2), 0)
ret_data_nmal, labels_nmal = import_data('C:/Chanakya/Projects/coredns_dns_ml_firewall/Data/top10milliondomains.csv', 0, 1, 1, int(number_of_samples/2), 1)
train_split = int(number_of_samples/2 * 0.8)
valid_split = int(number_of_samples/2 * 0.9)
test_split = int(number_of_samples/2)
train_set = np.append(ret_data_mal[0:train_split], ret_data_nmal[0:train_split], axis = 0)
train_set = np.reshape(train_set, (train_split * 2, 16, 16, 1))
np.random.seed(43)
np.random.shuffle(train_set)
labels_train_set = np.append(labels_mal[0:train_split], labels_nmal[0:train_split], axis = 0)
np.random.seed(43)
np.random.shuffle(labels_train_set)
valid_set = np.append(ret_data_mal[train_split:valid_split], ret_data_nmal[train_split:valid_split], axis = 0)
valid_set = np.reshape(valid_set, ((valid_split - train_split) * 2, 16, 16, 1))
np.random.seed(44)
np.random.shuffle(valid_set)
labels_valid_set = np.append(labels_mal[train_split:valid_split], labels_nmal[train_split:valid_split], axis = 0)
np.random.seed(44)
np.random.shuffle(labels_valid_set)
test_set = np.append(ret_data_mal[valid_split:test_split], ret_data_nmal[valid_split:test_split], axis = 0)
test_set = np.reshape(test_set, ((test_split - valid_split)*2, 16, 16, 1))
np.random.seed(45)
np.random.shuffle(test_set)
labels_test_set = np.append(labels_mal[valid_split:test_split], labels_nmal[valid_split:test_split], axis = 0)
np.random.seed(45)
np.random.shuffle(labels_test_set)
print('Train Shape:', np.shape(train_set), np.shape(labels_train_set))
print('Validation Shape:', np.shape(valid_set), np.shape(labels_valid_set))
print('Test Shape:', np.shape(test_set), np.shape(labels_test_set))
```
### Model Definition
```
model = models.Sequential(name = 'DNS_Alert_Net')
model.add(layers.Conv2D(16, (2, 2), activation='relu', input_shape=(16, 16, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(16 , (2, 2), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(8, (2, 2), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(8, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
adam_ = tf.keras.optimizers.Adam(lr = 0.001)
model.compile(loss='binary_crossentropy', optimizer=adam_, metrics=['accuracy'])
model.summary()
```
### Training
```
save_callback = tf.keras.callbacks.ModelCheckpoint('dns_alert_model.hdf5', save_best_only=True, monitor='val_loss', mode='min')
history = model.fit(train_set, labels_train_set, epochs=30, validation_data=(valid_set, labels_valid_set), callbacks=[save_callback])
```
#### Training Graphs
```
%matplotlib inline
plt.rcParams["figure.figsize"] = (25,7)
plt.subplot(1, 2, 1)
plt.plot(range(len(history.history['loss'])), history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model Loss', fontsize= 20)
plt.ylabel('Loss', fontsize= 18)
plt.xlabel('Epochs', fontsize= 18)
plt.legend(['Train','Validation'], loc='upper left', fontsize = 16)
plt.subplot(1, 2, 2)
plt.plot(range(len(history.history['acc'])), history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('Model Accuracy',fontsize= 20)
plt.ylabel('Accuracy', fontsize= 18)
plt.xlabel('Epochs', fontsize= 18)
plt.legend(['Train','Validation'], loc='upper left', fontsize = 16)
plt.show()
```
### Evaluation
#### Accuracy and Loss
```
loss_train, acc_train = model.evaluate(train_set, labels_train_set)
loss_valid, acc_valid = model.evaluate(valid_set, labels_valid_set)
loss_test, acc_test = model.evaluate(test_set, labels_test_set)
table = [["Data Set","Loss","Accuracy (%)"],
["Train Set",loss_train,acc_train * 100],
["Validation Set",loss_valid,acc_valid * 100],
["Test Set",loss_test,acc_test * 100]]
display(HTML(tabulate.tabulate(table, tablefmt='html')))
```
#### Confusion Matrix
```
y_pred = model.predict(train_set)
cf_matrix_train =confusion_matrix(labels_train_set,y_pred.round())
plt.subplot(1,3,1)
plt.title('Train Set', fontsize = 16)
group_names = ['True Neg','False Pos','False Neg','True Pos']
group_counts = ["{0:0.0f}".format(value) for value in cf_matrix_train.flatten()]
group_percentages = ["{0:.2%}".format(value) for value in cf_matrix_train.flatten()/np.sum(cf_matrix_train)]
labels = [f"{v1}\n{v2}\n{v3}" for v1, v2, v3 in zip(group_names,group_counts,group_percentages)]
labels = np.asarray(labels).reshape(2,2)
sns.heatmap(cf_matrix_train, annot=labels, fmt='', cmap='Blues', cbar=False)
y_pred = model.predict(valid_set)
cf_matrix_valid =confusion_matrix(labels_valid_set,y_pred.round())
plt.subplot(1,3,2)
plt.title('Validation Set', fontsize = 16)
group_names = ['True Neg','False Pos','False Neg','True Pos']
group_counts = ["{0:0.0f}".format(value) for value in cf_matrix_valid.flatten()]
group_percentages = ["{0:.2%}".format(value) for value in cf_matrix_valid.flatten()/np.sum(cf_matrix_valid)]
labels = [f"{v1}\n{v2}\n{v3}" for v1, v2, v3 in zip(group_names,group_counts,group_percentages)]
labels = np.asarray(labels).reshape(2,2)
sns.heatmap(cf_matrix_valid, annot=labels, fmt='', cmap='Blues', cbar=False)
y_pred = model.predict(test_set)
cf_matrix_test =confusion_matrix(labels_test_set,y_pred.round())
plt.subplot(1,3,3)
plt.title('Test Set', fontsize = 16)
group_names = ['True Neg','False Pos','False Neg','True Pos']
group_counts = ["{0:0.0f}".format(value) for value in cf_matrix_test.flatten()]
group_percentages = ["{0:.2%}".format(value) for value in cf_matrix_test.flatten()/np.sum(cf_matrix_test)]
labels = [f"{v1}\n{v2}\n{v3}" for v1, v2, v3 in zip(group_names,group_counts,group_percentages)]
labels = np.asarray(labels).reshape(2,2)
sns.heatmap(cf_matrix_test, annot=labels, fmt='', cmap='Blues', cbar=True)
```
#### Accuracy, Precision, Recall, F1 Score
```
acc_train = (cf_matrix_train[0, 0] + cf_matrix_train[1,1]) / np.sum(cf_matrix_train)
pres_train = (cf_matrix_train[1, 1]) / (cf_matrix_train[1, 1] + cf_matrix_train[0, 1])
rec_train = (cf_matrix_train[1, 1]) / (cf_matrix_train[1, 1] + cf_matrix_train[1, 0])
f1_train = 2 * rec_train * pres_train / (rec_train + pres_train)
acc_valid = (cf_matrix_valid[0, 0] + cf_matrix_valid[1,1]) / np.sum(cf_matrix_valid)
pres_valid = (cf_matrix_valid[1, 1]) / (cf_matrix_valid[1, 1] + cf_matrix_valid[0, 1])
rec_valid = (cf_matrix_valid[1, 1]) / (cf_matrix_valid[1, 1] + cf_matrix_valid[1, 0])
f1_valid = 2 * rec_valid * pres_valid / (rec_valid + pres_valid)
acc_test = (cf_matrix_test[0, 0] + cf_matrix_test[1,1]) / np.sum(cf_matrix_test)
pres_test = (cf_matrix_test[1, 1]) / (cf_matrix_test[1, 1] + cf_matrix_test[0, 1])
rec_test = (cf_matrix_test[1, 1]) / (cf_matrix_test[1, 1] + cf_matrix_test[1, 0])
f1_test = 2 * rec_test * pres_test / (rec_test + pres_test)
table = [["Data Set","Accuracy","Prescision", "Recall", "F1_Score"],
["Train Set",acc_train, pres_train, rec_train, f1_train],
["Validation Set",acc_valid, pres_valid, rec_valid, f1_valid],
["Test Set",acc_test, pres_test, rec_test, f1_test ]]
display(HTML(tabulate.tabulate(table, tablefmt='html')))
```
# Phase Kickback
## Exploring the CNOT-Gate
We can entangle two qubits by placing the control qubit of the CNOT gate in the state |+>:
```
from math import pi
import numpy as np
from qiskit import Aer, assemble, QuantumCircuit
from qiskit.visualization import array_to_latex, plot_bloch_multivector, plot_histogram
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.draw('mpl', initial_state=True)
svsim = Aer.get_backend('aer_simulator')
qc.save_statevector()
qobj = assemble(qc)
res = svsim.run(qobj).result()
vec = res.get_statevector()
array_to_latex(vec, prefix="\\text{Statevector} = ")
```
What happens if we put the target qubit in superposition as well?
```
qc = QuantumCircuit(2)
qc.h(0)
qc.h(1)
qc.cx(0, 1)
qc.draw('mpl', initial_state=True)
svsim = Aer.get_backend('aer_simulator')
qc.save_statevector()
qobj = assemble(qc)
res = svsim.run(qobj).result()
vec = res.get_statevector()
array_to_latex(vec, prefix="\\text{Statevector} = ")
plot_bloch_multivector(vec)
```
$$ |++> = 1/2(|00> + |01> + |10> + |11>) $$
Let’s put the target qubit in the state |−⟩, so it has a negative phase:
```
qc = QuantumCircuit(2)
qc.h(0)
qc.x(1)
qc.h(1)
qc.cx(0, 1)
qc.draw('mpl', initial_state=True)
svsim = Aer.get_backend('aer_simulator')
qc.save_statevector()
qobj = assemble(qc)
res = svsim.run(qobj).result()
vec = res.get_statevector()
array_to_latex(vec, prefix="\\text{Statevector} = ")
plot_bloch_multivector(vec)
```
$$ CNOT|-+> = 1/2(|00> - |01> - |10> + |11>) = |--> $$
Since the H-gate transforms |+⟩ → |0⟩ and |−⟩ → |1⟩, we can see that wrapping a CNOT in H-gates is equivalent to a CNOT acting in the opposite direction:
```
qc = QuantumCircuit(2)
qc.h(0)
qc.h(1)
qc.cx(0, 1)
qc.h(0)
qc.h(1)
qc.draw('mpl', initial_state=True)
svsim = Aer.get_backend('aer_simulator')
qc.save_unitary()
qobj = assemble(qc)
res = svsim.run(qobj).result()
um = res.get_unitary()
array_to_latex(um, prefix="\\text{Unitary Matrix} = ")
```
Now let's apply a CNOT with the second qubit as the control:
```
qc = QuantumCircuit(2)
qc.cx(1, 0)
qc.draw('mpl', initial_state=True)
svsim = Aer.get_backend('aer_simulator')
qc.save_unitary()
qobj = assemble(qc)
res = svsim.run(qobj).result()
um = res.get_unitary()
array_to_latex(um, prefix="\\text{Unitary Matrix} = ")
```
This identity is an example of phase kickback.
## Phase Kickback
### Explaining the CNOT Circuit Identity
Kickback is where the eigenvalue added by a gate to a qubit is ‘kicked back’ into a different qubit via a controlled operation. For example, we saw that performing an X-gate on a |−⟩ qubit gives it the phase −1:
<br>
$$ X|-> = X(1/\sqrt{2}(|0> - |1>)) = 1/\sqrt{2}(|1> -|0>) = -|-> $$
<br>
When the control qubit of the CNOT is in either |0⟩ or |1⟩, this phase affects the whole state; however, it is a global phase and has no observable effects:
$$ CNOT|-0> = |-> \mathop{\otimes} |0> = |-0> $$
<br>
$$ CNOT|-1> = X|-> \mathop{\otimes} |1> = -|-> \mathop{\otimes} |1> = -|-1> $$
<br>
<br>
The interesting effect is when our control qubit is in superposition. The component of the control qubit that lies in the direction of |1⟩ applies this phase factor to the corresponding target qubit. This applied phase factor in turn introduces a relative phase into the control qubit:
<br>
$$ CNOT|-+> = 1/\sqrt{2}(CNOT|-0> +CNOT|-1>) = 1/\sqrt{2}(|-0> - |-1>) = |-> \mathop{\otimes} 1/\sqrt{2}(|0> - |1>) = |--> $$
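We can check this identity numerically (a small sketch using `qiskit.quantum_info`; in the label string the rightmost character is qubit 0, i.e. the control):
```
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector
# Prepare |-+> (q1 = |->, q0 = |+>) and apply a CNOT with q0 as control
kick = QuantumCircuit(2)
kick.cx(0, 1)
before = Statevector.from_label('-+')             # label reads q1, then q0
after = before.evolve(kick)
print(after.equiv(Statevector.from_label('--')))  # True: the phase is kicked back
```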
### Kickback with the T-gate
```
qc = QuantumCircuit(2)
qc.cp(pi/4, 0, 1)
qc.draw('mpl', initial_state=True)
qc.save_unitary()
qobj = assemble(qc)
unitary = svsim.run(qobj).result().get_unitary()
array_to_latex(unitary, prefix="\\text{Controlled-T} = \n")
```
We can find the matrix of any controlled-U operation using the rule:
<br>
$$ U = \begin{bmatrix} u_{00} & u_{01} \\ u_{10} & u_{11} \end{bmatrix} $$
<br>
$$ Controlled-U = \begin{bmatrix}I & 0 \\ 0 & U \end{bmatrix} = \begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & u_{00} & u_{01} \\ 0 & 0 & u_{10} & u_{11} \end{bmatrix} $$
<br>
<br>
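As a quick illustration of this rule, here is a small NumPy sketch that assembles the controlled-T matrix block by block; for a diagonal $U$ such as T, the result should match the `Controlled-T` unitary printed above regardless of qubit ordering:
```
import numpy as np
# Controlled-U = [[I, 0], [0, U]] with U = T = diag(1, e^{i*pi/4})
T = np.diag([1, np.exp(1j * np.pi / 4)])
controlled_T = np.block([
    [np.eye(2), np.zeros((2, 2))],
    [np.zeros((2, 2)), T]
])
print(np.round(controlled_T, 3))
```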
If we apply the T-gate to a qubit in the state |1>, we add a phase of $e^{i\pi/4}$ to that qubit:
<br>
$$ T|1> = e^{i\pi/4}|1> $$
<br>
This is a global phase and is unobservable.
If we control this operation using another qubit in the |+⟩ state, the phase is no longer global but relative, which changes the relative phase in our control qubit:
<br>
$$ |1+> = |1> \mathop{\otimes} 1/\sqrt{2}(|0> + |1>) = 1/\sqrt{2}(|10> + |11>) $$
<br>
$$ Controlled-T|1+⟩ = 1/\sqrt{2} (|10> + e^{i\pi/4}|11>) = |1> \mathop{\otimes} 1/\sqrt{2}(|0> + e^{i\pi/4}|1>) $$
<br>
This has the effect of rotating our control qubit around the Z-axis of the Bloch sphere, while leaving the target qubit unchanged.
```
qc = QuantumCircuit(2)
qc.x(1)
qc.h(0)
qc.cp(pi/4, 0, 1)
qc.draw('mpl', initial_state=True)
qc.save_statevector()
qobj = assemble(qc)
final_state = svsim.run(qobj).result().get_statevector()
array_to_latex(final_state, prefix="\\text{Statevector} = \n")
plot_bloch_multivector(final_state)
```
## Quick Exercises:
1. What would be the resulting state of the control qubit ($q_{0}$) if the target qubit ($q_{1}$) was in the state |0>? Use Qiskit to check your answer.
```
qc = QuantumCircuit(2)
qc.h(0)
qc.cp(pi/4, 0, 1)
qc.draw('mpl', initial_state=True)
qc.save_statevector()
qobj = assemble(qc)
final_state = svsim.run(qobj).result().get_statevector()
array_to_latex(final_state, prefix="\\text{Statevector} = \n")
plot_bloch_multivector(final_state)
```
The resulting state of the control qubit ($q_{0}$) is |+>: since the target is in |0>, the controlled-T never applies its phase, so the control qubit is unchanged.
2. What would happen to the control qubit ($q_{0}$) if the target qubit ($q_{1}$) was in the state |1>, and the circuit used a controlled-Sdg gate instead of the controlled-T (as shown in the circuit below)?
```
qc = QuantumCircuit(2)
qc.h(0)
qc.x(1)
qc.cp(-pi/2, 0, 1)
qc.draw('mpl', initial_state=True)
qc.save_statevector()
qobj = assemble(qc)
final_state = svsim.run(qobj).result().get_statevector()
array_to_latex(final_state, prefix="\\text{Statevector} = \n")
plot_bloch_multivector(final_state)
```
After applying the controlled-Sdg gate, the control qubit ($q_{0}$) will be in the state |-i>, since the kicked-back phase $e^{-i\pi/2} = -i$ rotates |+> to |-i>.
3. What would happen to the control qubit ($q_{0}$) if it was in the state |1> instead of |+> before the controlled-T is applied?
```
qc = QuantumCircuit(2)
qc.x(0)
qc.x(1)
qc.cp(pi/4, 0, 1)
qc.draw('mpl', initial_state=True)
qc.save_statevector()
qobj = assemble(qc)
final_state = svsim.run(qobj).result().get_statevector()
array_to_latex(final_state, prefix="\\text{Statevector} = \n")
plot_bloch_multivector(final_state)
```
After applying the controlled-T gate, the control qubit ($q_{0}$) remains in the state |1>: the kicked-back phase $e^{i\pi/4}$ is now a global phase and has no observable effect.
```
import qiskit.tools.jupyter
%qiskit_version_table
```
# Supervised sentiment: dense feature representations and neural networks
```
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2021"
```
## Contents
1. [Overview](#Overview)
1. [Set-up](#Set-up)
1. [Distributed representations as features](#Distributed-representations-as-features)
1. [GloVe inputs](#GloVe-inputs)
1. [Yelp representations](#Yelp-representations)
1. [Remarks on this approach](#Remarks-on-this-approach)
1. [RNN classifiers](#RNN-classifiers)
1. [RNN dataset preparation](#RNN-dataset-preparation)
1. [Vocabulary for the embedding](#Vocabulary-for-the-embedding)
1. [PyTorch RNN classifier](#PyTorch-RNN-classifier)
1. [Pretrained embeddings](#Pretrained-embeddings)
1. [RNN hyperparameter tuning experiment](#RNN-hyperparameter-tuning-experiment)
1. [The VecAvg baseline from Socher et al. 2013](#The-VecAvg-baseline-from-Socher-et-al.-2013)
1. [Defining the model](#Defining-the-model)
1. [VecAvg hyperparameter tuning experiment](#VecAvg-hyperparameter-tuning-experiment)
## Overview
This notebook defines and explores __vector averaging__ and __recurrent neural network (RNN) classifiers__ for the Stanford Sentiment Treebank.
These approaches make their predictions based on comprehensive representations of the examples:
* For the vector averaging models, each word is modeled, but we assume that words combine via a simple function that is insensitive to their order or constituent structure.
* For the RNN, each word is again modeled, and we also model the sequential relationships between words.
These models contrast with the ones explored in [the previous notebook](sst_02_hand_built_features.ipynb), which make predictions based on more partial, potentially idiosyncratic information extracted from the examples.
## Set-up
See [the first notebook in this unit](sst_01_overview.ipynb#Set-up) for set-up instructions.
```
from collections import Counter
import numpy as np
import os
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
import torch
import torch.nn as nn
from torch_rnn_classifier import TorchRNNClassifier
import sst
import vsm
import utils
utils.fix_random_seeds()
DATA_HOME = 'data'
GLOVE_HOME = os.path.join(DATA_HOME, 'glove.6B')
VSMDATA_HOME = os.path.join(DATA_HOME, 'vsmdata')
SST_HOME = os.path.join(DATA_HOME, 'sentiment')
```
## Distributed representations as features
As a first step in the direction of neural networks for sentiment, we can connect with our previous unit on distributed representations. Arguably, more than any specific model architecture, this is the major innovation of deep learning: __rather than designing feature functions by hand, we use dense, distributed representations, often derived from unsupervised models__.
<img src="fig/distreps-as-features.png" width=500 alt="distreps-as-features.png" />
Our model will just be `LogisticRegression`, and we'll continue with the experiment framework from the previous notebook. Here is `fit_maxent_classifier` again:
```
def fit_maxent_classifier(X, y):
mod = LogisticRegression(
fit_intercept=True,
solver='liblinear',
multi_class='auto')
mod.fit(X, y)
return mod
```
### GloVe inputs
To illustrate this process, we'll use the general purpose GloVe representations released by the GloVe team, at 300d:
```
glove_lookup = utils.glove2dict(
os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))
def vsm_phi(text, lookup, np_func=np.mean):
"""Represent `tree` as a combination of the vector of its words.
Parameters
----------
text : str
lookup : dict
From words to vectors.
    np_func : function (default: np.mean)
A numpy matrix operation that can be applied columnwise,
like `np.mean`, `np.sum`, or `np.prod`. The requirement is that
the function take `axis=0` as one of its arguments (to ensure
columnwise combination) and that it return a vector of a
        fixed length, no matter how long the text is.
Returns
-------
np.array, dimension `X.shape[1]`
"""
allvecs = np.array([lookup[w] for w in text.split() if w in lookup])
if len(allvecs) == 0:
dim = len(next(iter(lookup.values())))
feats = np.zeros(dim)
else:
feats = np_func(allvecs, axis=0)
return feats
def glove_phi(text, np_func=np.mean):
return vsm_phi(text, glove_lookup, np_func=np_func)
%%time
_ = sst.experiment(
sst.train_reader(SST_HOME),
glove_phi,
fit_maxent_classifier,
assess_dataframes=sst.dev_reader(SST_HOME),
vectorize=False) # Tell `experiment` that we already have our feature vectors.
```
### Yelp representations
Our Yelp VSMs seem pretty well-attuned to the SST, so we might think that they can do even better than the general-purpose GloVe inputs. Here is a quick assessment of that idea, building on ideas we developed in the unit on VSMs.
```
yelp20 = pd.read_csv(
os.path.join(VSMDATA_HOME, 'yelp_window20-flat.csv.gz'), index_col=0)
yelp20_ppmi = vsm.pmi(yelp20, positive=False)
yelp20_ppmi_svd = vsm.lsa(yelp20_ppmi, k=300)
yelp_lookup = dict(zip(yelp20_ppmi_svd.index, yelp20_ppmi_svd.values))
def yelp_phi(text, np_func=np.mean):
    return vsm_phi(text, yelp_lookup, np_func=np_func)
%%time
_ = sst.experiment(
sst.train_reader(SST_HOME),
yelp_phi,
fit_maxent_classifier,
assess_dataframes=sst.dev_reader(SST_HOME),
vectorize=False) # Tell `experiment` that we already have our feature vectors.
```
### Remarks on this approach
* Recall that our `unigrams_phi` created feature representations with over 16K dimensions and got about 0.52 with no hyperparameter tuning.
* The above models' feature representations have only 300 dimensions. While they are struggling with the neutral category, we can probably overcome this with some additional attention to the representations and to our strategies for optimization.
* The promise of the Mittens model of [Dingwall and Potts 2018](https://arxiv.org/abs/1803.09901) is that we can use GloVe itself to update the general purpose information in the 'glove.6B' vectors with specialized information from one of these Yelp count matrices. That might be worth trying; the `mittens` package (`pip install mittens`) already implements this!
* That said, just averaging all the word representations is pretty unappealing linguistically. There's no doubt that we're losing a lot of valuable information in doing this. The models we turn to now can be seen as addressing this shortcoming while retaining the insight that our distributed representations are valuable for this task.
* We'll return to these ideas below, when we consider [the VecAvg baseline from Socher et al. 2013](#The-VecAvg-baseline-from-Socher-et-al.-2013). That model also posits a simple, fixed combination function (averaging). However, it begins with randomly initialized representations and updates them as part of training.
## RNN classifiers
A recurrent neural network (RNN) is any deep learning model that processes its inputs sequentially. There are many variations on this theme. The one that we use here is an __RNN classifier__.
<img src="fig/rnn_classifier.png" width=800 />
The version of the model that is implemented in `np_rnn_classifier.py` corresponds exactly to the above diagram. We can express it mathematically as follows:
$$\begin{align*}
h_{t} &= \tanh(x_{t}W_{xh} + h_{t-1}W_{hh}) \\
y &= \textbf{softmax}(h_{n}W_{hy} + b_y)
\end{align*}$$
where $1 \leqslant t \leqslant n$. The first line defines the recurrence: each hidden state $h_{t}$ is defined by the input $x_{t}$ and the previous hidden state $h_{t-1}$, together with weight matrices $W_{xh}$ and $W_{hh}$, which are used at all timesteps. As indicated in the above diagram, the sequence of hidden states is padded with an initial state $h_{0}$. In our implementations, this is always an all $0$ vector, but it can be initialized in more sophisticated ways (some of which we will explore in our units on natural language inference and grounded natural language generation).
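To make the recurrence concrete, here is a minimal NumPy sketch of this forward pass, with toy dimensions and random weights purely for illustration (it is not the implementation in `np_rnn_classifier.py`):
```
import numpy as np
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()
rng = np.random.RandomState(0)
embed_dim, hidden_dim, n_classes, seq_len = 4, 3, 2, 5
W_xh = rng.normal(size=(embed_dim, hidden_dim))
W_hh = rng.normal(size=(hidden_dim, hidden_dim))
W_hy = rng.normal(size=(hidden_dim, n_classes))
b_y = np.zeros(n_classes)
X = rng.normal(size=(seq_len, embed_dim))   # one example: a sequence of word vectors
h = np.zeros(hidden_dim)                    # h_0 is the all-0 vector
for x_t in X:                               # h_t = tanh(x_t W_xh + h_{t-1} W_hh)
    h = np.tanh(x_t @ W_xh + h @ W_hh)
y = softmax(h @ W_hy + b_y)                 # prediction from the final hidden state
print(y)
```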
The model in `torch_rnn_classifier.py` expands on the above and allows for more flexibility:
$$\begin{align*}
h_{t} &= \text{RNN}(x_{t}, h_{t-1}) \\
h &= f(h_{n}W_{hh} + b_{h}) \\
y &= \textbf{softmax}(hW_{hy} + b_y)
\end{align*}$$
Here, $\text{RNN}$ stands for all the parameters of the recurrent part of the model. This will depend on the choice one makes for `rnn_cell_class`; options include `nn.RNN`, `nn.LSTM`, and `nn.GRU`. In addition, the classifier part includes a hidden layer (middle row), and the user can decide on the activation function $f$ to use there (parameter: `classifier_activation`).
This is a potential gain over our average-vectors baseline, in that it processes each word in the context of those that came before it. Thus, not only is this sensitive to word order, but the hidden representations create the potential to encode how the preceding context for a word affects its interpretation.
The downside of this, of course, is that this model is much more difficult to set up and optimize. Let's dive into those details.
### RNN dataset preparation
SST contains trees, but the RNN processes just the sequence of leaf nodes. The function `sst.build_rnn_dataset` creates datasets in this format:
```
X_rnn_train, y_rnn_train = sst.build_rnn_dataset(sst.train_reader(SST_HOME))
```
Each member of `X_rnn_train` is a list of words. Here's a look at the start of the first:
```
X_rnn_train[0][: 6]
```
Because this is a classifier, `y_rnn_train` is just a list of labels, one per example:
```
y_rnn_train[0]
```
For experiments, let's build a `dev` dataset as well:
```
X_rnn_dev, y_rnn_dev = sst.build_rnn_dataset(sst.dev_reader(SST_HOME))
```
### Vocabulary for the embedding
The first delicate issue we need to address is the vocabulary for our model:
* As indicated in the figure above, the first thing we do when processing an example is look up the words in an embedding (a VSM), which has to have a fixed dimensionality.
* We can use our training data to specify the vocabulary for this embedding; at prediction time, though, we will inevitably encounter words we haven't seen before.
* The convention we adopt here is to map them to an `$UNK` token that is in our pre-specified vocabulary.
* At the same time, we might want to collapse infrequent tokens into `$UNK` to make optimization easier and to try to create reasonable representations for words that we have to map to `$UNK` at test time.
In `utils`, the function `get_vocab` will help you specify a vocabulary. It will let you choose a vocabulary by optionally specifying `mincount` or `n_words`, and it will ensure that `$UNK` is included.
```
sst_full_train_vocab = utils.get_vocab(X_rnn_train)
print("sst_full_train_vocab has {:,} items".format(len(sst_full_train_vocab)))
```
This frankly seems too big relative to our dataset size. Let's restrict to just words that occur at least twice:
```
sst_train_vocab = utils.get_vocab(X_rnn_train, mincount=2)
print("sst_train_vocab has {:,} items".format(len(sst_train_vocab)))
```
### PyTorch RNN classifier
Here and throughout, we'll rely on `early_stopping=True` to try to find the optimal time to stop optimization. This behavior can be further refined by setting different values of `validation_fraction`, `n_iter_no_change`, and `tol`. For additional discussion, see [the section on model convergence in the evaluation methods notebook](#Assessing-models-without-convergence).
```
rnn = TorchRNNClassifier(
sst_train_vocab,
early_stopping=True)
%time _ = rnn.fit(X_rnn_train, y_rnn_train)
rnn_dev_preds = rnn.predict(X_rnn_dev)
print(classification_report(y_rnn_dev, rnn_dev_preds, digits=3))
```
The above numbers are just a starting point. Let's try to improve on them by using pretrained embeddings and then by exploring a range of hyperparameter options.
### Pretrained embeddings
With `embedding=None`, `TorchRNNClassifier` (and its counterpart in `np_rnn_classifier.py`) creates random embeddings. You can also pass in an embedding, as long as you make sure it has the right vocabulary. The utility `utils.create_pretrained_embedding` will help with that:
```
glove_embedding, sst_glove_vocab = utils.create_pretrained_embedding(
glove_lookup, sst_train_vocab)
```
Here's an illustration using `TorchRNNClassifier`:
```
rnn_glove = TorchRNNClassifier(
sst_glove_vocab,
embedding=glove_embedding,
early_stopping=True)
%time _ = rnn_glove.fit(X_rnn_train, y_rnn_train)
rnn_glove_dev_preds = rnn_glove.predict(X_rnn_dev)
print(classification_report(y_rnn_dev, rnn_glove_dev_preds, digits=3))
```
It looks like pretrained representations give us a notable boost, but we're still below most of the simpler models explored in [the previous notebook](sst_02_hand_built_features.ipynb).
### RNN hyperparameter tuning experiment
As we saw in [the previous notebook](sst_02_hand_built_features.ipynb), we're not really done until we've done some hyperparameter search. So let's round out this section by cross-validating the RNN that uses GloVe embeddings, to see if we can improve on the default-parameters model we evaluated just above. For this, we'll use `sst.experiment`:
```
def simple_leaves_phi(text):
return text.split()
def fit_rnn_with_hyperparameter_search(X, y):
basemod = TorchRNNClassifier(
sst_train_vocab,
embedding=glove_embedding,
batch_size=25, # Inspired by comments in the paper.
bidirectional=True,
early_stopping=True)
# There are lots of other parameters and values we could
# explore, but this is at least a solid start:
param_grid = {
'embed_dim': [50, 75, 100],
'hidden_dim': [50, 75, 100],
'eta': [0.001, 0.01]}
bestmod = utils.fit_classifier_with_hyperparameter_search(
X, y, basemod, cv=3, param_grid=param_grid)
return bestmod
%%time
rnn_experiment_xval = sst.experiment(
sst.train_reader(SST_HOME),
simple_leaves_phi,
fit_rnn_with_hyperparameter_search,
assess_dataframes=sst.dev_reader(SST_HOME),
vectorize=False)
```
This model looks quite competitive with the simpler models we explored previously, and perhaps an even wider hyperparameter search would lead to additional improvements. In [finetuning.ipynb](finetuning.ipynb), we look at variants of the above that involve fine-tuning with BERT, and those models achieve even better results, which further highlights the value of rich pretraining.
## The VecAvg baseline from Socher et al. 2013
One of the baseline models from [Socher et al., Table 1](http://www.aclweb.org/anthology/D/D13/D13-1170.pdf) is __VecAvg__. This is like the model we explored above under the heading of [Distributed representations as features](#Distributed-representations-as-features), but it uses a random initial embedding that is updated as part of optimization. Another perspective on it is that it is like the RNN we just evaluated, but with the RNN parameters replaced by averaging.
In Socher et al. 2013, this model does reasonably well, scoring 80.1 on the root-only binary problem. In this section, we reimplement it, relying on `TorchRNNClassifier` to handle most of the heavy-lifting, and we evaluate it with a reasonably wide hyperparameter search.
### Defining the model
The core model is `TorchVecAvgModel`, which just looks up embeddings, averages them, and feeds the result to a classifier layer:
```
class TorchVecAvgModel(nn.Module):
def __init__(self, vocab_size, output_dim, device, embed_dim=50):
super().__init__()
self.vocab_size = vocab_size
self.embed_dim = embed_dim
self.output_dim = output_dim
self.device = device
self.embedding = nn.Embedding(self.vocab_size, self.embed_dim)
self.classifier_layer = nn.Linear(self.embed_dim, self.output_dim)
def forward(self, X, seq_lengths):
embs = self.embedding(X)
# Mask based on the **true** lengths:
mask = [torch.ones(l, self.embed_dim) for l in seq_lengths]
mask = torch.nn.utils.rnn.pad_sequence(mask, batch_first=True)
mask = mask.to(self.device)
# True average:
mu = (embs * mask).sum(axis=1) / seq_lengths.unsqueeze(1)
# Classifier:
logits = self.classifier_layer(mu)
return logits
```
For the main interface, we can just subclass `TorchRNNClassifier` and change the `build_graph` method to use `TorchVecAvgModel`. (For more details on the code and logic here, see the notebook [tutorial_torch_models.ipynb](tutorial_torch_models.ipynb).)
```
class TorchVecAvgClassifier(TorchRNNClassifier):
def build_graph(self):
return TorchVecAvgModel(
vocab_size=len(self.vocab),
output_dim=self.n_classes_,
device=self.device,
embed_dim=self.embed_dim)
```
### VecAvg hyperparameter tuning experiment
Now that we have the model implemented, let's see if we can reproduce Socher et al.'s 80.1 on the binary, root-only version of SST.
```
train_df = sst.train_reader(SST_HOME)
train_bin_df = train_df[train_df.label != 'neutral']
dev_df = sst.dev_reader(SST_HOME)
dev_bin_df = dev_df[dev_df.label != 'neutral']
test_df = sst.sentiment_reader(os.path.join(SST_HOME, "sst3-test-labeled.csv"))
test_bin_df = test_df[test_df.label != 'neutral']
def fit_vecavg_with_hyperparameter_search(X, y):
basemod = TorchVecAvgClassifier(
sst_train_vocab,
early_stopping=True)
param_grid = {
'embed_dim': [50, 100, 200, 300],
'eta': [0.001, 0.01, 0.05]}
bestmod = utils.fit_classifier_with_hyperparameter_search(
X, y, basemod, cv=3, param_grid=param_grid)
return bestmod
%%time
vecavg_experiment_xval = sst.experiment(
[train_bin_df, dev_bin_df],
simple_leaves_phi,
fit_vecavg_with_hyperparameter_search,
assess_dataframes=test_bin_df,
vectorize=False)
```
Excellent – it looks like we basically reproduced the number from the paper (80.1).
# Building the Best AND Gate
```
from qiskit import *
from qiskit.tools.visualization import plot_histogram
from qiskit.providers.aer import noise
import numpy as np
```
In Problem Set 1, you made an AND gate with quantum gates. This time you'll do the same again, but for a real device. Using real devices gives you two major constraints to deal with. One is the connectivity, and the other is noise.
The connectivity tells you which `cx` gates it is possible to perform directly. For example, the device `ibmq_5_tenerife` has five qubits numbered from 0 to 4. It has a connectivity defined by
```
coupling_map = [[1, 0], [2, 0], [2, 1], [3, 2], [3, 4], [4, 2]]
```
Here the `[1,0]` tells us that we can implement a `cx` with qubit 1 as control and qubit 0 as target, the `[2,0]` tells us we can have qubit 2 as control and 0 as target, and so on. These are the `cx` gates that the device can implement directly.
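For example, a small helper (purely illustrative, not part of the Qiskit API) can check whether a given `cx` is available directly:
```
def cx_is_native(control, target, coupling_map=coupling_map):
    # True if the device can apply cx(control, target) without any rerouting
    return [control, target] in coupling_map
print(cx_is_native(1, 0))  # True
print(cx_is_native(0, 1))  # False: only the reverse direction is native
```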
The 'noise' of a device is the collective effect of all the things that shouldn't happen, but nevertheless do happen. Noise means that the output is not always what we expect. There is noise associated with all processes in a quantum circuit: preparing the initial states, applying gates and measuring the output. For the gates, noise levels can vary between different gates and between different qubits. The `cx` gates are typically more noisy than any single qubit gate.
We can also simulate noise using a noise model, and we can set the noise model based on measurements of the noise for a real device. The following noise model is based on `ibmq_5_tenerife`.
```
noise_dict = {'errors': [{'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0004721766167523067, 0.0004721766167523067, 0.0004721766167523067, 0.9985834701497431], 'gate_qubits': [[0]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0005151090708174488, 0.0005151090708174488, 0.0005151090708174488, 0.9984546727875476], 'gate_qubits': [[1]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0005151090708174488, 0.0005151090708174488, 0.0005151090708174488, 0.9984546727875476], 'gate_qubits': [[2]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.000901556048412383, 0.000901556048412383, 0.000901556048412383, 0.9972953318547628], 'gate_qubits': [[3]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0011592423249461303, 0.0011592423249461303, 0.0011592423249461303, 0.9965222730251616], 'gate_qubits': [[4]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0009443532335046134, 0.0009443532335046134, 0.0009443532335046134, 0.9971669402994862], 'gate_qubits': [[0]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0010302181416348977, 0.0010302181416348977, 0.0010302181416348977, 0.9969093455750953], 'gate_qubits': [[1]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0010302181416348977, 0.0010302181416348977, 0.0010302181416348977, 0.9969093455750953], 'gate_qubits': [[2]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.001803112096824766, 0.001803112096824766, 0.001803112096824766, 0.9945906637095256], 'gate_qubits': [[3]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0023184846498922607, 0.0023184846498922607, 0.0023184846498922607, 0.9930445460503232], 'gate_qubits': [[4]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, 
{'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.9672573379090872], 'gate_qubits': [[1, 0]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.9699888805021712], 'gate_qubits': [[2, 0]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.9627184072576159], 'gate_qubits': [[2, 1]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 
'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.9437457618579164], 'gate_qubits': [[3, 2]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.9339816349935997], 'gate_qubits': [[3, 4]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.9307167621063416], 
'gate_qubits': [[4, 2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9372499999999999, 0.06275000000000008], [0.06275000000000008, 0.9372499999999999]], 'gate_qubits': [[0]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9345, 0.0655], [0.0655, 0.9345]], 'gate_qubits': [[1]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.97075, 0.029249999999999998], [0.029249999999999998, 0.97075]], 'gate_qubits': [[2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9742500000000001, 0.02574999999999994], [0.02574999999999994, 0.9742500000000001]], 'gate_qubits': [[3]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.8747499999999999, 0.12525000000000008], [0.12525000000000008, 0.8747499999999999]], 'gate_qubits': [[4]]}], 'x90_gates': []}
noise_model = noise.noise_model.NoiseModel.from_dict( noise_dict )
```
Running directly on the device requires you to have an IBMQ account, and for you to sign in to it within your program. In order to not worry about all this, we'll instead use a simulation of the 5 qubit device defined by the constraints set above.
```
qr = QuantumRegister(5, 'qr')
cr = ClassicalRegister(1, 'cr')
backend = Aer.get_backend('qasm_simulator')
```
We now define the `AND` function. This has a few differences from the version in Exercise 1. Firstly, it is defined on a 5-qubit circuit, so you'll need to decide which of the 5 qubits are used to encode `input1`, `input2` and the output. Secondly, the output is a histogram of the number of times that each output is found when the process is repeated over 10000 samples.
```
def AND (input1,input2, q_1=0,q_2=1,q_out=2):
    # The keyword q_1 specifies the qubit used to encode input1
    # The keyword q_2 specifies the qubit used to encode input2
    # The keyword q_out specifies the qubit used for the output
qc = QuantumCircuit(qr, cr)
# prepare input on qubits q1 and q2
if input1=='1':
qc.x( qr[ q_1 ] )
if input2=='1':
qc.x( qr[ q_2 ] )
    qc.ccx(qr[ q_1 ],qr[ q_2 ],qr[ q_out ]) # the AND just needs a ccx (Toffoli) gate
    qc.measure(qr[ q_out ],cr[0]) # the output qubit is measured
    # the circuit is run on a simulator, but with the noise and connectivity of Tenerife reproduced
job = execute(qc, backend, shots=10000, noise_model=noise_model,
coupling_map=coupling_map,
basis_gates=noise_model.basis_gates)
output = job.result().get_counts()
return output
```
For example, here are the results when both inputs are `0`.
```
result = AND('0','0')
print( result )
plot_histogram( result )
```
We'll compare the results across all four inputs to find the least reliable case.
```
worst = 1
for input1 in ['0','1']:
for input2 in ['0','1']:
print('\nProbability of correct answer for inputs',input1,input2)
prob = AND(input1,input2, q_1=0,q_2=1,q_out=2)[str(int( input1=='1' and input2=='1' ))]/10000
print( prob )
worst = min(worst,prob)
print('\nThe lowest of these probabilities was',worst)
```
The `AND` function above uses the `ccx` gate to implement the required operation. But you now know how to make your own. Find a way to implement an `AND` for which the lowest of the above probabilities is better than for a simple `ccx`.
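One possible direction, as a sketch rather than a definitive solution: the `ccx` gets decomposed into several `cx` gates, and extra gates are added whenever the chosen qubit pairs are not directly connected, so simply picking a qubit assignment that better matches the coupling map can already change the result. Whether this beats the default depends on the noise model, so treat it only as a starting point for your own experiments.
```
# Re-run the comparison with a qubit assignment chosen to match the coupling map
# (illustrative only; you could also replace the ccx with your own gate sequence)
worst = 1
for input1 in ['0','1']:
    for input2 in ['0','1']:
        expected = str(int( input1=='1' and input2=='1' ))
        prob = AND(input1, input2, q_1=1, q_2=2, q_out=0).get(expected, 0)/10000
        worst = min(worst, prob)
print('The lowest probability with qubits (1, 2 -> 0) was', worst)
```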
'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.9437457618579164], 'gate_qubits': [[3, 2]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.9339816349935997], 'gate_qubits': [[3, 4]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.9307167621063416], 
'gate_qubits': [[4, 2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9372499999999999, 0.06275000000000008], [0.06275000000000008, 0.9372499999999999]], 'gate_qubits': [[0]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9345, 0.0655], [0.0655, 0.9345]], 'gate_qubits': [[1]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.97075, 0.029249999999999998], [0.029249999999999998, 0.97075]], 'gate_qubits': [[2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9742500000000001, 0.02574999999999994], [0.02574999999999994, 0.9742500000000001]], 'gate_qubits': [[3]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.8747499999999999, 0.12525000000000008], [0.12525000000000008, 0.8747499999999999]], 'gate_qubits': [[4]]}], 'x90_gates': []}
noise_model = noise.noise_model.NoiseModel.from_dict( noise_dict )
qr = QuantumRegister(5, 'qr')
cr = ClassicalRegister(1, 'cr')
backend = Aer.get_backend('qasm_simulator')
def AND (input1,input2, q_1=0,q_2=1,q_out=2):
# The keyword q_1 specifies the qubit used to encode input1
# The keyword q_2 specifies the qubit used to encode input2
# The keyword q_out specifies the qubit used for the output
qc = QuantumCircuit(qr, cr)
# prepare input on qubits q1 and q2
if input1=='1':
qc.x( qr[ q_1 ] )
if input2=='1':
qc.x( qr[ q_2 ] )
qc.ccx(qr[ q_1 ],qr[ q_2 ],qr[ q_out ]) # the AND just needs a single ccx (Toffoli) gate
qc.measure(qr[ q_out ],cr[0]) # the output from qubit q_out is measured
# the circuit is run on a simulator, but we do it so that the noise and connectivity of Tenerife are also reproduced
job = execute(qc, backend, shots=10000, noise_model=noise_model,
coupling_map=coupling_map,
basis_gates=noise_model.basis_gates)
output = job.result().get_counts()
return output
result = AND('0','0')
print( result )
plot_histogram( result )
worst = 1
for input1 in ['0','1']:
for input2 in ['0','1']:
print('\nProbability of correct answer for inputs',input1,input2)
prob = AND(input1,input2, q_1=0,q_2=1,q_out=2)[str(int( input1=='1' and input2=='1' ))]/10000
print( prob )
worst = min(worst,prob)
print('\nThe lowest of these probabilities was',worst)
# МАДМО
<a href="https://mipt.ru/science/labs/laboratoriya-neyronnykh-sistem-i-glubokogo-obucheniya/"><img align="right" src="https://avatars1.githubusercontent.com/u/29918795?v=4&s=200" alt="DeepHackLab" style="position:relative;top:-40px;right:10px;height:100px;" /></a>
### MIPT Phystech School of Applied Mathematics and Informatics
### Neural Networks and Deep Learning Laboratory (DeepHackLab)
The homework must be uploaded to the shared repository, in a folder with your name
## Homework 1
### Python Basics and the NumPy Package
---
```
import numpy as np
import random
import scipy.stats as sps
```
### Problem 1
In the first problem you are asked to multiply two square matrices in two ways -- without the ***numpy*** package and with it.
```
# To generate the matrices we use the random module -- it is used to generate random objects
# the sample function creates a random sample. It is passed a tuple (i,j) as an argument, where i is the number of rows
# and j is the number of columns.
a = np.random.sample((1000,1000))
b = np.random.sample((1000,1000))
# print the rank of each matrix using the np.linalg.matrix_rank function.
# Also use the shape attribute -- what does it return?
# ========
print(f"Rank of A: {np.linalg.matrix_rank(a)}")
print(f"Shape of A: {a.shape}")
print(f"Rank of B: {np.linalg.matrix_rank(b)}")
print(f"Shape of B: {b.shape}")
# ========
print(a)
print(b)
def mult(a, b):
# write the matrix multiplication here without
# using NumPy and print the result
if a.shape[1] == b.shape[0]:
result = np.zeros((a.shape[0], b.shape[1]))
b_tr = b.T
for row_id, row_a in enumerate(a):
for col_id, column_b in enumerate(b_tr):
sum_pr = 0
for i in range(row_a.shape[0]):
sum_pr += row_a[i] * column_b[i]
result[row_id][col_id] = sum_pr
print(result)
else:
print("The matrices have incompatible dimensions")
def np_mult(a, b):
result = np.matmul(a,b)
print(result)
%%time
# time the function without NumPy
mult(a,b)
%%time
# time the function with NumPy
np_mult(a,b)
```
### Problem 2
Write a function that, given a sequence $\{A_i\}_{i=1}^n$, builds the sequence $S_n$, where $S_k = \frac{A_1 + ... + A_k}{k}$.
Again, do it both with the **NumPy** library and without it. Compare the speed and explain the result.
```
# function solving the problem with NumPy
def sec_av(A):
return np.cumsum(A) / np.array(range(1, A.shape[0] + 1))
# function without NumPy
def stupid_sec_av(A):
S = [0 for i in range(len(A))]
for ind, el in enumerate(A):
if ind > 0:
S[ind] = S[ind-1] + el
else:
S[ind] = el
for i in range(len(A)):
S[i] = S[i] / (i + 1)
return S
# define a sequence and test it with your functions.
# The first function should be ~50 times faster
A = sps.uniform.rvs(size=10 ** 7)
%time S1 = sec_av(A)
%time S2 = stupid_sec_av(A)
# check correctness:
np.abs(S1 - S2).sum()
```
### Problem 3
Suppose an array $X$ is given. Build a new array in which all elements with odd indices are replaced by the number $a$ (defaulting to 1 if it is not given). All elements of the original array with even indices must be cubed and written back in reverse order relative to the positions of those elements. The array $X$ itself must remain unchanged. Finally, merge the array X with the transformed X and print the result in reverse order.
```
X = [10,20,30, 40, 50]
X = np.array(X)
index = range(0, len(X))
index = index[-1::-1]
print(X[index])
# function solving the problem with NumPy
def transformation(X, a=1):
Y = X.copy()
index_odd = []
index_even = []
for i in range(Y.shape[0]):
if i % 2 == 0:
index_even.append(i)
else:
index_odd.append(i)
Y[index_odd] = a
index_even_revert = index_even[-1::-1]
Y[index_even_revert] = Y[index_even] ** 3
return Y
# function solving the problem without NumPy
def stupid_transformation(X, a=1):
Y = X.copy()
index_even = []
for i in range(Y.shape[0]):
if i % 2 == 0:
Y[i] = Y[i] ** 3
index_even.append(i)
else:
Y[i] = a
index_even_revert = index_even[-1::-1]
for i in range(len(index_even_revert) // 2):
buf = Y[index_even_revert[i]]
Y[index_even_revert[i]] = Y[index_even[i]]
Y[index_even[i]] = buf
return Y
X = sps.uniform.rvs(size=10 ** 7)
# here the NumPy code is roughly 20 times more efficient.
# if you are going to print the array without np -- better look at its size first
%time S1 = transformation(X)
%time S2 = stupid_transformation(X)
# check correctness:
np.abs(S1 - S2).sum()
```
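As a side note (not part of the original assignment), the same transformation can be written more compactly with NumPy slicing alone; here is a minimal sketch that should match `transformation` above:
```
import numpy as np

def transformation_sliced(X, a=1):
    Y = X.copy()                   # the original X stays unchanged
    Y[1::2] = a                    # every odd index becomes a
    Y[::2] = (X[::2] ** 3)[::-1]   # even-index elements, cubed, written back in reverse order
    return Y
```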
Why do the ***numpy*** methods turn out to be more efficient?
1) A Python list is a list of pointers, i.e. the data itself is scattered across the heap. In contrast, a NumPy array is a single contiguous block of memory holding the data.
2) When operating on the elements of a Python list, a type check happens for every element; in NumPy arrays the data is homogeneous.
3) NumPy is implemented in C, which avoids the per-element overhead present in Python (a quick timing sketch follows below).
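For illustration, here is a small timing sketch (not part of the assignment) comparing a pure-Python sum over a list with the vectorized NumPy equivalent; exact numbers will of course vary by machine:
```
import timeit
import numpy as np

data = np.random.sample(10 ** 6)
py_list = data.tolist()

t_py = timeit.timeit(lambda: sum(py_list), number=10)  # pure-Python loop over a list of floats
t_np = timeit.timeit(lambda: data.sum(), number=10)    # vectorized sum over contiguous memory

print(f"Python sum: {t_py:.4f} s, NumPy sum: {t_np:.4f} s")
```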
## Additional Problems
The additional problems assume that you will figure out some ***numpy*** functions on your own in order to solve them.
These problems are not mandatory, but they can improve your rating (the exact rules for counting the additional problems will be announced later).
### Problem 4*
You are given a function of two variables, $f(x, y) = \sin(x)\cos(y)$ (it simply makes a pretty 3D plot), as well as a function for plotting $f(x, y)$ (`draw_f()`), which takes as input a two-dimensional grid on which the function will be evaluated.
You need to figure out how to build such grids (hint: it is one specific ***numpy*** function) and pass such a grid to the plotting function.
```
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
def f(x, y):
'''Function of two variables'''
return np.sin(x) * np.cos(y)
def draw_f(grid_x, grid_y):
'''Plotting function for f(x, y)'''
fig = plt.figure(figsize=(10, 8))
ax = Axes3D(fig)
ax.plot_surface(grid_x, grid_y, f(grid_x, grid_y), cmap='inferno')
plt.show()
grid_x, grid_y = np.mgrid[-5:5:0.5,-5:5:0.5]
draw_f(grid_x, grid_y)
```
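The solution above uses `np.mgrid`; an equivalent grid can also be built with `np.meshgrid`, shown here as an alternative sketch (`indexing='ij'` is assumed so that the axis ordering matches `mgrid`):
```
xs = np.arange(-5, 5, 0.5)
ys = np.arange(-5, 5, 0.5)
grid_x2, grid_y2 = np.meshgrid(xs, ys, indexing='ij')
draw_f(grid_x2, grid_y2)
```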
### Problem 5*
Pick any image and put it into the folder with your code. When loaded, it has 3 dimensions: **(w, h, num_channels)**, where **w** is the image width in pixels, **h** is the image height in pixels, and **num_channels** is the number of channels *(R, G, B, alpha)*.
You need to "unroll" the image into a one-dimensional array of size w \* h \* num_channels by writing **one line of code**.
```
from matplotlib import pyplot as plt
%matplotlib inline
path_to_image = './image.jpg'
image_array = plt.imread(path_to_image)
plt.imshow(image_array);
flat_image_array = image_array.flatten()
flat_image_array.shape
```
This notebook is part of the `nbsphinx` documentation: https://nbsphinx.readthedocs.io/.
# Code Cells
## Code, Output, Streams
An empty code cell:
Two empty lines:
```
```
Leading/trailing empty lines:
```
# 2 empty lines before, 1 after
```
A simple output:
```
6 * 7
```
The standard output stream:
```
print('Hello, world!')
```
Normal output + standard output
```
print('Hello, world!')
6 * 7
```
The standard error stream is highlighted and displayed just below the code cell.
The standard output stream comes afterwards (with no special highlighting).
Finally, the "normal" output is displayed.
```
import sys
print("I'll appear on the standard error stream", file=sys.stderr)
print("I'll appear on the standard output stream")
"I'm the 'normal' output"
```
<div class="alert alert-info">
**Note:**
Using the IPython kernel, the order is actually mixed up,
see https://github.com/ipython/ipykernel/issues/280.
</div>
## Cell Magics
IPython can handle code in other languages by means of [cell magics](https://ipython.readthedocs.io/en/stable/interactive/magics.html#cell-magics):
```
%%bash
for i in 1 2 3
do
echo $i
done
```
## Special Display Formats
See [IPython example notebook](https://nbviewer.jupyter.org/github/ipython/ipython/blob/master/examples/IPython Kernel/Rich Output.ipynb).
### Local Image Files
```
from IPython.display import Image
i = Image(filename='images/notebook_icon.png')
i
display(i)
```
See also [SVG support for LaTeX](markdown-cells.ipynb#SVG-support-for-LaTeX).
```
from IPython.display import SVG
SVG(filename='images/python_logo.svg')
```
### Image URLs
```
Image(url='https://www.python.org/static/img/python-logo-large.png')
Image(url='https://www.python.org/static/img/python-logo-large.png', embed=True)
Image(url='https://jupyter.org/assets/nav_logo.svg')
```
### Math
```
from IPython.display import Math
eq = Math(r'\int\limits_{-\infty}^\infty f(x) \delta(x - x_0) dx = f(x_0)')
eq
display(eq)
from IPython.display import Latex
Latex(r'This is a \LaTeX{} equation: $a^2 + b^2 = c^2$')
%%latex
\begin{equation}
\int\limits_{-\infty}^\infty f(x) \delta(x - x_0) dx = f(x_0)
\end{equation}
```
### Plots
The output formats for Matplotlib plots can be customized.
You'll need separate settings for the Jupyter Notebook application and for `nbsphinx`.
If you want to use SVG images for Matplotlib plots,
add this line to your IPython configuration file:
```python
c.InlineBackend.figure_formats = {'svg'}
```
If you want SVG images, but also want nice plots when exporting to LaTeX/PDF, you can select:
```python
c.InlineBackend.figure_formats = {'svg', 'pdf'}
```
If you want to use the default PNG plots or HiDPI plots using `'png2x'` (a.k.a. `'retina'`),
make sure to set this:
```python
c.InlineBackend.rc = {'figure.dpi': 96}
```
This is needed because the default `'figure.dpi'` value of 72
is only valid for the [Qt Console](https://qtconsole.readthedocs.io/).
If you are planning to store your SVG plots as part of your notebooks,
you should also have a look at the `'svg.hashsalt'` setting.
For more details on these and other settings, have a look at
[Default Values for Matplotlib's "inline" Backend](https://nbviewer.jupyter.org/github/mgeier/python-audio/blob/master/plotting/matplotlib-inline-defaults.ipynb).
The configuration file `ipython_kernel_config.py` can be either
in the directory where your notebook is located
(see the [ipython_kernel_config.py](ipython_kernel_config.py) in this directory),
or in your profile directory
(typically `~/.ipython/profile_default/ipython_kernel_config.py`).
To find out your IPython profile directory, use this command:
python3 -m IPython profile locate
A local `ipython_kernel_config.py` in the notebook directory
also works on https://mybinder.org/.
Alternatively, you can create a file with those settings in a file named
`.ipython/profile_default/ipython_kernel_config.py` in your repository.
To get SVG and PDF plots for `nbsphinx`,
use something like this in your `conf.py` file:
```python
nbsphinx_execute_arguments = [
"--InlineBackend.figure_formats={'svg', 'pdf'}",
"--InlineBackend.rc={'figure.dpi': 96}",
]
```
In the following example, `nbsphinx` should use an SVG image in the HTML output
and a PDF image for LaTeX/PDF output.
```
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=[6, 3])
ax.plot([4, 9, 7, 20, 6, 33, 13, 23, 16, 62, 8]);
```
Alternatively, the figure format(s) can also be chosen directly in the notebook
(which overrides the setting in `nbsphinx_execute_arguments` and in the IPython configuration):
```
%config InlineBackend.figure_formats = ['png']
fig
```
If you want to use PNG images, but with HiDPI resolution,
use the special `'png2x'` (a.k.a. `'retina'`) format
(which also looks nice in the LaTeX output):
```
%config InlineBackend.figure_formats = ['png2x']
fig
```
### Pandas Dataframes
[Pandas dataframes](https://pandas.pydata.org/pandas-docs/stable/getting_started/dsintro.html#dataframe)
should be displayed as nicely formatted HTML tables (if you are using HTML output).
```
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.randint(0, 100, size=[5, 4]),
columns=['a', 'b', 'c', 'd'])
df
```
For LaTeX output, however, the plain text output is used by default.
To get nice LaTeX tables, a few settings have to be changed:
```
pd.set_option('display.latex.repr', True)
```
This is not enabled by default because of
[Pandas issue #12182](https://github.com/pandas-dev/pandas/issues/12182).
The generated LaTeX tables utilize the `booktabs` package, so you have to make sure that package is [loaded in the preamble](https://www.sphinx-doc.org/en/master/latex.html) with:
\usepackage{booktabs}
In order to allow page breaks within tables, you should use:
```
pd.set_option('display.latex.longtable', True)
```
The `longtable` package is already used by Sphinx,
so you don't have to manually load it in the preamble.
Finally, if you want to use LaTeX math expressions in your dataframe, you'll have to disable escaping:
```
pd.set_option('display.latex.escape', False)
```
The above settings should have no influence on the HTML output, but the LaTeX output should now look nicer:
```
df = pd.DataFrame(np.random.randint(0, 100, size=[10, 4]),
columns=[r'$\alpha$', r'$\beta$', r'$\gamma$', r'$\delta$'])
df
```
### YouTube Videos
```
from IPython.display import YouTubeVideo
YouTubeVideo('WAikxUGbomY')
```
### Interactive Widgets (HTML only)
The basic widget infrastructure is provided by
the `ipywidgets` module: https://ipywidgets.readthedocs.io/.
More advanced widgets are available in separate packages,
see for example https://jupyter.org/widgets.
```
import ipywidgets as w
slider = w.IntSlider()
slider.value = 42
slider
```
A widget typically consists of a so-called "model" and a "view" into that model.
If you display a widget multiple times,
all instances act as a "view" into the same "model".
That means that their state is synchronized.
You can move either one of these sliders to try this out:
```
slider
```
You can also link different widgets.
Widgets can be linked via the kernel
(which of course only works while a kernel is running)
or directly in the client
(which even works in the rendered HTML pages).
Widgets can be linked uni- or bi-directionally.
Examples for all 4 combinations are shown here:
```
link = w.IntSlider(description='link')
w.link((slider, 'value'), (link, 'value'))
jslink = w.IntSlider(description='jslink')
w.jslink((slider, 'value'), (jslink, 'value'))
dlink = w.IntSlider(description='dlink')
w.dlink((slider, 'value'), (dlink, 'value'))
jsdlink = w.IntSlider(description='jsdlink')
w.jsdlink((slider, 'value'), (jsdlink, 'value'))
w.VBox([link, jslink, dlink, jsdlink])
```
<div class="alert alert-info">
**Other Languages:**
The examples shown here are using Python,
but the widget technology can also be used with
different Jupyter kernels
(i.e. with different programming languages).
</div>
### Arbitrary JavaScript Output (HTML only)
```
%%javascript
var text = document.createTextNode("Hello, I was generated with JavaScript!");
// Content appended to "element" will be visible in the output area:
element.appendChild(text);
```
### Unsupported Output Types
If a code cell produces data with an unsupported MIME type, the Jupyter Notebook doesn't generate any output.
`nbsphinx`, however, shows a warning message.
```
display({
'text/x-python': 'print("Hello, world!")',
'text/x-haskell': 'main = putStrLn "Hello, world!"',
}, raw=True)
```
## ANSI Colors
The standard output and standard error streams may contain [ANSI escape sequences](https://en.wikipedia.org/wiki/ANSI_escape_code) to change the text and background colors.
```
print('BEWARE: \x1b[1;33;41mugly colors\x1b[m!', file=sys.stderr)
print('AB\x1b[43mCD\x1b[35mEF\x1b[1mGH\x1b[4mIJ\x1b[7m'
'KL\x1b[49mMN\x1b[39mOP\x1b[22mQR\x1b[24mST\x1b[27mUV')
```
The following code showing the 8 basic ANSI colors is based on http://tldp.org/HOWTO/Bash-Prompt-HOWTO/x329.html.
Each of the 8 colors has an "intense" variation, which is used for bold text.
```
text = ' XYZ '
formatstring = '\x1b[{}m' + text + '\x1b[m'
print(' ' * 6 + ' ' * len(text) +
''.join('{:^{}}'.format(bg, len(text)) for bg in range(40, 48)))
for fg in range(30, 38):
for bold in False, True:
fg_code = ('1;' if bold else '') + str(fg)
print(' {:>4} '.format(fg_code) + formatstring.format(fg_code) +
''.join(formatstring.format(fg_code + ';' + str(bg))
for bg in range(40, 48)))
```
ANSI also supports a set of 256 indexed colors.
The following code showing all of them is based on [http://bitmote.com/index.php?post/2012/11/19/Using-ANSI-Color-Codes-to-Colorize-Your-Bash-Prompt-on-Linux](https://web.archive.org/web/20190109005413/http://bitmote.com/index.php?post/2012/11/19/Using-ANSI-Color-Codes-to-Colorize-Your-Bash-Prompt-on-Linux).
```
formatstring = '\x1b[38;5;{0};48;5;{0}mX\x1b[1mX\x1b[m'
print(' + ' + ''.join('{:2}'.format(i) for i in range(36)))
print(' 0 ' + ''.join(formatstring.format(i) for i in range(16)))
for i in range(7):
i = i * 36 + 16
print('{:3} '.format(i) + ''.join(formatstring.format(i + j)
for j in range(36) if i + j < 256))
```
You can even use 24-bit RGB colors:
```
start = 255, 0, 0
end = 0, 0, 255
length = 79
out = []
for i in range(length):
rgb = [start[c] + int(i * (end[c] - start[c]) / length) for c in range(3)]
out.append('\x1b['
'38;2;{rgb[2]};{rgb[1]};{rgb[0]};'
'48;2;{rgb[0]};{rgb[1]};{rgb[2]}mX\x1b[m'.format(rgb=rgb))
print(''.join(out))
```
```
epochs = 10
n_test_batches = 200
```
# Part 11 - Secure Deep Learning Classification
## Your data matters, your model too
Data is the driver behind machine learning. Organizations that create and collect data are able to build and train their own machine learning models. This allows them to offer the use of such models as a service (MLaaS) to outside organizations. This is useful for organizations that cannot create these models themselves but would still like to use them to make predictions on their own data.
However, a model hosted in the cloud still presents a privacy/IP issue. In order for external organizations to use it, they must either upload their input data (such as images to be classified) or download the model. Uploading input data can be problematic from a privacy perspective, but downloading the model may not be an option if the organization that created/owns it is not willing to share it.
## Computing over Encrypted Data
In this context, one potential solution is to encrypt both the model and the data in a way that allows one organization to use a model owned by another organization without either disclosing their IP to the other. Several encryption schemes exist that allow computation over encrypted data, among which Secure Multi-Party Computation (SMPC), Homomorphic Encryption (FHE/SHE) and Functional Encryption (FE) are the best-known types. We will focus here on Secure Multi-Party Computation ([introduced in detail in Tutorial 5](https://github.com/OpenMined/PySyft/blob/dev/examples/tutorials/Part%205%20-%20Intro%20to%20Encrypted%20Programs.ipynb)), which consists of private additive sharing. It relies on crypto protocols such as SecureNN and SPDZ, the details of which are given [in this excellent blog post](https://mortendahl.github.io/2017/09/19/private-image-analysis-with-mpc/).
These protocols achieve remarkable performance over encrypted data, and over the past few months we have been working to make them easy to use. Specifically, we are building tools that let you use these protocols without having to re-implement the protocol yourself (or even know the cryptography behind how it works). Let's jump right in.
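To give a flavour of the additive sharing mentioned above, here is a minimal, purely illustrative sketch in plain Python (not PySyft's actual implementation) of how a value can be split into shares that individually reveal nothing, while still supporting addition:
```
import random

Q = 2 ** 31 - 1  # a large modulus, chosen arbitrarily for illustration

def share(x):
    # split x into two random shares that sum to x modulo Q
    a = random.randrange(Q)
    b = (x - a) % Q
    return a, b

def reconstruct(a, b):
    return (a + b) % Q

x_shares = share(5)
y_shares = share(7)
# shares can be added component-wise without revealing x or y
z_shares = [(xs + ys) % Q for xs, ys in zip(x_shares, y_shares)]
print(reconstruct(*z_shares))  # 12
```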
## Set up
The exact setting in this tutorial is the following: consider that you are the server and you have some data. First, you define and train a model with this private training data. Then, you get in touch with a client who holds some data of their own and who would like to access your model to make some predictions.
You encrypt your model (a neural network). The client encrypts their data. You then both use these two encrypted assets to classify the data with the model. Finally, the result of the prediction is sent back to the client in an encrypted way, so that the server (i.e. you) learns nothing about the client's data (neither the inputs nor the prediction).
Ideally, we would split the shares of the input between the client and the server, and do the same for the model. For the sake of simplicity, the shares will be held by two other workers, alice and bob. If you consider that alice is owned by the client and bob by the server, it is completely equivalent.
The computation is secure in the honest-but-curious adversary model (i.e. adversaries who follow the protocol but try to learn any private information they can), which is the standard in [many MPC frameworks](https://arxiv.org/pdf/1801.03239.pdf).
**We are all set, let's get started!**
Authors:
- Théo Ryffel - Twitter: [@theoryffel](https://twitter.com/theoryffel) · GitHub: [@LaRiffle](https://github.com/LaRiffle)
Chinese translation by:
- Hou Wei - github:[@dljgs1](https://github.com/dljgs1)
**Let's get started!**
### Imports and model specifications
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
```
We also need to execute the commands specific to importing/starting PySyft. We create a few workers (named `client`, `bob` and `alice`). Lastly, we define the `crypto_provider`, which supplies all the crypto primitives we may need ([see our tutorial on SMPC for more details](https://github.com/OpenMined/PySyft/blob/master/examples/tutorials/Part%2009%20-%20Intro%20to%20Encrypted%20Programs.ipynb)).
```
import syft as sy
hook = sy.TorchHook(torch)
client = sy.VirtualWorker(hook, id="client")
bob = sy.VirtualWorker(hook, id="bob")
alice = sy.VirtualWorker(hook, id="alice")
crypto_provider = sy.VirtualWorker(hook, id="crypto_provider")
```
We define the settings of the learning task.
```
class Arguments():
def __init__(self):
self.batch_size = 64
self.test_batch_size = 50
self.epochs = epochs
self.lr = 0.001
self.log_interval = 100
args = Arguments()
```
### Data loading and sending to workers
In our setting, we assume that the server has access to some data to first train its model. Here is the MNIST training set.
```
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('../data', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=args.batch_size, shuffle=True)
```
Second, the client has some data and would like to have predictions on it using the server's model. This client encrypts its data by sharing it additively across the two workers alice and bob.
> SMPC uses crypto protocols which require working on integers. Here we leverage the PySyft tensor abstraction to convert PyTorch float tensors into fixed-precision tensors using `.fix_precision()`. For example, 0.123 with precision 2 is rounded at the 2nd decimal digit, so the number stored is the integer 12.
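As a rough plain-Python illustration of this fixed-precision encoding (a sketch, not PySyft's internal implementation):
```
PRECISION = 2          # number of decimal digits to keep
SCALE = 10 ** PRECISION

def encode(x):
    # 0.123 -> 12 when PRECISION == 2
    return int(round(x * SCALE))

def decode(n):
    return n / SCALE

print(encode(0.123))           # 12
print(decode(encode(0.123)))   # 0.12
```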
```
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('../data', train=False,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=args.test_batch_size, shuffle=True)
private_test_loader = []
for data, target in test_loader:
private_test_loader.append((
data.fix_precision().share(alice, bob, crypto_provider=crypto_provider),
target.fix_precision().share(alice, bob, crypto_provider=crypto_provider)
))
```
### Feed Forward Neural Network specification
Here is the network specification used by the server.
```
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(784, 500)
self.fc2 = nn.Linear(500, 10)
def forward(self, x):
x = x.view(-1, 784)
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
return x
```
### Launch the training
The training is done locally, so this is pure local PyTorch training; nothing special here!
```
def train(args, model, train_loader, optimizer, epoch):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
optimizer.zero_grad()
output = model(data)
output = F.log_softmax(output, dim=1)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
if batch_idx % args.log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * args.batch_size, len(train_loader) * args.batch_size,
100. * batch_idx / len(train_loader), loss.item()))
model = Net()
optimizer = torch.optim.Adam(model.parameters(), lr=args.lr)
for epoch in range(1, args.epochs + 1):
train(args, model, train_loader, optimizer, epoch)
def test(args, model, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
output = model(data)
output = F.log_softmax(output, dim=1)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
test(args, model, test_loader)
```
Our model is now trained and ready to be provided as a service!
## Secure evaluation
Now, as the server, we send the model to the workers holding the data. Because the model is sensitive information (you've spent time optimizing it!), you don't want to disclose its weights, so you secret-share the model just like we did with the dataset earlier.
```
model.fix_precision().share(alice, bob, crypto_provider=crypto_provider)
```
This test function performs the encrypted evaluation. The model weights, the data inputs, the predictions and the targets used for scoring are all encrypted!
However, the syntax is very similar to pure PyTorch testing of a model, isn't that nice?
The only thing we decrypt from the server side is the final score at the end, to verify that the predictions were on average good.
```
def test(args, model, test_loader):
model.eval()
n_correct_priv = 0
n_total = 0
with torch.no_grad():
for data, target in test_loader[:n_test_batches]:
output = model(data)
pred = output.argmax(dim=1)
n_correct_priv += pred.eq(target.view_as(pred)).sum()
n_total += args.test_batch_size
# This test function performs the encrypted evaluation. The model weights, data inputs, predictions and scoring targets are all encrypted!
# However, as you can see, the syntax is very similar to normal PyTorch testing! Nice!
# The only thing we decrypt from the server side is the final score at the end of the 200 data batches, to verify predictions were on average good.
n_correct = n_correct_priv.copy().get().float_precision().long().item()
print('Test set: Accuracy: {}/{} ({:.0f}%)'.format(
n_correct, n_total,
100. * n_correct / n_total))
test(args, model, private_test_loader)
```
Et voilà! Here you are: you have learned how to do end-to-end secure predictions. The weights of the server's model have not leaked to the client, and the server has no information about the data input nor the classification output!
Regarding performance, classifying one image takes **less than 0.1 second, approximately 33 ms** on my laptop (2.7 GHz Intel Core i7, 16 GB RAM). However, this uses very fast communication (all the workers are on my local machine). Performance will vary depending on how fast the different workers can talk to each other.
## Conclusion
You have seen how easy it is to leverage PyTorch and PySyft to perform practical secure machine learning and protect users' data, without having to be a crypto expert!
More on this topic is coming soon, including convolutional layers to properly benchmark PySyft performance with respect to other libraries, as well as private encrypted training of neural networks, which is needed when an organization relies on external sensitive data to train its own model. Stay tuned!
If you enjoyed this and would like to join the movement toward privacy preservation, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!
# Congratulations!!! Time to Join the Community!
Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy preservation, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!
### Star PySyft on GitHub
The easiest way to help our community is just by starring the GitHub repos! This helps raise awareness of the cool tools we're building.
- [Star PySyft](https://github.com/OpenMined/PySyft)
### Pick our tutorials on GitHub!
We made really nice tutorials to get a better understanding of what Federated and Privacy-Preserving Learning should look like and how we are building the bricks for this to happen.
- [Checkout the PySyft tutorials](https://github.com/OpenMined/PySyft/tree/master/examples/tutorials)
### Join our Slack!
The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org).
### Join a Code Project!
The best way to contribute to our community is to become a code contributor! At any time you can go to the PySyft GitHub Issues page and filter for "Projects". This will show you all the top-level tickets giving an overview of what projects you can join! If you don't want to join a project but would like to do a bit of coding, you can also look for more "one off" mini-projects by searching for GitHub issues marked "good first issue".
- [PySyft Projects](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3AProject)
- [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
### Donate
If you don't have time to contribute to our codebase but would still like to lend support, you can also become a backer on Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!
[OpenMined's Open Collective Page](https://opencollective.com/openmined)
25/05/2020 - DataScience - SoloLearn
https://www.sololearn.com/Play/data-science/
# Data Manipulation
Heights (cm) of the 45 U.S. presidents and their ages when they started their presidency, listed in chronological order.
```
import numpy as np
heights = [189, 170, 189, 163, 183, 171, 185, 168, 173, 183, 173, 173, 175, 178, 183, 193, 178, 173, 174, 183, 183, 180, 168, 180, 170, 178, 182, 180, 183, 178, 182, 188, 175, 179, 183, 193, 182, 183, 177, 185, 188, 188, 182, 185, 191]
ages = [57, 61, 57, 57, 58, 57, 61, 54, 68, 51, 49, 64, 50, 48, 65, 52, 56, 46, 54, 49, 51, 47, 55, 55, 54, 42, 51, 56, 55, 51, 54, 51, 60, 62, 43, 55, 56, 61, 52, 69, 64, 46, 54, 47, 70]
heights_arr = np.array(heights)
heights_and_ages = heights + ages
# convert a list to a numpy array
height_age_arr = np.array(heights_and_ages)
height_age_arr = height_age_arr.reshape((2,45))
height_age_arr = height_age_arr.transpose()
height_age_arr
```
## Numpy (**Num**erical **P**ython)
We recorded the heights of the 45 U.S. presidents in centimeters, in chronological order, and stored them in a list, a built-in data type in Python.
```
heights = [189, 170, 189, 163, 183, 171, 185, 168, 173, 183, 173, 173, 175, 178, 183, 193, 178, 173, 174, 183, 183, 180, 168, 180, 170, 178, 182, 180, 183, 178, 182, 188, 175, 179, 183, 193, 182, 183, 177, 185, 188, 188, 182, 185, 191]
```
In this example, George Washington was the first president, and his height was 189 cm.
If we wanted to know how many presidents are taller than 188cm, we could iterate through the list, compare each element against 188, and increase the count by 1 as the criteria is met.
```
cnt = 0
for height in heights:
if height > 188:
cnt +=1
print(cnt)
```
This shows that there are five presidents who are taller than 188 cm.
### Same problem with Numpy
```
import numpy as np
heights_arr = np.array(heights)
print((heights_arr > 188).sum())
```
The import statement allows us to access the functions and modules inside the numpy library. The library will be used frequently, so by convention numpy is imported under a shorter name, np. The second line is to convert the list into a numpy array object, via np.array(), that tools provided in numpy can work with. The last line provides a simple and natural solution, enabled by numpy, to the original question.
As our datasets grow larger and more complicated, numpy allows us the use of a more efficient and for-loop-free method to manipulate and analyze our data. Our dataset example in this module will include the US Presidents' height, age and party.
Fill in the blanks to create a numpy array 'arr' from the list 'lst' given below:
```
import numpy as np
lst = [1,0,1,0]
arr = ________(lst)
```
```
import numpy as np
lst = [1,0,1,0]
arr = np.array(lst)
```
### Size (~len: length) and Shape (dimension)
An array class in Numpy is called an ndarray, or n-dimensional array. To count the number of presidents in heights_arr, use the attribute numpy.ndarray.size:
```
heights_arr.size
```
Note that once an array is created in numpy, its size cannot be changed.
Size tells us how big the array is, shape tells us the dimension. To get current shape of an array use attribute shape:
```
heights_arr.shape
```
The output is a tuple (recall that the built-in data type tuple is immutable, whereas a list is mutable) containing a single value, indicating that there is only one dimension, i.e., axis 0. Along axis 0 there are 45 elements (one for each president). Here, heights_arr is a 1darray.
Attribute size in numpy is similar to the built-in method len in python that is used to compute the length of iterable python objects like str, list, dict, etc.
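For comparison, a quick sketch contrasting the two (assuming the heights list and heights_arr from above):
```
print(len(heights))      # 45, length of the built-in Python list
print(heights_arr.size)  # 45, total number of elements in the numpy array
```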
### Reshape
```
ages = [57, 61, 57, 57, 58, 57, 61, 54, 68, 51, 49, 64, 50, 48, 65, 52, 56, 46, 54, 49, 51, 47, 55, 55, 54, 42, 51, 56, 55, 51, 54, 51, 60, 62, 43, 55, 56, 61, 52, 69, 64, 46, 54, 47, 70]
```
Since both heights and ages are all about the same presidents, we can combine them:
```
heights_and_ages = heights + ages
# convert a list to a numpy array
heights_and_ages_arr = np.array(heights_and_ages)
heights_and_ages_arr.shape
```
This produces one long array. It would be clearer if we could align height and age for each president and reorganize the data into a 2 by 45 matrix where the first row contains all heights and the second row contains ages. To achieve this, a new array can be created by calling numpy.ndarray.reshape with new dimensions specified in a tuple:
```
print(heights_and_ages_arr.reshape((2,45)))
heights_and_ages_arr = heights_and_ages_arr.reshape((2,45))
```
The reshaped array is now a 2darray, yet note that the original array is not changed. We can reshape an array in multiple ways, as long as the size of the reshaped array matches that of the original.
Numpy can calculate the shape (dimension) for us if we indicate the unknown dimension as -1. For example, given a 2darray `arr` of shape (3,4), arr.reshape(-1) would output a 1darray of shape (12,), while arr.reshape((-1,2)) would generate a 2darray of shape (6,2).
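Here is a quick sketch of the -1 shorthand on a small throwaway array (not the presidents data):
```
arr = np.arange(12).reshape((3, 4))  # a 2darray of shape (3, 4)
print(arr.reshape(-1).shape)         # (12,)  -- flattened to a 1darray
print(arr.reshape((-1, 2)).shape)    # (6, 2) -- numpy infers the 6
```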
Review the code below and reshape the 1darray of shape (45,) to a 2darray with a shape of (5, 9).
```
heights_arr.______
>>> (45, )
heights_arr_reshaped =
heights_arr._______((5, 9))
```
```
heights_arr.shape
heights_arr_reshaped = heights_arr.reshape((5, 9))
```
### Data Type
Another characteristic about numpy array is that it is homogeneous, meaning each element must be of the same data type.
For example, in heights_arr, we recorded all heights in whole numbers; thus each element is stored as an integer in the array. To check the data type, use numpy.ndarray.dtype
```
heights_arr.dtype
```
If we mixed a float number in, say, the first element is 189.0 instead of 189:
```
heights_float = [189.0, 170, 189, 163, 183, 171, 185, 168, 173, 183, 173, 173, 175, 178, 183, 193, 178, 173, 174, 183, 183, 180, 168, 180, 170, 178, 182, 180, 183, 178, 182, 188, 175, 179, 183, 193, 182, 183, 177, 185, 188, 188, 182, 185, 191]
```
Then after converting the list into an array, we’d see all other numbers are coerced into floats:
```
heights_float_arr = np.array(heights_float)
heights_float_arr
heights_float_arr.dtype
```
Numpy supports several data types such as int (integer), float (numeric floating point), and bool (boolean values, True and False). The number after the data type, ex. int64, represents the bitsize of the data type.
What is the data type of heights_and_ages_arr?
Recall:
```
heights = [189, 170, 189, 163, ... 182, 185, 191]
ages = [57, 61, 57, ... 62, 43, 55, 56]
```
float, bool, int ?
int
### Indexing
We can use array indexing to select individual elements from arrays. Like Python lists, numpy index starts from 0.
To access the height of the 3rd president Thomas Jefferson in the 1darray 'heights_arr':
```
heights_arr[2]
```
In a 2darray, there are two axes, axis 0 and axis 1. Axis 0 runs vertically downward across the rows, whereas axis 1 runs horizontally across the columns.
In the 2darray heights_and_ages_arr, recall that its dimensions are (2, 45). To find Thomas Jefferson’s age at the beginning of his presidency you would need to access the second row, where the ages are stored:
```
heights_and_ages_arr[1,2]
```
In 2darray, the row is axis 0 and the column is axis 1, therefore, to access a 2darray, numpy first looks for the position in rows, then in columns. So in our example heights_and_ages_arr[1,2], we are accessing row 2 (ages), column 3 (third president) to find Thomas Jefferson’s age.
Which of the following would correctly select row 5 and column 1 in the 2darray 'arr'?
```
arr[1,5]
arr[5,1]
arr[0,4]
arr[4,0]
```
arr[4,0]
### Slicing
What if we want to inspect the first three elements from the first row in a 2darray? We use ":" to select all the elements from the starting index up to, but not including, the ending index. This is called slicing.
```
heights_and_ages_arr[0, 0:3]
```
When the starting index is 0, we can omit it as shown below:
```
heights_and_ages_arr[0, :3]
```
What if we’d like to see an entire column, say the one at column index 3? Specify this by using a ":" for the row position as follows:
```
heights_and_ages_arr[:, 3]
```
Numpy slicing syntax follows that of a python list: arr[start:stop:step]. When any of these are unspecified, they default to the values start=0, stop=size of dimension, step=1
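A small sketch of the start:stop:step form on the heights row (run before the reassignments made later in this lesson):
```
row = heights_and_ages_arr[0]  # the first row, containing the heights
print(row[0:10:2])             # every second element among the first ten
print(row[::5])                # every fifth element; start and stop use their defaults
```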
Which of the following is the correct syntax to select the second column of the 2darray heights_and_ages_arr of shape (2, 45)?
```
heights_and_ages_arr[:, 1]
heights_and_ages_arr[, 1]
heights_and_ages_arr[1, :]
```
```
heights_and_ages_arr[:, 1]
```
### Assigning values
Sometimes you need to change the values of particular elements in the array. For example, we noticed the fourth entry in the heights_arr was incorrect, it should be 165 instead of 163, we can re-assign the correct number by:
```
heights_arr[3] = 165
```
In a 2darray, single values can be assigned easily using indexing for one element. For example, to change the fourth entry in the first row of heights_and_ages_arr to 165:
```
heights_and_ages_arr[0, 3] = 165
heights_and_ages_arr
```
Or we can use slicing for multiple elements. For example, to replace the first row by its mean 180 in heights_and_ages_arr:
```
heights_and_ages_arr[0,:] = 180
heights_and_ages_arr
```
We can also combine slicing to change any subset of the array. For example, to reassign 0 to the left upper corner:
```
heights_and_ages_arr[:2, :2] = 0
heights_and_ages_arr
```
It is easy to update values in a subarray when you combine arrays with slicing. For more on basic slicing and advanced indexing in numpy check out this [link](https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html).
Replace the value in the second row and third column of the array heights_and_ages_arr with 2.
```
heights_and_ages_arr[__,__] = 2
```
```
heights_and_ages_arr[1,2] = 2
heights_and_ages_arr
```
### Assigning an array to an array
In addition, a 1darray or a 2darry can be assigned to a subset of another 2darray, as long as their shapes match. Recall the 2darray heights_and_ages_arr:
```
heights_and_ages_arr
```
If we want to update both height and age of the first president with new data, we can supply the data in a list:
```
heights_and_ages_arr[:, 0] = [190, 58]
heights_and_ages_arr
```
We can also update data in a subarray with a numpy array, as follows:
```
new_record = np.array([[180, 183, 190], [54, 50, 69]])
heights_and_ages_arr[:, 42:] = new_record
heights_and_ages_arr
```
Note the last three columns' values have changed.
Updating a multidimensional array with a new record is straightforward in numpy as long as their shapes match.
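If the shapes do not match, numpy raises an error instead of guessing; for example (illustrative only):
```
try:
    heights_and_ages_arr[:2, :3] = np.array([[1, 2], [3, 4]])  # a (2, 2) block into a (2, 3) slice
except ValueError as err:
    print(err)  # e.g. could not broadcast input array from shape (2,2) into shape (2,3)
```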
Drag and drop to update both heights and ages for the first five presidents in heights_and_ages_arr with a numpy array new_record:
```
new_record = np.__([[188, 190, 189, 165, 180],
[58, 62, 55, 68, 80]])
new_record.shape
>>> __
heights_and_ages_arr[:2,__] = new_record
ndarray
(2, 5)
(10, 1)
:5
:4
array
list
```
```
new_record = np.array([[188, 190, 189, 165, 180],
[58, 62, 55, 68, 80]])
new_record.shape
heights_and_ages_arr[:2,:5] = new_record
heights_and_ages_arr
```
### Combining two arrays
Oftentimes we obtain data stored in different arrays and we need to combine them into one to keep everything in one place. For example, instead of keeping the heights and ages in separate lists, we can store them as numpy arrays:
```
heights_arr = np.array(heights)
heights_arr.shape
ages_arr = np.array(ages)
ages_arr.shape
ages_arr[:3,]
```
If we reshape both heights_arr and ages_arr to (45, 1), we can stack them horizontally (column by column) into a 2darray using 'hstack':
```
heights_arr = heights_arr.reshape((45,1))
ages_arr = ages_arr.reshape((45,1))
height_age_arr = np.hstack((heights_arr, ages_arr))
height_age_arr.shape
height_age_arr[:3,]
```
Now height_age_arr has both heights and ages for the presidents: each row corresponds to one president, with the height in the first column and the age in the second.
Similarly, if we want to combine the arrays vertically (by row), we can use 'vstack'.
```
heights_arr = heights_arr.reshape((1,45))
ages_arr = ages_arr.reshape((1,45))
height_age_arr = np.vstack((heights_arr, ages_arr))
height_age_arr.shape
height_age_arr[:,:3]
```
To combine more than two arrays horizontally, simply add the additional arrays into the tuple.
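For example, with small illustrative arrays, three compatible arrays can be stacked side by side in one call:
```
a = np.array([[1], [2]])
b = np.array([[3], [4]])
c = np.array([[5], [6]])
np.hstack((a, b, c))  # shape (2, 3)
```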
Fill in the blank: To combine arr1 of shape (10, 2) and arr2 of shape (5, 2) into a new array arr3 of shape (15, 2):
```
arr3 = np.___((arr1, arr2))
```
```
arr3 = np.vstack((arr1, arr2))
```
Visualization 😁
hstack:
[ [ 🍔 🍕 ] [ 🍔 🍕 ] [ 🍔 🍕 ] ]
vstack:
[ [ 🍔 🍔 🍔 ]
[🍕 🍕 🍕 ] ]
### Concatenate
More generally, we can use the function numpy.concatenate. To concatenate (link together) arrays by stacking them on top of each other, pass 'axis=0', which gives the same result as numpy.vstack; pass 'axis=1' to combine arrays side by side, the same as numpy.hstack.
In the example from the previous part we used vstack on the two (1, 45) arrays; np.concatenate with axis=0 produces the same result:
```
height_age_arr = np.concatenate((heights_arr, ages_arr), axis=0)
```
You can use np.hstack to concatenate arrays ONLY if they have the same number of rows.
Fill in the blanks: To concatenate arr1 of shape (5, 2) and arr2 of shape (5, 1) horizontally:
```
np.concatenate((arr1, arr2), axis= __)
```
```
arr1 = np.random.rand(5,2)
arr2 = np.random.rand(5,1)
np.concatenate((arr1, arr2), axis= 1)
```
### Operations
Performing mathematical operations on arrays is straightforward. For instance, knowing that 1 centimeter equals 0.0328084 feet, we can convert the heights from centimeters to feet with a single multiplication:
```
height_age_arr[:,0]*0.0328084
```
Now we have all heights in feet. Note that this operation does not change the original array; it returns a new 1darray in which each element of the first column of 'height_age_arr' (the (45, 2) array built with hstack, one president per row) has been multiplied by 0.0328084. The other arithmetic operations for addition, subtraction, division and power (+, -, /, **) work element-wise on arrays in the same way.
### Methods
In addition, numpy provides several methods to perform more complex calculations on arrays. For example, the sum() method finds the sum of all the elements in an array:
```
height_age_arr.sum()
```
The sum of all heights and ages is 10575. In order to sum all heights and sum all ages separately, we can specify axis=0 to calculate the sum across the rows, that is, it computes the sum for each column, or column sum. On the other hand, to obtain the row sums specify axis=1. In this example, we want to calculate the total sum of heights and ages, respectively:
```
height_age_arr.sum(axis=0)
```
The output is the pair of column sums: the heights of all presidents (i.e., the first column) add up to 8100, and the ages (i.e., the second column) add up to 2475. Other methods, such as .min(), .max() and .mean(), work in a similar way to .sum().
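For example, the mean along axis 0 gives the average height and average age across all presidents:
```
height_age_arr.mean(axis=0)
```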
### Comparisons
In practicing data science, we often encounter comparisons to identify rows that match certain values. We can use operations including "<", ">", ">=", "<=", and "==" to do so. For example, in the height_age_arr dataset, we might be interested in only those presidents who started their presidency younger than 55 years old.
```
height_age_arr[:, 1] < 55
```
The output is a 1darray with boolean values that indicates which presidents meet the criteria. If we are only interested in which presidents started their presidency at 51 years of age, we can use "==" instead.
```
height_age_arr[:, 1] == 51
```
To find out how many rows satisfy a condition, call .sum() on the resulting 1darray of booleans; True is treated as 1 and False as 0 in the sum. For example, (height_age_arr[:, 1] == 51).sum() shows that exactly five presidents started their presidency at age 51.
Complete the code to count exactly how many presidents had a height of 170cm at the start of their presidency:
```
(height_age_arr[:, 0] == 170).sum()
```
### Mask and Subsetting
Now that rows matching certain criteria can be identified, we can take a subset of the data. For example, instead of the entire dataset, we want only the tall presidents, that is, those whose height is greater than or equal to 182 cm. We first create a mask, a 1darray with boolean values:
```
mask = height_age_arr[:, 0] >= 182
mask.sum()
```
Masking is used to extract, modify, count, or otherwise manipulate values in an array based on some criterion; here, the criterion is a height of 182cm or taller. Pass the mask to the first axis of `height_age_arr` to filter out the presidents who don’t meet it:
```
tall_presidents = height_age_arr[mask, ]
tall_presidents.shape
```
This is a subarray of height_age_arr, and all presidents in tall_presidents were at least 182cm tall.
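A mask can also be combined with ordinary column indexing; for example, to get just the heights of those tall presidents:
```
height_age_arr[mask, 0]
```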
Fill in the blanks to obtain a subset of presidents who started presidency under age 50.
```
mask = height_age_arr[:,1] __ 50
young_president = height_age_arr[__,]
```
```
mask = height_age_arr[:,1] < 50
young_president = height_age_arr[mask,]
young_president
```
### Multiple Criteria
We can create a mask that satisfies more than one criterion. For example, in addition to height, we want to find those presidents who were 50 years old or younger at the start of their presidency. To achieve this, we use & between the conditions and wrap each condition in parentheses "()" as shown below:
```
mask = (height_age_arr[:, 0]>=182) & (height_age_arr[:,1]<=50)
height_age_arr[mask,]
```
The results show us that there are four presidents who satisfy both conditions.
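Conditions can also be combined with "|" for a logical OR; for example (thresholds chosen just for illustration), presidents who were either at least 188cm tall or at least 65 years old at the start of their presidency:
```
mask = (height_age_arr[:, 0] >= 188) | (height_age_arr[:, 1] >= 65)
height_age_arr[mask,]
```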
Type in the code to create a mask that identifies the rows where the presidents are shorter than 180cm and started their presidency younger than 60 years old.
```
__ = (height_age_arr[:,0]<180) & (height_age_arr[:, 1]< __)
```
```
mask = (height_age_arr[:,0]<180) & (height_age_arr[:, 1]< 60)
height_age_arr[mask,]
```
### Quiz
Fill in the blanks to create a numpy array from the list given:
```
import numpy as __
lst = [1,-1,1,-1]
arr = np.__(lst)
list array arr np
```
```
import numpy as np
lst = [1,-1,1,-1]
arr = np.array(lst)
arr
```
What is the correct shape of arr = np.array([1,2,3])?
```
(3, )
(3, 1)
(1, 3)
```
```
arr = np.array([1,2,3])
arr.shape
```
To compute the column sums of the 2darray `arr`, which sum() method would we use?
```
arr.sum(axis = 0)
arr.sum()
arr.sum(axis = 1)
```
```
arr.sum(axis = 0)
```
Fill in the blanks to find the column minimums.
```
import numpy as np
arr = np.array([[ 1, 2, 3], [2, 4, 6]])
arr.__(__=0)
```
```
arr = np.array([[ 1, 2, 3], [2, 4, 6]])
arr.min(axis=0)
```
Which of the following commands multiplies the first row of a 2darray arr by 3?
```
arr[:,0]*3
arr[0,:]*3
arr[0] * 3
```
```
arr[0,:]*3
```
# Data Analysis
# SR-GNN Session-based Recommendation on Sample dataset
## Setup
```
import torch
from torch import nn
from torch.nn import Module, Parameter
import torch.nn.functional as F
import pickle
import time
import numpy as np
import datetime
import math
import csv
import pickle
import operator
import os
import warnings
warnings.filterwarnings('ignore')
```
## Dataset
```
!wget -q --show-progress https://github.com/sparsh-ai/stanza/raw/S969796/datasets/sample_train-item-views.csv
!head sample_train-item-views.csv
print("-- Starting @ %ss" % datetime.datetime.now())
dataset = 'sample_train-item-views.csv'
with open(dataset, "r") as f:
reader = csv.DictReader(f, delimiter=';')
sess_clicks = {}
sess_date = {}
ctr = 0
curid = -1
curdate = None
for data in reader:
sessid = data['session_id']
if curdate and not curid == sessid:
date = ''
date = time.mktime(time.strptime(curdate, '%Y-%m-%d'))
sess_date[curid] = date
curid = sessid
item = data['item_id'], int(data['timeframe'])
curdate = ''
curdate = data['eventdate']
if sessid in sess_clicks:
sess_clicks[sessid] += [item]
else:
sess_clicks[sessid] = [item]
ctr += 1
date = ''
date = time.mktime(time.strptime(curdate, '%Y-%m-%d'))
for i in list(sess_clicks):
sorted_clicks = sorted(sess_clicks[i], key=operator.itemgetter(1))
sess_clicks[i] = [c[0] for c in sorted_clicks]
sess_date[curid] = date
print("-- Reading data @ %ss" % datetime.datetime.now())
# Filter out length 1 sessions
for s in list(sess_clicks):
if len(sess_clicks[s]) == 1:
del sess_clicks[s]
del sess_date[s]
# Count number of times each item appears
iid_counts = {}
for s in sess_clicks:
seq = sess_clicks[s]
for iid in seq:
if iid in iid_counts:
iid_counts[iid] += 1
else:
iid_counts[iid] = 1
sorted_counts = sorted(iid_counts.items(), key=operator.itemgetter(1))
length = len(sess_clicks)
for s in list(sess_clicks):
curseq = sess_clicks[s]
filseq = list(filter(lambda i: iid_counts[i] >= 5, curseq))
if len(filseq) < 2:
del sess_clicks[s]
del sess_date[s]
else:
sess_clicks[s] = filseq
# Split out test set based on dates
dates = list(sess_date.items())
maxdate = dates[0][1]
for _, date in dates:
if maxdate < date:
maxdate = date
# 7 days for test
splitdate = 0
splitdate = maxdate - 86400 * 7
print('Splitting date', splitdate) # Yoochoose: ('Split date', 1411930799.0)
tra_sess = filter(lambda x: x[1] < splitdate, dates)
tes_sess = filter(lambda x: x[1] > splitdate, dates)
# Sort sessions by date
tra_sess = sorted(tra_sess, key=operator.itemgetter(1)) # [(session_id, timestamp), (), ]
tes_sess = sorted(tes_sess, key=operator.itemgetter(1)) # [(session_id, timestamp), (), ]
print(len(tra_sess)) # 186670 # 7966257
print(len(tes_sess)) # 15979 # 15324
print(tra_sess[:3])
print(tes_sess[:3])
print("-- Splitting train set and test set @ %ss" % datetime.datetime.now())
# Choosing item count >=5 gives approximately the same number of items as reported in paper
item_dict = {}
# Convert training sessions to sequences and renumber items to start from 1
def obtian_tra():
train_ids = []
train_seqs = []
train_dates = []
item_ctr = 1
for s, date in tra_sess:
seq = sess_clicks[s]
outseq = []
for i in seq:
if i in item_dict:
outseq += [item_dict[i]]
else:
outseq += [item_ctr]
item_dict[i] = item_ctr
item_ctr += 1
if len(outseq) < 2: # Doesn't occur
continue
train_ids += [s]
train_dates += [date]
train_seqs += [outseq]
print(item_ctr) # 43098, 37484
return train_ids, train_dates, train_seqs
# Convert test sessions to sequences, ignoring items that do not appear in training set
def obtian_tes():
test_ids = []
test_seqs = []
test_dates = []
for s, date in tes_sess:
seq = sess_clicks[s]
outseq = []
for i in seq:
if i in item_dict:
outseq += [item_dict[i]]
if len(outseq) < 2:
continue
test_ids += [s]
test_dates += [date]
test_seqs += [outseq]
return test_ids, test_dates, test_seqs
tra_ids, tra_dates, tra_seqs = obtian_tra()
tes_ids, tes_dates, tes_seqs = obtian_tes()
def process_seqs(iseqs, idates):
out_seqs = []
out_dates = []
labs = []
ids = []
for id, seq, date in zip(range(len(iseqs)), iseqs, idates):
for i in range(1, len(seq)):
tar = seq[-i]
labs += [tar]
out_seqs += [seq[:-i]]
out_dates += [date]
ids += [id]
return out_seqs, out_dates, labs, ids
tr_seqs, tr_dates, tr_labs, tr_ids = process_seqs(tra_seqs, tra_dates)
te_seqs, te_dates, te_labs, te_ids = process_seqs(tes_seqs, tes_dates)
tra = (tr_seqs, tr_labs)
tes = (te_seqs, te_labs)
print(len(tr_seqs))
print(len(te_seqs))
print(tr_seqs[:3], tr_dates[:3], tr_labs[:3])
print(te_seqs[:3], te_dates[:3], te_labs[:3])
all = 0
for seq in tra_seqs:
all += len(seq)
for seq in tes_seqs:
all += len(seq)
print('avg length: ', all/(len(tra_seqs) + len(tes_seqs) * 1.0))
pickle.dump(tra, open('train.txt', 'wb'))
pickle.dump(tes, open('test.txt', 'wb'))
pickle.dump(tra_seqs, open('all_train_seq.txt', 'wb'))
print('Done.')
def data_masks(all_usr_pois, item_tail):
us_lens = [len(upois) for upois in all_usr_pois]
len_max = max(us_lens)
us_pois = [upois + item_tail * (len_max - le) for upois, le in zip(all_usr_pois, us_lens)]
us_msks = [[1] * le + [0] * (len_max - le) for le in us_lens]
return us_pois, us_msks, len_max
def split_validation(train_set, valid_portion):
train_set_x, train_set_y = train_set
n_samples = len(train_set_x)
sidx = np.arange(n_samples, dtype='int32')
np.random.shuffle(sidx)
n_train = int(np.round(n_samples * (1. - valid_portion)))
valid_set_x = [train_set_x[s] for s in sidx[n_train:]]
valid_set_y = [train_set_y[s] for s in sidx[n_train:]]
train_set_x = [train_set_x[s] for s in sidx[:n_train]]
train_set_y = [train_set_y[s] for s in sidx[:n_train]]
return (train_set_x, train_set_y), (valid_set_x, valid_set_y)
class Data():
def __init__(self, data, shuffle=False, graph=None):
inputs = data[0]
inputs, mask, len_max = data_masks(inputs, [0])
self.inputs = np.asarray(inputs)
self.mask = np.asarray(mask)
self.len_max = len_max
self.targets = np.asarray(data[1])
self.length = len(inputs)
self.shuffle = shuffle
self.graph = graph
def generate_batch(self, batch_size):
if self.shuffle:
shuffled_arg = np.arange(self.length)
np.random.shuffle(shuffled_arg)
self.inputs = self.inputs[shuffled_arg]
self.mask = self.mask[shuffled_arg]
self.targets = self.targets[shuffled_arg]
n_batch = int(self.length / batch_size)
if self.length % batch_size != 0:
n_batch += 1
slices = np.split(np.arange(n_batch * batch_size), n_batch)
slices[-1] = slices[-1][:(self.length - batch_size * (n_batch - 1))]
return slices
def get_slice(self, i):
inputs, mask, targets = self.inputs[i], self.mask[i], self.targets[i]
items, n_node, A, alias_inputs = [], [], [], []
for u_input in inputs:
n_node.append(len(np.unique(u_input)))
max_n_node = np.max(n_node)
for u_input in inputs:
node = np.unique(u_input)
items.append(node.tolist() + (max_n_node - len(node)) * [0])
u_A = np.zeros((max_n_node, max_n_node))
for i in np.arange(len(u_input) - 1):
if u_input[i + 1] == 0:
break
u = np.where(node == u_input[i])[0][0]
v = np.where(node == u_input[i + 1])[0][0]
u_A[u][v] = 1
u_sum_in = np.sum(u_A, 0)
u_sum_in[np.where(u_sum_in == 0)] = 1
u_A_in = np.divide(u_A, u_sum_in)
u_sum_out = np.sum(u_A, 1)
u_sum_out[np.where(u_sum_out == 0)] = 1
u_A_out = np.divide(u_A.transpose(), u_sum_out)
u_A = np.concatenate([u_A_in, u_A_out]).transpose()
A.append(u_A)
alias_inputs.append([np.where(node == i)[0][0] for i in u_input])
return alias_inputs, A, items, mask, targets
```
## Model
```
class GNN(Module):
def __init__(self, hidden_size, step=1):
super(GNN, self).__init__()
self.step = step
self.hidden_size = hidden_size
self.input_size = hidden_size * 2
self.gate_size = 3 * hidden_size
self.w_ih = Parameter(torch.Tensor(self.gate_size, self.input_size))
self.w_hh = Parameter(torch.Tensor(self.gate_size, self.hidden_size))
self.b_ih = Parameter(torch.Tensor(self.gate_size))
self.b_hh = Parameter(torch.Tensor(self.gate_size))
self.b_iah = Parameter(torch.Tensor(self.hidden_size))
self.b_oah = Parameter(torch.Tensor(self.hidden_size))
self.linear_edge_in = nn.Linear(self.hidden_size, self.hidden_size, bias=True)
self.linear_edge_out = nn.Linear(self.hidden_size, self.hidden_size, bias=True)
self.linear_edge_f = nn.Linear(self.hidden_size, self.hidden_size, bias=True)
def GNNCell(self, A, hidden):
input_in = torch.matmul(A[:, :, :A.shape[1]], self.linear_edge_in(hidden)) + self.b_iah
input_out = torch.matmul(A[:, :, A.shape[1]: 2 * A.shape[1]], self.linear_edge_out(hidden)) + self.b_oah
inputs = torch.cat([input_in, input_out], 2)
gi = F.linear(inputs, self.w_ih, self.b_ih)
gh = F.linear(hidden, self.w_hh, self.b_hh)
i_r, i_i, i_n = gi.chunk(3, 2)
h_r, h_i, h_n = gh.chunk(3, 2)
resetgate = torch.sigmoid(i_r + h_r)
inputgate = torch.sigmoid(i_i + h_i)
newgate = torch.tanh(i_n + resetgate * h_n)
hy = newgate + inputgate * (hidden - newgate)
return hy
def forward(self, A, hidden):
for i in range(self.step):
hidden = self.GNNCell(A, hidden)
return hidden
class SessionGraph(Module):
def __init__(self, opt, n_node):
super(SessionGraph, self).__init__()
self.hidden_size = opt.hiddenSize
self.n_node = n_node
self.batch_size = opt.batchSize
self.nonhybrid = opt.nonhybrid
self.embedding = nn.Embedding(self.n_node, self.hidden_size)
self.gnn = GNN(self.hidden_size, step=opt.step)
self.linear_one = nn.Linear(self.hidden_size, self.hidden_size, bias=True)
self.linear_two = nn.Linear(self.hidden_size, self.hidden_size, bias=True)
self.linear_three = nn.Linear(self.hidden_size, 1, bias=False)
self.linear_transform = nn.Linear(self.hidden_size * 2, self.hidden_size, bias=True)
self.loss_function = nn.CrossEntropyLoss()
self.optimizer = torch.optim.Adam(self.parameters(), lr=opt.lr, weight_decay=opt.l2)
self.scheduler = torch.optim.lr_scheduler.StepLR(self.optimizer, step_size=opt.lr_dc_step, gamma=opt.lr_dc)
self.reset_parameters()
def reset_parameters(self):
stdv = 1.0 / math.sqrt(self.hidden_size)
for weight in self.parameters():
weight.data.uniform_(-stdv, stdv)
def compute_scores(self, hidden, mask):
ht = hidden[torch.arange(mask.shape[0]).long(), torch.sum(mask, 1) - 1] # batch_size x latent_size
q1 = self.linear_one(ht).view(ht.shape[0], 1, ht.shape[1]) # batch_size x 1 x latent_size
q2 = self.linear_two(hidden) # batch_size x seq_length x latent_size
alpha = self.linear_three(torch.sigmoid(q1 + q2))
a = torch.sum(alpha * hidden * mask.view(mask.shape[0], -1, 1).float(), 1)
if not self.nonhybrid:
a = self.linear_transform(torch.cat([a, ht], 1))
b = self.embedding.weight[1:] # n_nodes x latent_size
scores = torch.matmul(a, b.transpose(1, 0))
return scores
def forward(self, inputs, A):
hidden = self.embedding(inputs)
hidden = self.gnn(A, hidden)
return hidden
def trans_to_cuda(variable):
if torch.cuda.is_available():
return variable.cuda()
else:
return variable
def trans_to_cpu(variable):
if torch.cuda.is_available():
return variable.cpu()
else:
return variable
def forward(model, i, data):
alias_inputs, A, items, mask, targets = data.get_slice(i)
alias_inputs = trans_to_cuda(torch.Tensor(alias_inputs).long())
items = trans_to_cuda(torch.Tensor(items).long())
A = trans_to_cuda(torch.Tensor(A).float())
mask = trans_to_cuda(torch.Tensor(mask).long())
hidden = model(items, A)
get = lambda i: hidden[i][alias_inputs[i]]
seq_hidden = torch.stack([get(i) for i in torch.arange(len(alias_inputs)).long()])
return targets, model.compute_scores(seq_hidden, mask)
def train_test(model, train_data, test_data):
model.scheduler.step()
print('start training: ', datetime.datetime.now())
model.train()
total_loss = 0.0
slices = train_data.generate_batch(model.batch_size)
for i, j in zip(slices, np.arange(len(slices))):
model.optimizer.zero_grad()
targets, scores = forward(model, i, train_data)
targets = trans_to_cuda(torch.Tensor(targets).long())
loss = model.loss_function(scores, targets - 1)
loss.backward()
model.optimizer.step()
total_loss += loss
if j % int(len(slices) / 5 + 1) == 0:
print('[%d/%d] Loss: %.4f' % (j, len(slices), loss.item()))
print('\tLoss:\t%.3f' % total_loss)
print('start predicting: ', datetime.datetime.now())
model.eval()
hit, mrr = [], []
slices = test_data.generate_batch(model.batch_size)
for i in slices:
targets, scores = forward(model, i, test_data)
sub_scores = scores.topk(20)[1]
sub_scores = trans_to_cpu(sub_scores).detach().numpy()
for score, target, mask in zip(sub_scores, targets, test_data.mask):
hit.append(np.isin(target - 1, score))
if len(np.where(score == target - 1)[0]) == 0:
mrr.append(0)
else:
mrr.append(1 / (np.where(score == target - 1)[0][0] + 1))
hit = np.mean(hit) * 100
mrr = np.mean(mrr) * 100
return hit, mrr
```
## Main
```
class Args():
dataset = 'sample'
batchSize = 100 # input batch size
hiddenSize = 100 # hidden state size
epoch = 30 # the number of epochs to train for
    lr = 0.001 # learning rate
    lr_dc = 0.1 # learning rate decay rate
    lr_dc_step = 3 # the number of steps after which the learning rate decays
    l2 = 1e-5 # l2 penalty
    step = 1 # gnn propagation steps
    patience = 10 # the number of epochs to wait before early stopping
nonhybrid = True # only use the global preference to predict
validation = True # validation
valid_portion = 0.1 # split the portion of training set as validation set
args = Args()
train_data = pickle.load(open('train.txt', 'rb'))
if args.validation:
train_data, valid_data = split_validation(train_data, args.valid_portion)
test_data = valid_data
else:
test_data = pickle.load(open('test.txt', 'rb'))
train_data = Data(train_data, shuffle=True)
test_data = Data(test_data, shuffle=False)
n_node = 310
model = trans_to_cuda(SessionGraph(args, n_node))
start = time.time()
best_result = [0, 0]
best_epoch = [0, 0]
bad_counter = 0
for epoch in range(args.epoch):
print('-------------------------------------------------------')
print('epoch: ', epoch)
hit, mrr = train_test(model, train_data, test_data)
flag = 0
if hit >= best_result[0]:
best_result[0] = hit
best_epoch[0] = epoch
flag = 1
if mrr >= best_result[1]:
best_result[1] = mrr
best_epoch[1] = epoch
flag = 1
print('Best Result:')
    print('\tRecall@20:\t%.4f\tMRR@20:\t%.4f\tEpoch:\t%d,\t%d'% (best_result[0], best_result[1], best_epoch[0], best_epoch[1]))
bad_counter += 1 - flag
if bad_counter >= args.patience:
break
print('-------------------------------------------------------')
end = time.time()
print("Run time: %f s" % (end - start))
```
---
```
!apt-get -qq install tree
!rm -r sample_data
!tree -h --du .
!pip install -q watermark
%reload_ext watermark
%watermark -a "Sparsh A." -m -iv -u -t -d
```
---
**END**
# Major Leagues
- EECS 731 Project 5
- Author: Lazarus
- ID : 3028051
## Problem Statement
### Shipping and delivering to a place near you
1. Set up a data science project structure in a new git repository in your GitHub account
2. Download the product demand data set from
https://www.kaggle.com/felixzhao/productdemandforecasting
3. Load the data set into panda data frames
4. Formulate one or two ideas on how feature engineering would help the data set to establish additional value using exploratory data analysis
5. Build one or more forecasting models to determine the demand for a particular product using the other columns as features
6. Document your process and results
7. Commit your notebook, source code, visualizations and other supporting files to the git repository in GitHub
## What we want to do?
- Build one or more forecasting models to determine the demand for a particular product using the other columns as features
## Step 1: Prepare Environment and import Data
- Let's import pandas, numpy and pyplot libraries
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
product_data= pd.read_csv("../data/Historical_Product_Demand.csv")
product_data.head(5)
```
## Step 2: Data analysis and Feature Engineering
- Let's look at how much data we are dealing with
```
product_data.shape
```
- Let's plot the number of records per warehouse
```
sns.countplot(x="Warehouse", data= product_data)
```
### Treating Missing values
```
print("::::::::before:::::::::::")
print(product_data.isnull().sum())
product_data=product_data.dropna()
print("::::::::after::::::::::::")
print(product_data.isnull().sum())
```
### Let's use the describe function to find the most frequent product in the dataset
```
product_data.describe()
```
- We see that Product_1359 is at the top, so let's use this particular product for our forecasting model
- Let's create a new dataset which has only the data related to Product_1359
```
df = product_data.loc[product_data['Product_Code'] == 'Product_1359']
df = df.drop(columns=['Product_Code', 'Warehouse', 'Product_Category'])
df['Date'] = pd.to_datetime(df['Date'])
df = df.sort_values(by=['Date'])
df = df.reset_index(drop=True)
df.head()
```
- As we can see, there are multiple samples with the same date, so let's combine them by summing their demand values
```
demands = []
for index, row in df.iterrows():
demands.append(int(row['Order_Demand'].strip('()')))
df['Demand'] = demands
df = df.groupby('Date').agg({'Demand': 'sum'})
df.reset_index(inplace=True)
df.head()
```
### Let's plot the demand of product_1359
```
f, ax = plt.subplots(figsize=(20,6))
sns.lineplot(x= "Date", y="Demand", data= df)
```
- Now let's split the Date attribute into day, month and year
```
df["Date"]=df['Date'].astype(str)
asl=df['Date'].str.split('-',n=-1,expand=True)
df['year']=asl[0]
df['month']=asl[1]
df['day']=asl[2]
# df=df.drop(['Date'],axis=1)
df['year']=df['year'].astype(int)
df['month']=df['month'].astype(int)
df['day']=df['day'].astype(int)
df=df.set_index('Date')
df.head(5)
```
### Now that we have our data, let's split it so that 20% is held out for testing (with shuffle=False to preserve the chronological order)
```
from sklearn.model_selection import train_test_split
x = df.drop(['Demand'], axis=1)
y = df["Demand"]
xtr, xval, ytr, yval = train_test_split(x, y, test_size=.2, shuffle=False)
```
## Forecasting Models
### Linear Regression model
```
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import LinearRegression
reg= LinearRegression().fit(xtr,ytr)
print("Training R2 score",reg.score(xtr,ytr))
print("Testing R2 score",reg.score(xval,yval))
df["reg"]=reg.predict(x)
df.head(5)
```
- We cannot tell much from this table alone; let's visualize the predictions to better understand how the Linear Regression model performed
```
plt.rcParams['figure.figsize'] = [20, 6]
plt.plot(xtr.index.values, ytr, color='blue', label='Training Data', alpha=0.5)
plt.plot(xval.index.values, yval, color='green', label='Testing Data', alpha=0.5)
plt.plot(df.index.values, df['reg'], color='red', label='Linear Regression Predictions')
plt.legend(loc='best')
plt.ylabel("Demand")
plt.xlabel("Date")
plt.title("Linear Regression Predictions on complete data")
plt.xticks([])
plt.show()
```
- We see that it performed poorly throughout
- Let's check only the testing part
```
plt.rcParams['figure.figsize'] = [20, 6]
plt.plot(xval.index.values, yval, color='green', label='Testing Data', alpha=0.8)
plt.plot(xval.index.values, reg.predict(xval), color='red', label='Linear Regression Predictions', alpha=0.7)
plt.legend(loc='best')
plt.ylabel("Demand")
plt.xlabel("Date")
plt.title("Linear Regression Predictions on testing data")
plt.xticks([])
plt.show()
```
- We can do a lot better with other models
### Gradient Boosting Model
```
from xgboost import XGBRegressor
gb =XGBRegressor()
gb= gb.fit(xtr,ytr)
print("Training R2 score",gb.score(xtr,ytr))
print("Testing R2 score",gb.score(xval,yval))
df["xgboost"]=gb.predict(x)
```
- Let's visualize the results of this model
```
plt.rcParams['figure.figsize'] = [20, 6]
plt.plot(xtr.index.values, ytr, color='blue', label='Training Data', alpha=0.5)
plt.plot(xval.index.values, yval, color='green', label='Testing Data', alpha=0.5)
plt.plot(df.index.values, df['xgboost'], color='red', label='XGBOOST Predictions', alpha=0.3)
plt.legend(loc='best')
plt.ylabel("Demand")
plt.xlabel("Date")
plt.title("Gradient Boosting Predictions on complete data")
plt.xticks([])
plt.show()
```
- This is much better than the previous model
- Let's look at just the testing part
```
plt.rcParams['figure.figsize'] = [20, 6]
plt.plot(xval.index.values, yval, color='green', label='Testing Data', alpha=0.8)
plt.plot(xval.index.values, gb.predict(xval), color='red', label='XGBOOST Predictions', alpha=0.7)
plt.legend(loc='best')
plt.xticks([])
plt.ylabel("Demand")
plt.xlabel("Date")
plt.title("Gradient Boosting Predictions on testing data")
plt.show()
```
- We can see that the predictions are not perfectly consistent, but they are reasonably good.
## Conclusions and Future Developments:
- The Linear Regression model performed the worst
- The Gradient Boosting model performed well
- To improve further, we could make use of recurrent neural networks (RNNs), which I expect would perform well in such time-series scenarios (see the sketch below)
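As an illustration only (not part of the original project), here is a minimal sketch of such a recurrent model. It assumes the daily df['Demand'] series built above, adds TensorFlow/Keras as an extra dependency, and uses an arbitrary 14-day window, layer width and epoch count:
```
# Hypothetical sketch: an LSTM that predicts the next day's demand from the
# previous `window` days. All hyperparameters here are illustrative.
import numpy as np
import tensorflow as tf

window = 14
series = df['Demand'].values.astype('float32')
X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
preds = model.predict(X[-5:])  # forecasts for the last five windows
```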
# VacationPy
----
#### Note
* Keep an eye on your API usage. Use https://developers.google.com/maps/reporting/gmp-reporting as reference for how to monitor your usage and billing.
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
```
!jupyter labextension list
!jupyter lab build
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
# Import API key
from api_keys import g_key
```
### Store Part I results into DataFrame
* Load the csv exported in Part I to a DataFrame
```
csv_file = os.path.join('output_data/cities.csv')
city_df = pd.read_csv(csv_file)
city_df
```
### Humidity Heatmap
* Configure gmaps.
* Use the Lat and Lng as locations and Humidity as the weight.
* Add Heatmap layer to map.
```
gmaps.configure(api_key=g_key)
locations = city_df[["Lat", "Lng"]]
humidity = city_df["Humidity"]
fig = gmaps.figure()
heat_layer = gmaps.heatmap_layer(locations, weights=humidity, dissipating=False)
heat_layer.max_intensity = 100
heat_layer.point_radius = 5
fig.add_layer(heat_layer)
fig
```
### Create new DataFrame fitting weather criteria
* Narrow down the cities to fit weather conditions.
* Drop any rows with null values.
```
ideal_df = city_df[(city_df['Temperature'] >= 75) & (city_df['Temperature'] <= 90)]
ideal_df = ideal_df[ideal_df['Wind Speed'] <= 10]
ideal_df = ideal_df[ideal_df['Clouds'] <= 10]
ideal_df = ideal_df[ideal_df['Humidity'] <= 70]
hotel_df = ideal_df
hotel_df
```
### Hotel Map
* Store into variable named `hotel_df`.
* Add a "Hotel Name" column to the DataFrame.
* Set parameters to search for hotels within 5000 meters.
* Hit the Google Places API for each city's coordinates.
* Store the first Hotel result into the DataFrame.
* Plot markers on top of the heatmap.
```
hotel_df['Hotel Name'] = ""
hotel_df
for index, row in hotel_df.iterrows():
try:
base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
params = {
"keyword": "hotel",
"radius": 5000,
"key": g_key,
}
lat = row['Lat']
lng = row['Lng']
params['location'] = f"{lat}, {lng}"
hotel_data = requests.get(base_url, params=params).json()
hotel_df.loc[index, "Hotel Name"] = hotel_data["results"][0]["name"]
except IndexError:
hotel_df.loc[index, "Hotel Name"] = "NaN"
hotel_df
# NOTE: Do not change any of the code in this cell
# Using the template add the hotel marks to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel Name}</dd>
<dt>City</dt><dd>{City}</dd>
<dt>Country</dt><dd>{Country}</dd>
</dl>
"""
# Store the DataFrame Row
# NOTE: be sure to update with your DataFrame name
hotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()]
locations = hotel_df[["Lat", "Lng"]]
markers = gmaps.marker_layer(locations, info_box_content=hotel_info)
fig.add_layer(markers)
fig
```
# Python Cheat Sheet
Basic cheatsheet for Python mostly based on the book written by Al Sweigart, [Automate the Boring Stuff with Python](https://automatetheboringstuff.com/) under the [Creative Commons license](https://creativecommons.org/licenses/by-nc-sa/3.0/) and many other sources.
## Read It
- [Website](https://www.pythoncheatsheet.org)
- [Github](https://github.com/wilfredinni/python-cheatsheet)
- [PDF](https://github.com/wilfredinni/Python-cheatsheet/raw/master/python_cheat_sheet.pdf)
- [Jupyter Notebook](https://mybinder.org/v2/gh/wilfredinni/python-cheatsheet/master?filepath=jupyter_notebooks)
## Reading and Writing Files
### The File Reading/Writing Process
To read/write to a file in Python, you will want to use the `with`
statement, which will close the file for you after you are done.
### Opening and reading files with the open function
```
with open('C:\\Users\\your_home_folder\\hello.txt') as hello_file:
hello_content = hello_file.read()
hello_content
```
Alternatively, you can use the _readlines()_ method to get a list of string values from the file, one string for each line of text:
```
with open('sonnet29.txt') as sonnet_file:
sonnet_file.readlines()
```
You can also iterate through the file line by line:
```
with open('sonnet29.txt') as sonnet_file:
for line in sonnet_file: # note the new line character will be included in the line
print(line, end='')
```
### Writing to Files
```
with open('bacon.txt', 'w') as bacon_file:
bacon_file.write('Hello world!\n')
with open('bacon.txt', 'a') as bacon_file:
bacon_file.write('Bacon is not a vegetable.')
with open('bacon.txt') as bacon_file:
content = bacon_file.read()
print(content)
```
### Saving Variables with the shelve Module
To save variables:
```
import shelve
cats = ['Zophie', 'Pooka', 'Simon']
with shelve.open('mydata') as shelf_file:
shelf_file['cats'] = cats
```
To open and read variables:
```
with shelve.open('mydata') as shelf_file:
print(type(shelf_file))
print(shelf_file['cats'])
```
Just like dictionaries, shelf values have keys() and values() methods that will return list-like values of the keys and values in the shelf. Since these methods return list-like values instead of true lists, you should pass them to the list() function to get them in list form.
```
with shelve.open('mydata') as shelf_file:
print(list(shelf_file.keys()))
print(list(shelf_file.values()))
```
### Saving Variables with pprint.pformat
```
import pprint
cats = [{'name': 'Zophie', 'desc': 'chubby'}, {'name': 'Pooka', 'desc': 'fluffy'}]
pprint.pformat(cats)
with open('myCats.py', 'w') as file_obj:
file_obj.write('cats = {}\n'.format(pprint.pformat(cats)))
```
### Reading ZIP Files
```
import zipfile, os
os.chdir('C:\\') # move to the folder with example.zip
with zipfile.ZipFile('example.zip') as example_zip:
print(example_zip.namelist())
spam_info = example_zip.getinfo('spam.txt')
print(spam_info.file_size)
print(spam_info.compress_size)
print('Compressed file is %sx smaller!' % (round(spam_info.file_size / spam_info.compress_size, 2)))
```
### Extracting from ZIP Files
The extractall() method for ZipFile objects extracts all the files and folders from a ZIP file into the current working directory.
```
import zipfile, os
os.chdir('C:\\') # move to the folder with example.zip
with zipfile.ZipFile('example.zip') as example_zip:
example_zip.extractall()
```
The extract() method for ZipFile objects will extract a single file from the ZIP file. Continue the interactive shell example:
```
with zipfile.ZipFile('example.zip') as example_zip:
print(example_zip.extract('spam.txt'))
print(example_zip.extract('spam.txt', 'C:\\some\\new\\folders'))
```
### Creating and Adding to ZIP Files
```
import zipfile
with zipfile.ZipFile('new.zip', 'w') as new_zip:
new_zip.write('spam.txt', compress_type=zipfile.ZIP_DEFLATED)
```
This code will create a new ZIP file named new.zip that has the compressed contents of spam.txt.
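To confirm that the archive was written as expected, you can list its contents with the same _namelist()_ method shown earlier:
```
import zipfile
with zipfile.ZipFile('new.zip') as new_zip:
    print(new_zip.namelist())  # should print ['spam.txt']
```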
## How to do it...
1. Import required libraries
```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from sklearn.preprocessing import MinMaxScaler
```
2. Set prediction horizon and network hyperparameters
```
prediction_days = 60
nof_units = 4
```
3. Encapsulate model creation within a function
```
def create_model(nunits):
# Initializing the RNN
regressor = Sequential()
# Adding the input layer and the LSTM layer
regressor.add(LSTM(units=nunits, activation='sigmoid', input_shape=(None, 1)))
# Add output layer
regressor.add(Dense(units = 1))
# Compiling the RNN
regressor.compile(optimizer='adam', loss='mean_squared_error')
return regressor
```
4. Load data
```
# Import the dataset and encode the date
df = pd.read_csv('bitstampUSD_1-min_data_2012-01-01_to_2021-03-31.csv')
df['date'] = pd.to_datetime(df['Timestamp'], unit='s').dt.date
group = df.groupby('date')
Real_Price = group['Weighted_Price'].mean()
```
5. Split data into train and test
```
df_train = Real_Price[:len(Real_Price)-prediction_days]
df_test = Real_Price[len(Real_Price)-prediction_days:]
```
6. Apply preprocessing to improve convergence
```
training_set = df_train.values
training_set = np.reshape(training_set, (len(training_set), 1))
sc = MinMaxScaler()
training_set = sc.fit_transform(training_set)
X_train = training_set[0:len(training_set)-1]
y_train = training_set[1:len(training_set)]
X_train = np.reshape(X_train, (len(X_train), 1, 1))
```
7. Fit model
```
regressor = create_model(nunits = nof_units)
regressor.fit(X_train, y_train, batch_size=5, epochs=100)
```
8. Using the trained model, we create a time series prediction over the horizon
```
test_set = df_test.values
inputs = np.reshape(test_set, (len(test_set), 1))
inputs = sc.transform(inputs)
inputs = np.reshape(inputs, (len(inputs), 1, 1))
predicted_BTC_price = regressor.predict(inputs)
predicted_BTC_price = sc.inverse_transform(predicted_BTC_price)
```
9. View prediction results
```
plt.figure(figsize=(25,15), dpi=80, facecolor='w', edgecolor='k')
ax = plt.gca()
plt.plot(test_set, color='red', label='Real BTC Price')
plt.plot(predicted_BTC_price, color='blue', label='Predicted BTC Price')
plt.title('BTC Price Prediction', fontsize=40)
df_test = df_test.reset_index()
x = df_test.index
labels = df_test['date']
plt.xticks(x, labels, rotation = 'vertical')
for tick in ax.xaxis.get_major_ticks():
tick.label1.set_fontsize(18)
for tick in ax.yaxis.get_major_ticks():
tick.label1.set_fontsize(18)
plt.xlabel('Time', fontsize=40)
plt.ylabel('BTC Price(USD)', fontsize=40)
plt.legend(loc=2, prop={'size': 25})
plt.show()
```
# Free Body Diagram for Rigid Bodies
Renato Naville Watanabe
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib notebook
```
## Equivalent systems
<figure><img src="./../images/forcecouple.png" width=660 alt="force couple"/><figcaption><center><i>Figure from [this notebook](FreeBodyDiagram.ipynb)</i></center></figcaption></figure>
## Constraints
Examples of constraints
## 1) Horizontal fixed bar
<figure><img src="../images/bar1.png\" width=200 />
<figure><img src="../images/bar1FBD.png\" width=250 />
The resultant force being applied to the bar is:
\begin{equation}
\vec{\bf{F}} = -mg\hat{\bf{j}} + \vec{\bf{F_1}}
\end{equation}
And the total moment in the z direction around the point O is:
\begin{equation}
\vec{\bf{M_O}} = \vec{\bf{r_{C/O}}}\times-mg\hat{\bf{j}} + \vec{\bf{M}}
\end{equation}
The vector from the point O to the point C is given by $\vec{\bf{r_{C/O}}} =\frac{l}{2}\hat{\bf{i}}$.
As the bar is fixed, all the accelerations are zero. So we can find the forces and the moment at the restraint.
\begin{equation}
\vec{\bf{F}} = \vec{\bf{0}} \rightarrow -mg\hat{\bf{j}} + \vec{\bf{F_1}} = \vec{\bf{0}} \rightarrow \vec{\bf{F_1}} = mg\hat{\bf{j}}
\end{equation}
\begin{equation}
\vec{\bf{M_O}} = \vec{\bf{0}} \rightarrow \vec{\bf{r_{C/O}}}\times-mg\hat{\bf{j}} + \vec{\bf{M}} = \vec{\bf{0}} \rightarrow \frac{l}{2}\hat{\bf{i}}\times-mg\hat{\bf{j}} + \vec{\bf{M}} = \vec{\bf{0}} \rightarrow -\frac{mgl}{2}\hat{\bf{k}} + \vec{\bf{M}} = \vec{\bf{0}} \rightarrow \vec{\bf{M}} = \frac{mgl}{2}\hat{\bf{k}}
\end{equation}
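As a quick numerical check of these expressions, we can evaluate the reaction force and moment for illustrative values of the mass and length (the values below are arbitrary and used only as an example):
```
# Quick numerical check of the static reactions of the fixed bar
# (the mass and length below are arbitrary, illustrative values)
import numpy as np

m = 1.0   # mass of the bar (kg)
l = 1.0   # length of the bar (m)
g = 9.81  # gravitational acceleration (m/s^2)

F1 = np.array([0, m*g, 0])      # reaction force at the constraint (N)
M  = np.array([0, 0, m*g*l/2])  # reaction moment at the constraint (N m)
print(F1, M)
```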
## 2) Rotating ball with drag force
A basketball has a mass of $m = 0.63$ kg and a radius of $R = 11.5$ cm.
<figure><img src="../images/ballRotDrag.png" width=250 /></figure>
The resultant force being applied at the ball is:
\begin{equation}
\vec{\bf{F}} = -mg\hat{\bf{j}} - b_l\vec{\bf{v}} = -mg\hat{\bf{j}} - b_l\frac{d\vec{\bf{r}}}{dt}
\end{equation}
\begin{equation}
\vec{\bf{M_C}} = - b_r\omega\hat{\bf{k}}=- b_r\frac{d\theta}{dt}\hat{\bf{k}}
\end{equation}
\begin{equation}
\frac{d\vec{\bf{H_C}}}{dt} = I_{zz}^{C}\frac{d^2\theta}{dt^2}\hat{\bf{k}}
\end{equation}
So, by the Newton-Euler laws:
\begin{equation}
\frac{d\vec{\bf{H_C}}}{dt}=\vec{\bf{M_C}} \rightarrow I_{zz}^{C}\frac{d^2\theta}{dt^2} = - b_r\frac{d\theta}{dt}
\end{equation}
and
\begin{equation}
m\frac{d^2\vec{\bf{r}}}{dt^2}=\vec{\bf{F}} \rightarrow \frac{d^2\vec{\bf{r}}}{dt^2} = -g\hat{\bf{j}} - \frac{b_l}{m}\frac{d\vec{\bf{r}}}{dt}
\end{equation}
So, we can split the differential equations above into three scalar equations:
\begin{equation}
\frac{d^2\theta}{dt^2} = - \frac{b_r}{I_{zz}^{C}}\frac{d\theta}{dt}
\end{equation}
\begin{equation}
\frac{d^2x}{dt^2} = - \frac{b_l}{m}\frac{dx}{dt}
\end{equation}
\begin{equation}
\frac{d^2y}{dt^2} = -g - \frac{b_l}{m}\frac{dy}{dt}
\end{equation}
To solve these equations numerically, we can rewrite each of them as a system of first-order equations, which can be written in matrix form:
\begin{equation}
\left[\begin{array}{c}\frac{d\omega}{dt}\\\frac{dv_x}{dt}\\\frac{dv_y}{dt}\\\frac{d\theta}{dt}\\\frac{dx}{dt}\\\frac{dy}{dt} \end{array}\right] = \left[\begin{array}{c}- \frac{b_r}{I_{zz}^{C}}\omega\\- \frac{b_l}{m}v_x\\-g - \frac{b_l}{m}v_y\\\omega\\v_x\\v_y\end{array}\right]
\end{equation}
```
m = 0.63
R = 0.12
I = 2.0/3*m*R**2
bl = 0.5
br = 0.001
g = 9.81
x0 = 0
y0 = 2
v0 = 9.5
angle = 51*np.pi/180.0
vx0 = v0*np.cos(angle)
vy0 = v0*np.sin(angle)
dt = 0.001
t = np.arange(0, 2.1, dt)
x = x0
y = y0
vx = vx0
vy = vy0
omega = 100
theta = 0
r = np.array([x,y])
ballAngle = np.array([theta])
state = np.array([omega, vx, vy, theta, x, y])
while state[4]<=4.6:
dstatedt = np.array([-br/I*state[0],-bl/m*state[1], -g-bl/m*state[2],state[0], state[1], state[2] ])
state = state + dt * dstatedt
r = np.vstack((r, [state[4], state[5]]))
ballAngle = np.vstack((ballAngle, [state[3]]))
plt.figure()
plt.plot(r[0:-1:50,0], r[0:-1:50,1], 'o', color=np.array([1, 0.6, 0]), markersize=10)
plt.plot(np.array([4, 4.45]), np.array([3.05, 3.05]))
for i in range(len(r[0:-1:50,0])):
plt.plot(r[i*50,0]+np.array([-0.05*(np.cos(ballAngle[i*50])-np.sin(ballAngle[i*50])), 0.05*(np.cos(ballAngle[i*50])-np.sin(ballAngle[i*50]))]),
r[i*50,1] + np.array([-0.05*(np.sin(ballAngle[i*50])+np.cos(ballAngle[i*50])), 0.05*(np.sin(ballAngle[i*50])+np.cos(ballAngle[i*50]))]),'k')
plt.ylim((0,4.5))
plt.show()
print(state[0])
```
## 3) Rotating ball with drag force and Magnus force
A basketball has a mass of $m = 0.63$ kg and a radius of $R = 11.5$ cm.
The resultant force being applied at the ball is:
\begin{equation}
\vec{\bf{F}} = -mg\hat{\bf{j}} - b_l\vec{\bf{v}} + b_m\omega\hat{\bf{k}}\times \vec{\bf{v}} = -mg\hat{\bf{j}} - b_l\frac{d\vec{\bf{r}}}{dt} + b_m\omega\hat{\bf{k}}\times \frac{d\vec{\bf{r}}}{dt} = -mg\hat{\bf{j}} - b_l\frac{d\vec{\bf{r}}}{dt} + b_m\omega\frac{dx}{dt}\hat{\bf{j}}-b_m\omega\frac{dy}{dt}\hat{\bf{i}}
\end{equation}
\begin{equation}
\vec{\bf{M_C}} = - b_r\omega\hat{\bf{k}}=- b_r\frac{d\theta}{dt}\hat{\bf{k}}
\end{equation}
\begin{equation}
\frac{d\vec{\bf{H_C}}}{dt} = I_{zz}^{C}\frac{d^2\theta}{dt^2}\hat{\bf{k}}
\end{equation}
So, by the Newton-Euler laws:
\begin{equation}
\frac{d\vec{\bf{H_C}}}{dt}=\vec{\bf{M_C}} \rightarrow I_{zz}^{C}\frac{d^2\theta}{dt^2} = - b_r\frac{d\theta}{dt}
\end{equation}
and
\begin{equation}
m\frac{d^2\vec{\bf{r}}}{dt^2}=\vec{\bf{F}} \rightarrow \frac{d^2\vec{\bf{r}}}{dt^2} = -g\hat{\bf{j}} - \frac{b_l}{m}\frac{d\vec{\bf{r}}}{dt} + \frac{b_m}{m}\omega\frac{dx}{dt}\hat{\bf{j}}-\frac{b_m}{m}\omega\frac{dy}{dt}\hat{\bf{i}}
\end{equation}
So, we can split the differential equations above into three scalar equations:
\begin{equation}
\frac{d^2\theta}{dt^2} = - \frac{b_r}{I_{zz}^{C}}\frac{d\theta}{dt}
\end{equation}
\begin{equation}
\frac{d^2x}{dt^2} = - \frac{b_l}{m}\frac{dx}{dt} -\frac{b_m}{m}\omega\frac{dy}{dt}
\end{equation}
\begin{equation}
\frac{d^2y}{dt^2} = -g - \frac{b_l}{m}\frac{dy}{dt} + \frac{b_m}{m}\omega\frac{dx}{dt}
\end{equation}
To solve these equations numerically, we can rewrite each of them as a system of first-order equations, which can be written in matrix form:
\begin{equation}
\left[\begin{array}{c}\frac{d\omega}{dt}\\\frac{dv_x}{dt}\\\frac{dv_y}{dt}\\\frac{d\theta}{dt}\\\frac{dx}{dt}\\\frac{dy}{dt} \end{array}\right] = \left[\begin{array}{c}- \frac{b_r}{I_{zz}^{C}}\omega\\- \frac{b_l}{m}v_x - \frac{b_m}{m}\omega v_y\\-g - \frac{b_l}{m}v_y + \frac{b_m}{m}\omega v_x\\\omega\\v_x\\v_y\end{array}\right]
\end{equation}
```
m = 0.63
R = 0.12
I = 2.0/3*m*R**2
bl = 0.5
br = 0.001
bm = 16/3*np.pi**2*R**3
g = 9.81
x0 = 0
y0 = 2
v0 = 9.4
angle = 50*np.pi/180.0
vx0 = v0*np.cos(angle)
vy0 = v0*np.sin(angle)
dt = 0.001
t = np.arange(0, 2.1, dt)
x = x0
y = y0
vx = vx0
vy = vy0
omega = 30
theta = 0
r = np.array([x,y])
ballAngle = np.array([theta])
state = np.array([omega, vx, vy, theta, x, y])
while state[4]<=4.6:
dstatedt = np.array([-br/I*state[0],-bl/m*state[1]-bm/m*state[2], -g-bl/m*state[2]+bm/m*state[1],state[0], state[1], state[2] ])
state = state + dt * dstatedt
r = np.vstack((r, [state[4], state[5]]))
ballAngle = np.vstack((ballAngle, [state[3]]))
plt.figure()
plt.plot(r[0:-1:50,0], r[0:-1:50,1], 'o', color=np.array([1, 0.6, 0]), markersize=10)
plt.plot(np.array([4, 4.45]), np.array([3.05, 3.05]))
for i in range(len(r[0:-1:50,0])):
plt.plot(r[i*50,0]+np.array([-0.05*(np.cos(ballAngle[i*50])-np.sin(ballAngle[i*50])), 0.05*(np.cos(ballAngle[i*50])-np.sin(ballAngle[i*50]))]),
r[i*50,1] + np.array([-0.05*(np.sin(ballAngle[i*50])+np.cos(ballAngle[i*50])), 0.05*(np.sin(ballAngle[i*50])+np.cos(ballAngle[i*50]))]),'k')
plt.ylim((0,4.5))
plt.show()
print(state[0])
```
## 4) Force platform
## 5) Pendulum
<figure><img src="../images/pendulum.png\" width=350 />
<figure><img src="../images/pendulumFBD.png\" width=300 />
The moment around the fixed point is:
\begin{equation}
\vec{\bf{M_O}} = \vec{\bf{r_{cm/O}}} \times (-mg\hat{\bf{j}})
\end{equation}
The resultant force applied in the bar is:
\begin{equation}
\vec{\bf{F}} = -mg\hat{\bf{j}} + \vec{\bf{F_1}}
\end{equation}
The angular momentum derivative of the bar around point O is:
\begin{equation}
\frac{d\vec{\bf{H_O}}}{dt} = I_{zz}^{cm} \frac{d^2\theta}{dt^2} \hat{\bf{k}} + m \vec{\bf{r_{cm/O}}} \times \vec{\bf{a_{cm}}}
\end{equation}
The vector from point O to the center of mass is:
\begin{equation}
\vec{\bf{r_{cm/O}}} = \frac{l}{2}\sin{\theta}\hat{\bf{i}}-\frac{l}{2}\cos{\theta}\hat{\bf{j}}
\end{equation}
The position of the center of mass is, considering the point O as the origin, equal to $\vec{\bf{r_{cm/O}}}$. So, the center of mass acceleration is obtained by deriving it twice.
\begin{equation}
\vec{\bf{v_{cm}}} = \frac{d\vec{\bf{r_{cm/O}}}}{dt} = \frac{l}{2}(\cos{\theta}\hat{\bf{i}}+\sin{\theta}\hat{\bf{j}})\frac{d\theta}{dt} \rightarrow \vec{\bf{a_{cm}}} = \frac{l}{2}(-\sin{\theta}\hat{\bf{i}}+\cos{\theta}\hat{\bf{j}})\left(\frac{d\theta}{dt}\right)^2 + \frac{l}{2}(\cos{\theta}\hat{\bf{i}}+\sin{\theta}\hat{\bf{j}})\frac{d^2\theta}{dt^2}
\end{equation}
So, the moment around the point O:
\begin{equation}
\vec{\bf{M_O}} = \left(\frac{l}{2}\sin{\theta}\hat{\bf{i}}-\frac{l}{2}\cos{\theta}\hat{\bf{j}}\right) \times (-mg\hat{\bf{j}}) = \frac{-mgl}{2}\sin{\theta}\hat{\bf{k}}
\end{equation}
And the derivative of the angular momentum is:
\begin{equation}
\frac{d\vec{\bf{H_O}}}{dt} = I_{zz}^{cm} \frac{d^2\theta}{dt^2}\hat{\bf{k}} + m \frac{l}{2}(\sin{\theta}\hat{\bf{i}}-\cos{\theta}\hat{\bf{j}}) \times \left[ \frac{l}{2}(-\sin{\theta}\hat{\bf{i}}+\cos{\theta}\hat{\bf{j}})\left(\frac{d\theta}{dt}\right)^2 + \frac{l}{2}(\cos{\theta}\hat{\bf{i}}+\sin{\theta}\hat{\bf{j}})\frac{d^2\theta}{dt^2} \right] = I_{zz}^{cm} \frac{d^2\theta}{dt^2}\hat{\bf{k}} + m \frac{l^2}{4}\frac{d^2\theta}{dt^2} \hat{\bf{k}} =\left(I_{zz}^{cm} + \frac{ml^2}{4}\right)\frac{d^2\theta}{dt^2} \hat{\bf{k}}
\end{equation}
Now, by using the Newton-Euler laws, we can obtain the differential equation that describes the bar angle along time:
\begin{equation}
\frac{d\vec{\bf{H_O}}}{dt} = \vec{\bf{M_O}} \rightarrow \frac{-mgl}{2}\sin{\theta} = \left(I_{zz}^{cm} + \frac{ml^2}{4}\right)\frac{d^2\theta}{dt^2} \rightarrow \frac{d^2\theta}{dt^2} = \frac{-2mgl}{\left(4I_{zz}^{cm} + ml^2\right)}\sin{\theta}
\end{equation}
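Although this equation is not integrated in the notebook, a minimal sketch of how it could be solved numerically, using the same explicit Euler scheme of the examples above, is shown below (the mass, length and initial angle are arbitrary illustrative values):
```
# Minimal sketch: numerical integration of the pendulum equation of motion
# with the same explicit Euler scheme used in the previous examples.
# All parameter values are arbitrary, illustrative choices.
import numpy as np
import matplotlib.pyplot as plt

m = 1.0          # mass of the bar (kg)
l = 1.0          # length of the bar (m)
g = 9.81         # gravitational acceleration (m/s^2)
I = m*l**2/12    # moment of inertia of a slender bar about its center of mass

dt = 0.001
t = np.arange(0, 10, dt)
theta = np.zeros_like(t)
omega = np.zeros_like(t)
theta[0] = np.pi/4   # initial angle (rad)

for i in range(1, len(t)):
    domegadt = -2*m*g*l/(4*I + m*l**2)*np.sin(theta[i-1])
    omega[i] = omega[i-1] + dt*domegadt
    theta[i] = theta[i-1] + dt*omega[i-1]

plt.figure()
plt.plot(t, theta)
plt.xlabel('t (s)')
plt.ylabel('theta (rad)')
plt.show()
```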
## 6) Inverted Pendulum
<figure><img src="../images/invertedPendulum.png\" width=350 />
<figure><img src="../images/invertedPendulumFBD.png\" width=300 />
The moment around the fixed point is:
\begin{equation}
\vec{\bf{M_O}} = \vec{\bf{r_{cm/O}}} \times (-mg\hat{\bf{j}})
\end{equation}
The resultant force applied in the bar is:
\begin{equation}
\vec{\bf{F}} = -mg\hat{\bf{j}} + \vec{\bf{F_1}}
\end{equation}
The angular momentum derivative of the bar around point O is:
\begin{equation}
\frac{d\vec{\bf{H_O}}}{dt} = I_{zz}^{cm} \frac{d^2\theta}{dt^2} \hat{\bf{k}} + m \vec{\bf{r_{cm/O}}} \times \vec{\bf{a_{cm}}}
\end{equation}
Up to this point, the equations are exactly the same as in the pendulum example. The difference is in the kinematics of the bar. Now, the vector from point O to the center of mass is:
\begin{equation}
\vec{\bf{r_{cm/O}}} = -\frac{l}{2}\sin{\theta}\hat{\bf{i}}+\frac{l}{2}\cos{\theta}\hat{\bf{j}}
\end{equation}
The position of the center of mass of the bar is equal to the vector $\vec{\bf{r_{cm/O}}}$, since the point O has zero velocity relative to the global reference frame. So the center of mass acceleration can be obtained by deriving this vector twice:
\begin{equation}
\vec{\bf{v_{cm}}} = \frac{d\vec{\bf{r_{cm/O}}}}{dt} = -\frac{l}{2}(\cos{\theta}\hat{\bf{i}}+\sin{\theta}\hat{\bf{j}})\frac{d\theta}{dt} \rightarrow \vec{\bf{a_{cm}}} = \frac{l}{2}(\sin{\theta}\hat{\bf{i}}-\cos{\theta}\hat{\bf{j}})\left(\frac{d\theta}{dt}\right)^2 - \frac{l}{2}(\cos{\theta}\hat{\bf{i}}+\sin{\theta}\hat{\bf{j}})\frac{d^2\theta}{dt^2}
\end{equation}
So, the moment around the point O is:
\begin{equation}
\vec{\bf{M_O}} = \left(-\frac{l}{2}\sin{\theta}\hat{\bf{i}}+\frac{l}{2}\cos{\theta}\hat{\bf{j}}\right) \times (-mg\hat{\bf{j}}) = \frac{mgl}{2}\sin{\theta} \hat{\bf{k}}
\end{equation}
And the derivative of the angular momentum is:
\begin{equation}
\frac{d\vec{\bf{H_O}}}{dt} = I_{zz}^{cm} \frac{d^2\theta}{dt^2} \hat{\bf{k}} + m \left(-\frac{l}{2}\sin{\theta}\hat{\bf{i}}+\frac{l}{2}\cos{\theta}\hat{\bf{j}}\right) \times \left[\frac{l}{2}(\sin{\theta}\hat{\bf{i}}-\cos{\theta}\hat{\bf{j}})\left(\frac{d\theta}{dt}\right)^2 - \frac{l}{2}(\cos{\theta}\hat{\bf{i}}+\sin{\theta}\hat{\bf{j}})\frac{d^2\theta}{dt^2}\right] = I_{zz}^{cm} \frac{d^2\theta}{dt^2} \hat{\bf{k}} + m \frac{l^2}{4}\frac{d^2\theta}{dt^2}\hat{\bf{k}} = \left(I_{zz}^{cm} + \frac{ml^2}{4}\right)\frac{d^2\theta}{dt^2}\hat{\bf{k}}
\end{equation}
By using the Newton-Euler laws, we can find the equation of motion of the bar:
\begin{equation}
\frac{d\vec{\bf{H_O}}}{dt} = \vec{\bf{M_O}} \rightarrow \left(I_{zz}^{cm} + \frac{ml^2}{4}\right)\frac{d^2\theta}{dt^2} = \frac{mgl}{2}\sin{\theta} \rightarrow \frac{d^2\theta}{dt^2} = \frac{2mgl}{\left(4I_{zz}^{cm} + ml^2\right)}\sin(\theta)
\end{equation}
## 7) Quiet standing
<figure><img src="../images/quietStanding.png\" width=350 /><figcaption><center>Adapted from [[3]](http://dx.doi.org/10.1371/journal.pcbi.1003944) </center></figcaption></figure>
<figure><img src="../images/quietStandingFBD.png\" width=300 /></figure>
The model is very similar to the inverted pendulum shown above. The difference is the muscular moment at the ankle joint. It is usual in Biomechanics to represent the net torque generated by all the muscles on a single joint as a single moment applied to the body.
The process to obtain the equation of motion of the angle is very similar. The moment around the ankle being applied to the body is:
\begin{equation}
\vec{\bf{M_A}} = \vec{\bf{T_A}} + \vec{\bf{r_{cm/A}}} \times (-m_Bg\hat{\bf{j}})
\end{equation}
And the derivative of the angular momentum is:
\begin{equation}
\frac{d\vec{\bf{H_A}}}{dt} = I_{zz}^{cm}\frac{d^2\theta_A}{dt^2}\hat{\bf{k}} + m\vec{\bf{r_{cm/A}}} \times \vec{\bf{a_{cm}}}
\end{equation}
To find the kinematics of the bar, we could do the same procedure we have used in the pendulum and inverted pendulum examples, but this time we will use polar coordinates (for a revision on polar coordinates, see [Polar coordinates notebook](PolarCoordinates.ipynb)).
\begin{equation}
\vec{\bf{r_{cm/A}}} = \vec{\bf{r_{cm}}} = h_G\hat{\bf{e_r}} \rightarrow \vec{\bf{v_{cm}}} = h_G\frac{d\theta_A}{dt}\hat{\bf{e_\theta}} \rightarrow \vec{\bf{a_{cm}}} = -h_G\left(\frac{d\theta_A}{dt}\right)^2\hat{\bf{e_r}} + h_G\frac{d^2\theta_A}{dt^2}\hat{\bf{e_\theta}}
\end{equation}
where $\hat{\bf{e_r}} = -\sin(\theta_A)\hat{\bf{i}} + \cos(\theta_A)\hat{\bf{j}}$ and $\hat{\bf{e_\theta}} = -\cos(\theta_A)\hat{\bf{i}} - \sin(\theta_A)\hat{\bf{j}}$
Having the kinematics computed, we can go back to the moment and derivative of the angular momentum:
\begin{equation}
\vec{\bf{M_A}} = \vec{\bf{T_A}} + h_G\hat{\bf{e_r}} \times (-m_Bg\hat{\bf{j}}) = T_A\hat{\bf{k}} + h_Gm_Bg\sin(\theta_A)\hat{\bf{k}}
\end{equation}
\begin{equation}
\frac{d\vec{\bf{H_A}}}{dt} = I_{zz}^{cm}\frac{d^2\theta_A}{dt^2} \hat{\bf{k}} + mh_G\hat{\bf{e_r}} \times \left(-h_G\left(\frac{d\theta_A}{dt}\right)^2\hat{\bf{e_r}} + h_G\frac{d^2\theta_A}{dt^2}\hat{\bf{e_\theta}}\right) = I_{zz}^{cm}\frac{d^2\theta_A}{dt^2}\hat{\bf{k}} + mh_G^2\frac{d^2\theta_A}{dt^2}\hat{\bf{k}} = \left(I_{zz}^{cm} + mh_G^2\right)\frac{d^2\theta_A}{dt^2}\hat{\bf{k}}
\end{equation}
By using the Newton-Euler equations, we can now find the equation of motion of the body during quiet standing:
\begin{equation}
\vec{\bf{M_A}}=\frac{d\vec{\bf{H_A}}}{dt} \rightarrow \left(I_{zz}^{cm} + mh_G^2\right)\frac{d^2\theta_A}{dt^2} = T_A + h_Gm_B g\sin(\theta_A) \rightarrow \frac{d^2\theta_A}{dt^2} = \frac{h_Gm_B g}{I_{zz}^{cm} + mh_G^2}\sin(\theta_A)+ \frac{T_A}{I_{zz}^{cm} + mh_G^2}
\end{equation}
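The ankle torque $T_A$ is not specified by the model above. Purely for illustration, the sketch below integrates this equation with $T_A$ replaced by a simple proportional-derivative feedback on the ankle angle; the body parameters and the feedback gains are arbitrary assumptions, not values taken from the references:
```
# Sketch: quiet standing as an inverted pendulum with an assumed PD ankle torque.
# All parameter values (mass, center of mass height, inertia, gains) are
# arbitrary assumptions used only for illustration.
import numpy as np
import matplotlib.pyplot as plt

mB = 70.0   # body mass above the ankle (kg), assumed
hG = 1.0    # height of the center of mass above the ankle (m), assumed
I = 60.0    # moment of inertia about the center of mass (kg m^2), assumed
g = 9.81
Kp, Kd = 1200.0, 300.0   # assumed feedback gains

dt = 0.001
t = np.arange(0, 10, dt)
theta = np.zeros_like(t)
omega = np.zeros_like(t)
theta[0] = 0.05   # small initial forward lean (rad)

for i in range(1, len(t)):
    TA = -Kp*theta[i-1] - Kd*omega[i-1]   # assumed PD ankle torque
    domegadt = (hG*mB*g*np.sin(theta[i-1]) + TA)/(I + mB*hG**2)
    omega[i] = omega[i-1] + dt*domegadt
    theta[i] = theta[i-1] + dt*omega[i-1]

plt.figure()
plt.plot(t, theta)
plt.xlabel('t (s)')
plt.ylabel('ankle angle (rad)')
plt.show()
```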
## 8) Segway
The horizontal acceleration of the point A of the segway is known. There is a motor at point A that generates a moment $\vec{\bf{T}}$.
<figure><img src="../images/segway.png\" width=350 />
<figure><img src="../images/segwayFBD.png\" width=350 />
Considering the bar, the moment around the point A is:
\begin{equation}
\vec{\bf{M_A}} = \vec{\bf{r_{cm/A}}} \times (-mg\hat{\bf{j}}) + \vec{\bf{T}}
\end{equation}
And the derivative of the angular momentum around the point A is:
\begin{equation}
\frac{d\vec{H_A}}{dt} = I_{zz}^{cm}\frac{d^2\theta}{dt^2}+m\vec{\bf{r_{cm/A}}}\times \vec{\bf{a_{cm}}}
\end{equation}
The vector from the point A to the center of mass is:
\begin{equation}
\vec{\bf{r_{cm/A}}} = \frac{l}{2}\hat{\bf{e_R}}
\end{equation}
and the acceleration of the center of mass is obtained by deriving the position of the center of mass twice:
\begin{equation}
\vec{\bf{r_{cm}}} = \vec{\bf{r_{A}}} + \vec{\bf{r_{cm/A}}} = \vec{\bf{r_{A}}} + \frac{l}{2}\hat{\bf{e_R}} \rightarrow \vec{\bf{v_{cm}}} = \vec{\bf{v_{A}}} + \frac{l}{2}\frac{d\theta}{dt}\hat{\bf{e_\theta}} \rightarrow \vec{\bf{a_{cm}}} = \vec{\bf{a_{A}}} - \frac{l}{2}\left(\frac{d\theta}{dt}\right)^2\hat{\bf{e_R}}+\frac{l}{2}\frac{d^2\theta}{dt^2}\hat{\bf{e_\theta}}
\end{equation}
where $\hat{\bf{e_R}} = -\sin(\theta)\hat{\bf{i}}+\cos(\theta)\hat{\bf{j}}$ and $\hat{\bf{e_\theta}} = -\cos(\theta)\hat{\bf{i}}-\sin(\theta)\hat{\bf{j}}$.
Using the Newton-Euler laws, we can find the equation of motion of the upper part of the segway:
\begin{equation}
\frac{d\vec{H_A}}{dt} = \vec{\bf{M_A}} \rightarrow I_{zz}^{cm}\frac{d^2\theta}{dt^2}\hat{\bf{k}}+m\vec{\bf{r_{cm/A}}}\times \vec{\bf{a_{cm}}} = \vec{\bf{r_{cm/A}}} \times (-mg\hat{\bf{j}}) + \vec{\bf{T}} \rightarrow I_{zz}^{cm}\frac{d^2\theta}{dt^2}+m\frac{l}{2}\hat{\bf{e_R}}\times\left(a_{A}\hat{\bf{i}} - \frac{l}{2}\left(\frac{d\theta}{dt}\right)^2\hat{\bf{e_R}}+\frac{l}{2}\frac{d^2\theta}{dt^2}\hat{\bf{e_\theta}}\right) = ( \frac{l}{2}\hat{\bf{e_R}} \times (-mg\hat{\bf{j}}) + \vec{\bf{T}}) \rightarrow I_{zz}^{cm}\frac{d^2\theta}{dt^2}\hat{\bf{k}}-\frac{ml}{2}a_{A}\cos(\theta)\hat{\bf{k}} +\frac{ml^2}{4}\frac{d^2\theta}{dt^2}\hat{\bf{k}} = \frac{mgl}{2}\sin(\theta)\hat{\bf{k}} + T\hat{\bf{k}}
\end{equation}
So, the equation of motion of the upper part of the segway is:
\begin{equation}
\frac{d^2\theta}{dt^2} = \frac{mgl}{2\left(I_{zz}^{cm}+\frac{ml^2}{4}\right)}\sin(\theta) + \frac{ml}{2\left(I_{zz}^{cm}+\frac{ml^2}{4}\right)}a_{A}\cos(\theta)+ \frac{T}{\left(I_{zz}^{cm}+\frac{ml^2}{4}\right)}
\end{equation}
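As a sketch of how the prescribed base acceleration enters this equation, the code below integrates it with an assumed short acceleration pulse for $a_A(t)$ and zero motor torque; all parameter values are arbitrary, illustrative assumptions:
```
# Sketch: response of the segway's upper bar to an assumed base acceleration pulse.
# The mass, length, inertia and acceleration profile are arbitrary assumptions,
# and the motor torque T is set to zero here.
import numpy as np
import matplotlib.pyplot as plt

m, l, g = 1.0, 1.0, 9.81
I = m*l**2/12
T = 0.0

dt = 0.001
t = np.arange(0, 2, dt)
aA = np.where(t < 0.5, 2.0, 0.0)   # assumed base acceleration: 2 m/s^2 for 0.5 s
theta = np.zeros_like(t)
omega = np.zeros_like(t)
theta[0] = 0.01   # small initial tilt (rad)

den = I + m*l**2/4
for i in range(1, len(t)):
    domegadt = (m*g*l/2*np.sin(theta[i-1])
                + m*l/2*aA[i-1]*np.cos(theta[i-1]) + T)/den
    omega[i] = omega[i-1] + dt*domegadt
    theta[i] = theta[i-1] + dt*omega[i-1]

plt.figure()
plt.plot(t, theta)
plt.xlabel('t (s)')
plt.ylabel('theta (rad)')
plt.show()
```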
## 9) Bars linked by spring and damping
## 10) Double pendulum with actuators
<figure><img src="../images/doublePendulum.png\" width=350 />
<figure><img src="../images/doublePendulumFBD.png\" width=800 />
This model can represent, for example, the arm and forearm.
First, we can analyse both bars together.
### Problems
- Solve the problems 6 and 8 from [this Notebook](FreeBodyDiagram.ipynb).
- Solve the problems 16.2.9, 18.2.24, 18.3.26, 18.3.29 from Ruina and Pratap book.
### References
- Ruina A., Rudra P. (2015) [Introduction to Statics and Dynamics](http://ruina.tam.cornell.edu/Book/index.html). Oxford University Press.
- Duarte M. (2017) [Free body diagram](FreeBodyDiagram.ipynb)
- [Elias, L A, Watanabe, R N, Kohn, A F.(2014) Spinal Mechanisms May Provide a Combination of Intermittent and Continuous Control of Human Posture: Predictions from a Biologically Based Neuromusculoskeletal Model. PLOS Computational Biology (Online)](http://dx.doi.org/10.1371/journal.pcbi.1003944)
#Lab.10 / IBM3202 – Conformational changes using Structure-Based Models
###Theoretical aspects
The **energy landscape theory** and the **principle of minimum frustration** in protein folding have provided the theoretical basis for the generation of simplified models to simulate the folding pathways of different proteins. Notably, recent work has demonstrated their utility for simulating other functionally relevant phenomena, such as **protein misfolding and conformational changes** associated with biological function. Most of these applications have been generated through savvy and careful combinations of the native bonded and non-bonded terms from **two or more structures deposited in the PDB** in two or more conformational states (i.e. open and closed conformations, alpha and beta states, etc).
<figure>
<center>
<img src='https://raw.githubusercontent.com/pb3lab/ibm3202/master/images/smogdual_01.png'/>
<figcaption>FIGURE 1. Modeling the conformational transitions of adenylate kinase. This enzyme undergoes a >25 Å motion between open (red, PDB 4AKE) and closed (green, PDB 1AKE) states due to ligand binding. The relative occupation of the closed and open states can be tuned to experimental data by varying the strength of the subset of contacts only existing in the closed state between 0.6 (red) to 1.2 (black) relative to the open contacts <br> Noel JK & Onuchic JN (2012) <i> Computational Modeling of Biological Systems, 31-54</i></figcaption></center>
</figure>
##Experimental Overview
In this tutorial we will exemplify how we can **combine the native contacts of two different structures** to simulate the conformational change of adenylate kinase, an enzyme with three domains (LID, NMP and core) that catalyzes the phosphorylation of AMP by ATP, generating two molecules of ADP as product:
<figure>
<center>
<img src='https://raw.githubusercontent.com/pb3lab/ibm3202/master/images/smogdual_02.png'/>
</center>
</figure>
This reaction requires a large conformational change, which can be seen by comparing the structures of the protein in the presence (1AKE) and in the absence (4AKE) of substrates.
#Part 0. Downloading and Installing the required software
Before we start, **remember to start the hosted runtime** in Google Colab.
Then, we must install several pieces of software to perform this tutorial. Namely:
- **biopython** for manipulation of the PDB files
- **py3Dmol** for visualization of the protein structure.
- **cpanm** for installation of several Perl utilities required to run SMOG2
- **SMOG2** for generating our structure-based models
- **SBM-enhanced GROMACS** for preparing our MD system and performing our MD simulations.
For visualizing our MD trajectories, we will employ a web version of **NGLview**, since Google Colab currently cannot handle the python package required for loading NGLview directly in the notebook. Hopefully this will change in the near future.
1. First, we will start by downloading and setting up SMOG2 on Google Colab, which requires the installation of several Perl utilities using cpanminus
**NOTE**: This installation takes ~10 min. If possible, perform this installation before the tutorial session starts.
```
#Downloading and extracting SMOG2 from the SMOG-server
!wget http://smog-server.org/smog2/code/smog-2.4.2.tgz
!tar zxf smog-2.4.2.tgz
#Automatic configuration of cpan for Perl
!echo y | cpan
#Installing cpanm for easy installation of Perl utilities
!cpan App::cpanminus
#Installing all required Perl utilities for SMOG2
!cpanm String::Util #--local-lib $nb_path
!cpanm XML::Simple #--local-lib $nb_path
!cpanm Exporter #--local-lib $nb_path
!cpanm PDL #--local-lib $nb_path
!cpanm XML::Validator::Schema #--local-lib $nb_path
# New Perl module added: requirement for smog v2.4.2
!cpanm XML::LibXML #--local-lib $nb_path
#Download a preconfigured SMOG2 file and test the installation
%%bash
rm -r /content/smog-2.4.2/configure.smog2
wget -P /content/smog-2.4.2 https://github.com/pb3lab/ibm3202/raw/master/software/configure.smog2
source /content/smog-2.4.2/configure.smog2
smog2 -h
```
2. Then, we will set up our SBM-enhanced GROMACS on Google Colab, based on your previously compiled and installed GROMACS
```
# Download and unzip the compressed folder of SBM-enhanced GROMACS
!wget https://raw.githubusercontent.com/pb3lab/ibm3202/master/software/gromacs_sbm.tar.gz
!tar xzf gromacs_sbm.tar.gz
# Check if SBM-enhanced GROMACS is working
%%bash
source /content/gromacs_sbm/bin/GMXRC
gmx help commands
```
3. Lastly, we will install biopython and py3Dmol
```
!pip install biopython py3dmol
```
Once these software installation processes are completed, we are ready to perform our experiments.
# Part I – Generate coarse-grained SBM models using SMOG2
As we did in the previous tutorial, we will first download the coordinates for the solved structures of adenylate kinase in the open (4AKE) and closed (1AKE) states to generate coarse-grained SBMs for both structures using SMOG2. Then, we will employ a combination of numerical analysis and SMOG2 to generate custom-built SBM models that contain information from both structures in a single file (**dual-basin models**).
1. We start by creating and accessing a folder for preparing our systems
```
#Let's make a folder first. We need to import the os and path library
import os
from pathlib import Path
#Then, we define the path of the folder we want to create.
#Notice that the HOME folder for a hosted runtime in colab is /content/
smogpath = Path("/content/prepare_dualAKE/")
#Now, we create the folder using the os.mkdir() command
#The if conditional is just to check whether the folder already exists
#In which case, python returns an error
if os.path.exists(smogpath):
print("path already exists")
if not os.path.exists(smogpath):
os.mkdir(smogpath)
print("path was succesfully created")
#Changing directory using python
os.chdir(smogpath)
```
2. Then, we will download the solved structures of adenylate kinase in the open (PDB 4AKE) and closed (PDB 1AKE) conformations, and remove alternative side chain conformations, water molecules and ligands using biopython as we have done in our previous tutorials.
**NOTE:** You might get a _chain discontinuity_ warning on biopython. In this particular case, this is due to the non-contiguous annotation of non-protein atoms from chain A and B in the PDB file.
```
#Importing your PDB file using biopython
import os
from Bio.PDB import *
pdbid = ['1ake', '4ake']
pdbl = PDBList()
for s in pdbid:
pdbl.retrieve_pdb_file(s, pdir='.', file_format ="pdb", overwrite=True)
os.rename("pdb"+s+".ent", s+".pdb")
#Here we set up a parser for our PDB
parser = PDBParser()
io=PDBIO()
#And here we set the residue conformation we want to keep
keepAltID = "A"
class KeepOneConfOnly(Select): # Inherit methods from Select class
def accept_atom(self, atom):
if (not atom.is_disordered()) or atom.get_altloc() == keepAltID:
atom.set_altloc(" ") # Eliminate alt location ID before output.
return True
else: # Alt location was not one to be output.
return False
# end of accept_atom()
#And now we loop for all structures
for s in pdbid:
structure = parser.get_structure('X', s+".pdb")
#This will keep only conformation for each residue
io.set_structure(structure)
io.save(s+"_ready.pdb", select=KeepOneConfOnly())
print("Your PDBs were processed. Alternative side chain conformations removed!")
#Here we set up a parser for our PDB
parser = PDBParser()
io=PDBIO()
for s in pdbid:
structure = parser.get_structure('X', s+"_ready.pdb")
#And here we remove hydrogens, waters and ligands using Dice
io.set_structure(structure)
sel = Dice.ChainSelector('A', 1, 214)
io.save(s+"_clean.txt", sel)
print("Your PDBs were processed. Only the protein heavy atoms have been kept!")
print("Both PDBs have been saved as text files for editing on Google Colab")
print("Remember to edit them before using SMOG2")
```
3. Let's examine our structures in py3Dmol
```
import py3Dmol
#First we assign py3Dmol.view as our viewer
view=py3Dmol.view(800,400)
#Here we set the background color as white
view.setBackgroundColor('white')
#The following lines are used to add the addModel class
#to read the open state structure
view.addModel(open('4ake_clean.txt', 'r').read(),'pdb')
#Here we set the visualization style and color
view.setStyle({'chain':'A'},{'cartoon': {'color':'red'}})
#Here we center the molecule for its visualization
view.zoomTo()
#And we finally visualize the structures using the command below
view.show()
import py3Dmol
#First we assign py3Dmol.view as our viewer
view=py3Dmol.view(800,400)
#Here we set the background color as white
view.setBackgroundColor('white')
#Now we do the same for the closed state structure
view.addModel(open('1ake_clean.txt', 'r').read(),'pdb')
#Here we set the visualization style and color
view.setStyle({'chain':'A'},{'cartoon': {'color':'green'}})
#Here we center the molecule for its visualization
view.zoomTo()
#And we finally visualize the structures using the command below
view.show()
```
4. As we saw in our previous tutorial, these PDB files are saved as .txt files so that we can edit them according to the format requirements of SMOG2. **Fix both files accordingly!**
5. Once this is done, we can process our files in SMOG2 as indicated below:
```
%%bash
source /content/smog-2.4.2/configure.smog2
smog2 -i 4ake_clean.txt -CA -dname 4ake_smog
smog2 -i 1ake_clean.txt -CA -dname 1ake_smog
```
#Part II – Generate a custom dual-basin SBM model using SMOG2
We should now have our SBM models for the open and the closed state of human adenylate kinase. However, if you remember from our lectures and our previous tutorial, these coarse-grained models do not contain water molecules, ligands, etc. Moreover, since we do not have water in our system, the inclusion of a ligand will lead to its drift outside the active site towards infinity (as you can see from the .mdp simulation file, we are not using periodic boundary conditions).
**How are we going to simulate a conformational change in the absence of ligands?**
Instead of thinking of including the ligand, we must think about the **consequences of ligand binding**. In this case, ligand binding leads to the three domains of the protein getting closer to each other, which translates into **several native contacts being formed upon ligand binding** (or unique to the closed conformation).
Briefly, to simulate the conformational change of this enzyme, it is most appropriate to consider that:
- The **open state** is the **initial condition**, since it can exist both in the absence of and at low concentrations of ligand;
- The native interactions that are **unique to the closed state** correspond to **ligand-induced interactions**;
- The **native ligand-induced interactions** that exhibit significant changes in distance (e.g. dist[4AKE]/dist[1AKE] > 1.5, i.e. an increase of more than 50%) are **the only ones to be included in a combined native contact map** (or dual-basin potential).
This is, in fact, shown in the following figure from an article that inspired this tutorial
<figure>
<center>
<img src='https://raw.githubusercontent.com/pb3lab/ibm3202/master/images/smogdual_03.jpg'/>
<figcaption>FIGURE 2. Plot of the contacts unique to the closed form of adenylate kinase. Each point represents a contact between residue <i>i</i> and residue <i>j</i> that is unique to the closed form. The Y-axis is the distance between the C${\alpha}$ atoms of residues <i>i</i> and <i>j</i> in the open form and the X-axis is the distance in the closed form. Contacts above the line of slope 1.5 (solid line) constitute the set of contacts selected for the dual-basin SBM models<br> Whitford PC et al (2007) <i> J Mol Biol 366(5), 1661-1671</i></figcaption></center>
</figure>
1. **How can we first filter the contacts that are unique to the closed conformation?** We will use `grep` over the **.contacts.CG files** you just obtained during the SMOG2 processing for the open and closed states:
```
#Creating a folder for preparing the dual-basin SBM model
dualpath = Path("dualSBM")
#Now, we create the folder using the os.mkdir() command
#The if conditional is just to check whether the folder already exists
#In which case, python returns an error
if os.path.exists(dualpath):
print("path already exists")
if not os.path.exists(dualpath):
os.mkdir(dualpath)
print("path was succesfully created")
#Switching to this new folder
os.chdir(dualpath)
#Using grep to find the lines of the 1AKE contact file (file2) that are absent from the 4AKE one (file1)
#The -v flag inverts the sense of matching, thus printing only the non-matching lines of file2
!grep -Fxvf ../4ake_smog.contacts.CG ../1ake_smog.contacts.CG > uniquecontacts.txt
```
If all goes well, you should obtain a list of 113 contacts in the format `chain_i res_i chain_j res_j`. Now, we have to evaluate if the distance of these contacts is significantly different between the open and closed states.
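A quick sanity check of this list (number of lines and column format) can be done directly from the notebook:
```
#Quick sanity check of the unique-contact list (should report 113 lines)
!wc -l uniquecontacts.txt
!head -5 uniquecontacts.txt
```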
2. To determine the distance difference between these contacts in the open and closed states, we will cheat a little bit. We will first use the `trjconv` module from GROMACS to generate a **coarse-grained PDB file** of both states based in our coarse-grained SBM .gro file. Then, we will use SMOG2, along with these coarse-grained PDB files and our list of unique contacts for the closed structure. With this small trick, we will obtain the **LJ parameters (and therefore the distances!)** of the ligand-induced contacts both in the open (4AKE) and closed (1AKE) states.
```
#Generating coarse-grained PDB for 4AKE and 1AKE using GROMACS
%%bash
source /content/gromacs_sbm/bin/GMXRC
gmx editconf -f ../4ake_smog.gro -o 4ake_CA.pdb
gmx editconf -f ../1ake_smog.gro -o 1ake_CA.pdb
#Edit your file to comply with SMOG2 parameters
grep ATOM 4ake_CA.pdb > 4ake_CA_clean.pdb
echo "END" >> 4ake_CA_clean.pdb
grep ATOM 1ake_CA.pdb > 1ake_CA_clean.pdb
echo "END" >> 1ake_CA_clean.pdb
%%bash
source /content/smog-2.4.2/configure.smog2
smog2 -i 4ake_CA_clean.pdb -c uniquecontacts.txt -t /content/smog-2.4.2/share/templates/SBM_calpha -dname 4ake_unique
smog2 -i 1ake_CA_clean.pdb -c uniquecontacts.txt -t /content/smog-2.4.2/share/templates/SBM_calpha -dname 1ake_unique
```
If we remember from our previous tutorial, the `[ pairs ]` section of the .top file contains the native contacts and their parameters.
<figure>
<center>
<img src='https://raw.githubusercontent.com/pb3lab/ibm3202/master/images/smogdual_05.png'/>
</center>
</figure>
Since we requested the same user-defined contact map for both files in the previous step, we can get the difference in distance between each contact by **dividing column 4 (or 5) from one file by the column 4 (or 5) from the other file**.
3. We will now create two text files containing only the `[ pairs ]` section of these newly generated .top files. Then, we will use `awk` over these files to determine which interactions significantly change between these states:
```
#Creating two files only containing the pairs sections using awk
#And eliminating first and last printed lines using sed
!awk 'NR==1, $2 == "pairs" {next}; NF {print $0}; $2 == "exclusions" {exit}' 1ake_unique.top | sed '1d;$d' > closed.pairs
!awk 'NR==1, $2 == "pairs" {next}; NF {print $0}; $2 == "exclusions" {exit}' 4ake_unique.top | sed '1d;$d' > open.pairs
!paste open.pairs closed.pairs | awk '{if(($4/$9)>1.5)print $1, $2, $3, $9, $10}' > Qligand.pairs
```
After this, we will obtain a text file with **83 contacts** that are unique to the closed state for which a significant change in distance (>50%) occurs upon reaching the unbound, open conformation.
4. We will also use `awk` to generate the `[ exclusions ]` list for these ligand-induced native contacts.
```
!paste open.pairs closed.pairs | awk '{if(($4/$9)>1.5)print $1, $2}' > Qligand.exclusions
```
Now we have everything we need to generate a dual-basin model for simulating the conformational change of adenylate kinase: coordinate and parameter files and ligand-induced native contacts.
5. We will take one of the .top files for one of the states of human adenylate kinase and manually add the ligand-induced pairs and exclusions that we just obtained. Given our previous assumption on the conformational change of this protein, **the most reasonable strategy is to use the .top file from the open (4AKE) structure for its modification.**
```
#Copy the .top file for 4AKE into this folder and modify it!
!cp ../4ake_smog.top dualAKE.top
```
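If you prefer to make this modification programmatically rather than by hand, the sketch below appends the ligand-induced terms to the corresponding sections. It assumes the standard layout of SMOG2 .top files, in which each `[ section ]` block ends at the next bracketed header, and that `Qligand.pairs` and `Qligand.exclusions` (generated above) already use the same column format as the `[ pairs ]` and `[ exclusions ]` sections; inspect `dualAKE.top` afterwards to confirm the result.
```
#Sketch: append the ligand-induced pairs and exclusions to dualAKE.top.
#This assumes each "[ section ]" block of the SMOG2 topology ends at the next
#bracketed header and that the Qligand files match the section column format.
def append_to_section(lines, section, extra):
    out, inside = [], False
    for line in lines:
        tag = line.strip()
        if inside and tag.startswith("["):
            out.extend(extra)   #dump the new lines before the next section starts
            inside = False
        out.append(line)
        if tag.startswith("[") and section in tag:
            inside = True
    if inside:   #the section was the last one in the file
        out.extend(extra)
    return out

with open("dualAKE.top") as f:
    top = f.readlines()
with open("Qligand.pairs") as f:
    qpairs = [l if l.endswith("\n") else l + "\n" for l in f if l.strip()]
with open("Qligand.exclusions") as f:
    qexcl = [l if l.endswith("\n") else l + "\n" for l in f if l.strip()]

top = append_to_section(top, "pairs", qpairs)
top = append_to_section(top, "exclusions", qexcl)

with open("dualAKE.top", "w") as f:
    f.writelines(top)
print("Added", len(qpairs), "pairs and", len(qexcl), "exclusions to dualAKE.top")
```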
#Part III – Run and analyze our dual-basin SBM simulations
Now, we are ready to perform our simulations of the ligand-induced conformational change of adenylate kinase using these dual-basin SBM models.
1. We will start by creating a new folder for preparing and running our MD simulations, in which we will copy our SBM coordinate and topology file.
```
#Defining a new folder for the MD simulations
mdpath = Path("/content/md_dualAKE/")
#Now, we create the folder using the os.mkdir() command
#The if conditional is just to check whether the folder already exists
#In which case, python returns an error
if os.path.exists(mdpath):
print("path already exists")
if not os.path.exists(mdpath):
os.mkdir(mdpath)
print("path was succesfully created")
#Changing to our newly created directory and copying the .gro and .top files
os.chdir(mdpath)
from shutil import copyfile
copyfile(smogpath/'4ake_smog.gro', mdpath/'dualAKE.gro')
copyfile(smogpath/dualpath/'dualAKE.top', mdpath/'dualAKE.top')
```
2. Then, we will download the same **MD instruction file** that we used in our previous tutorial (**mdrun_CA_v5.mdp**), changing the simulation temperature to 108 and the number of steps to 5000000. We will also download the Perl script to generate our LJ 12-10 tabulated potentials.
```
%%bash
wget https://github.com/pb3lab/ibm3202/raw/master/files/mdrun_CA_v5.mdp
wget https://github.com/pb3lab/ibm3202/raw/master/files/maketable4.pl
perl maketable4.pl > table.xvg
```
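If you would rather make these two changes from the notebook instead of editing the file by hand, something like the cell below could work. `ref_t`, `gen_temp` and `nsteps` are standard GROMACS .mdp options, but whether and how they appear in **mdrun_CA_v5.mdp** is an assumption here, so inspect the file and adjust the patterns if needed:
```
#Sketch: set the temperature and number of steps in the .mdp file.
#The option names below are standard GROMACS parameters, but check that they
#match the ones actually present in mdrun_CA_v5.mdp before relying on this cell.
!sed -i 's/^ *ref_t.*/ref_t = 108/' mdrun_CA_v5.mdp
!sed -i 's/^ *gen_temp.*/gen_temp = 108/' mdrun_CA_v5.mdp
!sed -i 's/^ *nsteps.*/nsteps = 5000000/' mdrun_CA_v5.mdp
!grep -E "ref_t|gen_temp|nsteps" mdrun_CA_v5.mdp
```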
3. Lastly, we will prepare our **.tpr portable binary run input file for GROMACS** in this folder and run our simulation! Please note how we instruct GROMACS to use our custom table of LJ 12-10 tabulated potentials.
This simulation takes ~15 min.
```
#Preparing our binary run input file
%%bash
source /content/gromacs_sbm/bin/GMXRC
gmx grompp -f mdrun_CA_v5.mdp -c dualAKE.gro -p dualAKE.top -o run.tpr
#Running our simulation
%%time
%%bash
source /content/gromacs_sbm/bin/GMXRC
gmx mdrun -s run.tpr -table table.xvg -tablep table.xvg -nt 2 -noddcheck
```
4. Once our simulation is done, we can analyze whether the conformational change is observed in our trajectory file. For simplicity, we will first use the `rms` module along with the initial, open structure, as evidence of this change.
```
%%bash
source /content/gromacs_sbm/bin/GMXRC
#Commands for RMSD
echo "0" > options
echo " " >> options
echo "0" >> options
echo " " >> options
#RMSD calculation
gmx rms -s dualAKE.gro -f traj_comp.xtc -xvg none < options
import matplotlib.pyplot as plt
import numpy as np
data = np.loadtxt('rmsd.xvg')
plt.title('Structural fluctuations of the system')
plt.xlabel('Time (tau)')
plt.ylabel('rmsd (nm)')
plt.plot(data[:,0], data[:,1], linestyle='solid', linewidth='2', color='red')
plt.show()
```
5. A better metric for looking at the conformational change is to directly determine whether the ligand-induced native contacts are formed. For this, we will use the `g_kuh` module again, along with the list of contacts that are unique to the closed structure and the closed structure as reference for the native distances. Please check how we are generating the contact map file required for this analysis:
```
#Generating the contact map file for g_kuh
f = open(mdpath/"Qligand.ndx", "a")
f.write("[ Qligand ]\n")
with open(smogpath/dualpath/"Qligand.exclusions") as infile:
for line in infile:
if line.strip():
cols = line.split()
f.write(cols[0] + "\t" + cols[1] + "\n")
f.close()
#Copying the reference structure of the closed state for determining native distances
copyfile(smogpath/'1ake_smog.gro', mdpath/'1ake_smog.gro')
#Analyzing the formation of ligand-induced contacts in our trajectory
%%bash
source /content/gromacs_sbm/bin/GMXRC
g_kuh -s 1ake_smog.gro -f traj_comp.xtc -n Qligand.ndx -noabscut -noshortcut -cut 0.2
```
6. Let's plot our results and see what happened during our simulation! We will plot the change in the ligand-induced native contacts (Q); you can check the change in RMSD yourself.
```
!paste rmsd.xvg qvals.out > data.txt
import matplotlib.pyplot as plt
import numpy as np
data = np.loadtxt('data.txt')
plt.title('Structural fluctuations of the system')
plt.xlabel('Time (tau)')
plt.ylabel('Q')
plt.plot(data[:,0], data[:,2], linestyle='solid', linewidth='2', color='red')
plt.show()
```
7. Finally, we will visualize our simulation. For this, we will use the `trjconv` module to extract only the protein from our system and convert our trajectory into a PDB file, and then download this new PDB file and load it onto [**NGLviewer**](http://nglviewer.org/ngl/) as a **trajectory** PDB file.
```
%%bash
source /content/gromacs_sbm/bin/GMXRC
#This is a trick to provide interactive options to gmx
echo "Protein" > options
echo "Protein" >> options
echo " "
gmx trjconv -s run.tpr -f traj_comp.xtc -o traj.pdb < options
#Downloading the trajectory PDB file
from google.colab import files
files.download("/content/md_dualAKE/traj.pdb")
```
**And this is the end of the tenth tutorial!** Good science!
# Crime Analysis in New York City using Spark DataFrame Example
__Learning Objective:__
* [Load the data and get quick sense of data](#Load-the-data-and-get-quick-sense-of-data)
* [Exploring and analyzing data with DataFrames](#Exploring-and-analyzing-data-with-DataFrames)
- Transformations and actions on DataFrames
- Aggregation, grouping, sampling, ordering data with DataFrame
- Working with Joins in DataFrames + Using Broadcast Variables and Accumulators with DataFrame
## Load the data and get quick sense of data
__Read the compressed file content using Python's lzma library__
```
filePath="../02/NYPD_7_Major_Felony_Incidents.xz"
## For Linux "file:///Users/tirthalp/something/gs-spark-python/notebooks/02/NYPD_7_Major_Felony_Incidents.xz"
## For Windows "C:\\Users\\tirthalp\\something\\gs-spark-python\\notebooks\\02\\NYPD_7_Major_Felony_Incidents.xz"
import lzma
with lzma.open(filePath, 'rt') as f:
file_content = list(f)
print("Type of file_content variable = ", type(file_content))
print(file_content[0])
print(file_content[1])
```
__Convert Python list into Spark's DataFrame__
```
from pyspark.sql import SparkSession
spark = SparkSession.builder\
.appName("Crime Analysis in New York City using Spark DataFrame")\
.getOrCreate()
### ---> Parse header and unstructured data, and prepare RDD of Crime objects
data = spark.sparkContext.parallelize(file_content)
# Clean '\n' at the end of each line
data = data.map(lambda x:x.replace("\n",""))
# Get the header row
header = data.first()
# Filter the header row
dataWoHeader = data.filter(lambda x: x!=header)
# How to transform records of string to named tuples / Parse the rows to extract fields?
import csv
from io import StringIO
from collections import namedtuple
fields = header.replace(" ","_").replace("/","_").split(",")
Crime = namedtuple('Crime', fields)  # note: the old 'verbose' argument is not supported on Python 3.7+
def parse(row):
reader = csv.reader(StringIO(row))
row = next(reader)
return Crime(*row)
# Transform String to Crime object
crimesRdd=dataWoHeader.map(parse)
type(crimesRdd)
### ---> RDD to DataFrame
crimesDf = crimesRdd.toDF()
type(crimesDf)
```
## Exploring and analyzing data with DataFrames
### Do data exploration and simple transformations
```
# What's the schema?
crimesDf.printSchema()
# Explore data by taking subset of DataFrame
crimesDf.limit(3).show()
# Use dropna to drop rows which have nas (i.e values that are not available)
# crimesDf.dropna()
# When doing huge analysis, for performance improvement, drop columns which are not required for the purpose of analysis
crimesDf = crimesDf.drop("Occurrence_Date", "Day_of_Week", "Occurrence_Month", "Occurrence_Day", "Occurrence_Hour", "CompStat_Month", "CompStat_Day", "Offense_Classification", "Sector", "XCoordinate", "YCoordinate", "Location_1")
crimesDf.cache()
crimesDf.show(3)
# How to see unique values on particular column?
crimesDf.select('Offense').distinct().show()
# How to filter records with "NA" Offense type?
crimesDf = crimesDf.filter(crimesDf['offense'] != "NA").distinct()
crimesDf.select('Offense').distinct().show(10, False)
# Total records?
crimesDf.count()
```
### Sampling Data
```
# How to see certain fraction of data for the particular types of offense using sample(fraction=n)?
crimesDf.filter(crimesDf['offense'].isin(["BURGLARY", "ROBBERY"])).sample(fraction=0.1).limit(5).show()
```
### Grouping, aggregation and ordering data
##### Year-on-Year Offenses growth pattern?
```
from pyspark.sql.functions import desc
crimesDf.groupBy("Occurrence_Year").count().withColumnRenamed("count", "offenses").orderBy(desc("Occurrence_Year")).show(50)
# The sudden increase in the number of offenses from year 2006 onwards indicates an anomaly in the earlier records
# So we filter out records with an occurrence year of 2005 or earlier
crimesDf = crimesDf.filter(crimesDf["Occurrence_Year"].isin(["2006", "2007", "2008", "2009", "2010", "2011", "2012", "2013", "2014", "2015"]))
crimesDf.count()
```
##### Total number of offense by type? Which is highest number of offense?
```
crimesDf.groupBy('Offense').count().orderBy(desc('count')).show(10, False)
```
##### Sum of precincts by Offense?
```
# agg({"Precinct":"sum"}) = perform sum aggregation on 'Precinct' column using built-in agg function of Spark DF
# withColumnRenamed = returns a new DataFrame by renaming an existing column
crimesDf_offense_precinct_sum = crimesDf.filter(crimesDf["Precinct"] != "NA")\
.groupBy("Offense")\
.agg({"Precinct":"sum"})\
.withColumnRenamed("sum(Precinct)","Precincts")
crimesDf_offense_precinct_sum.show()
# How to cast scientific notation in above (e.g. 2.3742835E7 value in Precincts column) to number?
# withColumn = returns a new DataFrame by adding a column or replacing the existing column that has the same name
from pyspark.sql.types import DecimalType
crimesDf_offense_precinct_sum = crimesDf_offense_precinct_sum\
.withColumn('Precincts', crimesDf_offense_precinct_sum.Precincts.cast(DecimalType(18, 0)))\
.orderBy("Offense", desc("Precincts"))
crimesDf_offense_precinct_sum.show(10, False)
```
##### Top 3 Offense based on respective total incident occurrences and % incident?
```
# Incidents by Offense?
crimesDf_offense_incidents = crimesDf.groupBy("Offense").count().withColumnRenamed("count","Incidents")
crimesDf_offense_incidents.show()
# Total number of incidents across all Offenses?
crimesDf_offense_total = crimesDf_offense_incidents.agg({"Incidents": "sum"})
crimesDf_offense_total.show()
total_offense_incidents = crimesDf_offense_total.collect()[0][0]
total_offense_incidents
from pyspark.sql.functions import round
# Adding new column for % of Incident calculation
crimesDf_offense_percentage_incident = crimesDf_offense_incidents.withColumn(
"% Incident",
round(crimesDf_offense_incidents.Incidents / total_offense_incidents * 100, 2))
crimesDf_offense_percentage_incident.printSchema()
# Order by % Incident column to get top 3 offense with highest incidents
crimesDf_offense_percentage_incident.orderBy(crimesDf_offense_percentage_incident[2].desc()).limit(3).show()
```
##### More Aggregation functions
```
# Multiple aggregation operations can be applied
crimesDf.agg({"Precinct" : "sum", "Occurrence_Year" : "max"}).show()
# Usage of describe
crimesDf.select("Precinct").filter(crimesDf["Precinct"] != "NA").describe().show()
```
##### How to view information in matrix form using crosstab function?
```
# Total occurrences of each Offense per Borough?
crimesDf.filter(crimesDf["Borough"] != "")\
.filter(crimesDf["Borough"] != "(null)")\
.crosstab("Borough", "Offense")\
.select("Borough_Offense", "FELONY ASSAULT", "ROBBERY", "RAPE", "GRAND LARCENY", "BURGLARY")\
.show()
```
### Working with Joins in DataFrames + Using Broadcast Variables and Accumulators with DataFrame
##### Broadcast Variables and Accumulators concepts in Spark
* Spark is written in Scala and relies heavily on __closures__. (_A Scala closure is a function whose return value depends on one or more free variables: variables that are defined outside the function and are not passed in as parameters. The free variable is what distinguishes a closure from an ordinary function._) A closure also retains its own copies of those variables, even after the outer scope ceases to exist.
* In the Spark 2.x architecture, the tasks that run on individual workers are closures, so every task carries its own copy of the variables it works on. That means many copies are shipped from the driver to the worker nodes for every task, and any shuffling adds a further cost. To avoid this, Spark provides variables that are shared across the tasks on each worker node: (1) Broadcast variables, (2) Accumulators.
* __Broadcast Variables__:
- Shared, read-only variables
- One copy per node (Not one copy per task)
- Distributed efficiently by Spark
- All nodes in cluster distribute (i.e. peer-to-peer copying too)
- No shuffling
- Will be cached in-memory on each node, so can be large, but not too large
- Use whenever tasks across stages need same data; Share dataset with all nodes like training data in ML, static lookup tables
* __Accumulators__:
- Read-write shared variables
    - Added commutatively { e.g. A + B = B + A } and associatively { e.g. A + (B + C) = (A + B) + C }
- Spark native support for accumulators of type Long, Double and Collections; which can be extended by subclassing AccumulatorV2
- Workers can only modify state
- Only the driver program can read state
    - Use for counters or sums (a short sketch of both shared-variable types follows below)
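To make these two shared-variable types concrete, here is a minimal, self-contained sketch. It is independent of the crime dataset used below; the `lookup` table and the state codes are made up purely for illustration.
```
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("shared-variables-demo").getOrCreate()
sc = spark.sparkContext

# Broadcast variable: a small, read-only lookup table shipped once per node
lookup = {"NY": "New York", "NJ": "New Jersey"}
bc_lookup = sc.broadcast(lookup)

# Accumulator: workers can only add to it; only the driver reads its value
unknown_codes = sc.accumulator(0)

def resolve(code):
    # Tasks read the broadcast value and update the accumulator
    name = bc_lookup.value.get(code)
    if name is None:
        unknown_codes.add(1)
    return name

rdd = sc.parallelize(["NY", "NJ", "XX", "NY"])
print(rdd.map(resolve).collect())   # ['New York', 'New Jersey', None, 'New York']
print(unknown_codes.value)          # 1 -- read on the driver after the action has run
```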
##### Load another data set and get quick sense of the data
```
# Dataset 1: Offenses
offensesDf = spark.read.format("csv").option("header", "true").load("Offense.csv");
type(offensesDf)
offensesDf.printSchema()
offensesDf.show(10, False)
# Drop unwanted columns
crimesDf = crimesDf.drop("OBJECTID", "Identifier", "CompStat_Year", "Precinct", "Jurisdiction")
crimesDf.printSchema()
crimesDf.show(5, False)
offensesDf.count(), crimesDf.count()
```
##### Join two datasets - Inner join
```
crime_details_temp = crimesDf.join(offensesDf, crimesDf.Offense == offensesDf.Offense)
crime_details_temp.columns
# Note: the Offense column appears twice, because one copy comes from crimesDf and the other from offensesDf
crime_details_temp.count()
crime_details_temp.limit(3).show()
# Another syntax of joining two datasets, in which just providing column name to apply inner join
# use "how" to apply join type = inner | left | right | full
crimesDf.join(offensesDf, ["Offense"], how="inner").show(3)
```
##### Joins using Broadcast variables
Joins on huge datasets are expensive operations. In such cases, it is recommended to broadcast one of the DataFrames so that its data is shared across tasks instead of being shuffled.
Always broadcast the smaller DataFrame to all nodes. For example, offensesDf is a better broadcasting candidate than crimesDf.
```
from pyspark.sql.functions import broadcast
crime_details = crimesDf.select("Offense", "Occurrence_Year")\
.join(broadcast(offensesDf), ["Offense"], "inner")
crime_details.count()
crime_details.sort(crime_details.Offense.desc()).show(5)
```
##### Sum using Accumulators
```
################# Approach 1: Punishment Penalty sum by Severity using Accumulator
# declare accumulator
low_severity_penalty_sum = spark.sparkContext.accumulator(0.0)
medium_severity_penalty_sum = spark.sparkContext.accumulator(0.0)
high_severity_penalty_sum = spark.sparkContext.accumulator(0.0)
def cal_penalty_by_severity(row):
severity = row.Punishment_Severity
penalty = float(row.Penalty_USD)
# write / add to accumulator
if(severity == "HIGH"):
high_severity_penalty_sum.add(penalty)
elif(severity == "MEDIUM"):
medium_severity_penalty_sum.add(penalty)
elif(severity == "LOW"):
low_severity_penalty_sum.add(penalty)
crime_details.foreach(lambda x: cal_penalty_by_severity(x))
# read accumulators
all_penalty_sum_by_severity = (low_severity_penalty_sum.value, medium_severity_penalty_sum.value, high_severity_penalty_sum.value)
all_penalty_sum_by_severity
################# Approach 2: Punishment Penalty sum by Severity using inbuilt agg function of Spark DataFrame
all_penalty_sum = crime_details.groupby("Punishment_Severity")\
.agg({"Penalty_USD": "sum"})\
.withColumnRenamed("sum(Penalty_USD)","Total_Penalty")
all_penalty_sum.withColumn('Total_Penalty', all_penalty_sum.Total_Penalty.cast(DecimalType(18, 1)))\
.orderBy(desc("Total_Penalty"))\
.show()
```
# Conventional dictionary-based approach
## Environment setup
```
import pandas as pd
import numpy as np
import jieba
# Define a function that removes repeated prefixes (leading duplicated substrings)
def qc_string_forward(s):
    filelist = s
    filelist2 = []
    for a_string in filelist:
        temp1 = a_string.strip('\n')
        temp2 = temp1.lstrip('\ufeff')
        temp3 = temp2.strip('\r')
        char_list = list(temp3) # converting the string to a list splits it into single characters
        #print(char_list)
        list1 = []
        list1.append(char_list[0])
        list2 = ['']
        # indices scheduled for deletion
        del1 = []
        i = 0
        while (i<len(char_list)):
            i = i+1
            # when the end of the string is reached, check list1 against list2 for a repeat one last time
            if i == len(char_list):
                if list1 == list2:
                    m = len(list2)
                    for x in range(i-m,i):
                        del1.append(x)
            else:
                if char_list[i] == list1[0] and list2==['']:
                    #print('char equals the head of list1 and list2 is empty: put the char into list2')
                    list2[0]=char_list[i] # initializing with append here would leave list2 as ['','**']
                elif char_list[i] != list1[0] and list2==['']:
                    #print('char differs from list1 and list2 is empty: append the char to list1')
                    list1.append(char_list[i])
                # trigger the repeat check
                elif char_list[i] != list1[0] and list2 !=['']:
                    if list1 == list2 and len(list2)>=2:
                        #print('char differs from list1, list2 not empty: list1 and list2 repeat')
                        m = len(list2)
                        # delete the contents of list2; the original contents of list1 need no further repeat check
                        for x in range(i-m,i):
                            del1.append(x)
                        list1= ['']
                        list2 = ['']
                        list1[0]=char_list[i]
                    else:
                        #print('char differs from list1, list2 not empty: no repeat')
                        list2.append(char_list[i])
                # trigger the repeat check
                elif char_list[i] == list1[0] and list2 != ['']:
                    if list1 == list2:
                        #print('char equals the head of list1, list2 not empty: list1 and list2 repeat')
                        m = len(list2)
                        # delete the contents of list2; list1 still has to be compared with the following characters
                        for x in range(i-m,i):
                            del1.append(x)
                        list2 = ['']
                        list2[0]=char_list[i]
                    else:
                        #print('char equals the head of list1, list2 not empty: no repeat')
                        # modified from the textbook logic: the book clears both list1 and list2 (keeping their contents);
                        # here only list1 is kept, and the contents of list2 still have to be compared
                        list1 = list2
                        list2 = ['']
                        list2[0]=char_list[i]
        a = sorted(del1) # delete from the largest index first, so earlier deletions do not shift the remaining indices
        t = len(a) - 1
        while(t>=0):
            del char_list[a[t]]
            t = t-1
        str1 = ''.join(char_list)
        str2 = str1.strip()
        filelist2.append(str2)
    return filelist2
# Define a function that removes repeated suffixes (the string is reversed, processed like above, then reversed back)
def qc_string_backward(s):
    filelist = s
    filelist2 = []
    for a_string in filelist:
        temp1 = a_string.strip('\n')
        temp2 = temp1.lstrip('\ufeff')
        temp3 = temp2.strip('\r')
        temp3=temp3[::-1] # reverse the string so the suffix can be handled as a prefix
        char_list = list(temp3) # converting the string to a list splits it into single characters
        #print(char_list)
        list1 = []
        list1.append(char_list[0])
        list2 = ['']
        # indices scheduled for deletion
        del1 = []
        i = 0
        while (i<len(char_list)):
            i = i+1
            # when the end of the string is reached, check list1 against list2 for a repeat one last time
            if i == len(char_list):
                if list1 == list2:
                    m = len(list2)
                    for x in range(i-m,i):
                        del1.append(x)
            else:
                if char_list[i] == list1[0] and list2==['']:
                    #print('char equals the head of list1 and list2 is empty: put the char into list2')
                    list2[0]=char_list[i] # initializing with append here would leave list2 as ['','**']
                elif char_list[i] != list1[0] and list2==['']:
                    #print('char differs from list1 and list2 is empty: append the char to list1')
                    list1.append(char_list[i])
                # trigger the repeat check
                elif char_list[i] != list1[0] and list2 !=['']:
                    if list1 == list2 and len(list2)>=2:
                        #print('char differs from list1, list2 not empty: list1 and list2 repeat')
                        m = len(list2)
                        # delete the contents of list2; the original contents of list1 need no further repeat check
                        for x in range(i-m,i):
                            del1.append(x)
                        list1= ['']
                        list2 = ['']
                        list1[0]=char_list[i]
                    else:
                        #print('char differs from list1, list2 not empty: no repeat')
                        list2.append(char_list[i])
                # trigger the repeat check
                elif char_list[i] == list1[0] and list2 != ['']:
                    if list1 == list2:
                        #print('char equals the head of list1, list2 not empty: list1 and list2 repeat')
                        m = len(list2)
                        # delete the contents of list2; list1 still has to be compared with the following characters
                        for x in range(i-m,i):
                            del1.append(x)
                        list2 = ['']
                        list2[0]=char_list[i]
                    else:
                        #print('char equals the head of list1, list2 not empty: no repeat')
                        # modified from the textbook logic: the book clears both list1 and list2 (keeping their contents);
                        # here only list1 is kept, and the contents of list2 still have to be compared
                        list1 = list2
                        list2 = ['']
                        list2[0]=char_list[i]
        a = sorted(del1) # delete from the largest index first, so earlier deletions do not shift the remaining indices
        t = len(a) - 1
        while(t>=0):
            del char_list[a[t]]
            t = t-1
        str1 = ''.join(char_list)
        str2 = str1.strip()
        str2=str2[::-1] # reverse back to the original orientation
        filelist2.append(str2)
    return filelist2
```
## Data loading
```
data_T = pd.read_csv('../data/train.csv',usecols=['text','label'])
data_T.shape
data_T.head()
```
## Removing meaningless comments
## Removing mechanically repeated words
```
### Even though the stock forum does not reward comments with points, this still needs handling so that a few extreme comments do not bias the whole corpus
list_baier = data_T.text.values.tolist()
# remove repeated prefixes
res1=qc_string_forward(list_baier)
# remove repeated suffixes
res2=qc_string_backward(res1)
```
## Removing duplicate comments
```
res3=[]
for i in res2:
if i not in res3:
res3.append(i)
len(res3)
res3
```
## Word segmentation
```
res4 = []
for i in res3:
res4.append(' '.join(jieba.cut(i)))
res4
```
## Merging and deduplicating the sentiment dictionaries
```
fname = './negative.txt'
fh = open(fname,encoding='utf-8')
l = list()
for line in fh:
line = line.rstrip()
l = l + list(line.split())
s = list(set(l))
for line in s:
print(line)
len(s) #### originally 1W35 entries; only 1W2 left after deduplication (nearly 1k fewer)
fname = './positive.txt'
fh = open(fname,encoding='utf-8')
l = list()
for line in fh:
line = line.rstrip()
l = l + list(line.split())
s = list(set(l))
for line in s:
print(line)
len(s) #### originally 1W35 entries; only 1W2 left after deduplication (nearly 1k fewer)
```
## Computing sentiment
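The original notebook leaves this section empty. A minimal sketch of how the merged positive/negative word lists could be used to score the segmented comments might look like the following; the +1/-1 counting rule and the `load_words` helper are illustrative assumptions, not the notebook's own method (only `res4` and the two txt paths come from the cells above).
```
# Illustrative sketch only -- assumes the deduplicated positive.txt / negative.txt built above
def load_words(path):
    with open(path, encoding='utf-8') as fh:
        return set(w for line in fh for w in line.split())

pos_words = load_words('./positive.txt')
neg_words = load_words('./negative.txt')

def dict_sentiment(segmented_comment):
    # res4 stores space-joined jieba tokens, so split on whitespace
    tokens = segmented_comment.split()
    return sum(t in pos_words for t in tokens) - sum(t in neg_words for t in tokens)

# Example usage: a positive score suggests positive sentiment, a negative score the opposite
# scores = [dict_sentiment(c) for c in res4]
```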
# Conventional machine-learning approach
## Environment setup
## Data loading
## Removing meaningless comments
## Removing mechanically repeated words
## Word segmentation
## Generating word vectors
## Model training
## Obtaining sentiment
# Dictionary-based SnowNLP approach
## Environment setup
```
from snownlp import sentiment
from snownlp import SnowNLP
```
## Data loading
```
data_T = pd.read_csv('../data/train.csv',usecols=['text','label'])
data_T.head()
```
## Model training
```
sentiment.train(
    './neg.txt', ### the train and val datasets, split into two classes, are used as the corpus
'./pos.txt'
)
sentiment.save('./sentiment.marshal.2')
```
## Model evaluation - two classes (positive/negative), threshold 0.6
```
data_V = pd.read_csv('../data/test.csv',usecols=['text','label'])
predict = []
for review in data_V.text:
pos = SnowNLP(review).sentiments
thres = 0.6
if pos > thres:
predict.append(1)
else:
predict.append(0)
label = [{'positive':1,'negative':0}[x] for x in data_V.label]
pre = pd.Series(predict)
(pre==label).sum()/len(label)
```
## Model evaluation - three classes (neutral/negative/positive), thresholds 0.3 and 0.7
```
data_V = pd.read_csv('../data/test.csv',usecols=['text','label'])
data_V.shape[0] ### original size
from snownlp import SnowNLP
predict = []
for review in data_V.text:
pos = SnowNLP(review).sentiments
thres = 0.49
if pos > 0.5+thres:
predict.append(1)
elif pos < 0.5-thres:
predict.append(0)
else:
predict.append(-1)
label = pd.Series([{'positive':1,'negative':0}[x] for x in data_V.label])
pre = pd.Series(predict)
(pre==label).sum()/len(label) ### this way of computing accuracy is biased and underestimates performance
pre1 = pre[pre>=0] ### keep only the samples predicted as positive or negative and evaluate again
label1 = label[pre>=0]
(pre1==label1).sum()/len(label1) #### thresholds of 0.99 and 0.01 were chosen by manually tuning thres
len(label1) # size after filtering: reduced from 900 to 300, nearly 2/3 dropped
```
# Loading the FinBERT model
```
import sys
sys.path.append('..')
sys.path
from finbert.finbert import Finbert
dict1 = {'negative':0,'positive':1}
fb = Finbert(label_map = dict1)
import pandas as pd
raw_data = pd.read_csv('../data/corpus/train.csv',usecols=['text','label'],encoding='utf-8')
X = raw_data['text']
y = raw_data['label']
X.shape,y.shape
fb.fit(X,y,'balanced')
fb.score(X,y)
fb.predict_proba(X)
print('epoch {} train_loss_avg_batchs: {}'.format(1,2))
```
<img src="./figuras/cropped-Logos-diplomatura-hd-2.png" width="750"/>
# Community detection
In this notebook we walk through several community detection algorithms and some of the metrics used to evaluate them.
We will study one of the most famous graphs: Zachary's Karate Club network.
#### <u> References </u>
- [Fortunato (2008)](https://arxiv.org/abs/0906.0612). Review of community detection
- [Newman (2018)](https://www.amazon.com/Networks-Introduction-Mark-Newman/dp/0199206651). Complex networks textbook
- [Menczer (2019)](https://www.amazon.com/First-Course-Network-Science/dp/1108471137). Complex networks textbook with Python code
```
import numpy as np
import pandas as pd
from pprint import pprint
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
sns.set_context('talk')
def plot_graph(g, comms, ax=None):
if not ax:
fig, ax = plt.subplots(figsize=(12,12))
colors = np.zeros(g.number_of_nodes(), dtype='int')
for comm, node in enumerate(comms):
colors[node] = comm
pos = nx.spring_layout(g, seed=22)
nx.draw(
g, with_labels=True, pos=pos, node_color=colors, cmap='Set2',
ax=ax
)
return ax
def plot_graph_by_parameters(comms_dict, parameter_name):
ncols = len(comms_dict)
fig, axes = plt.subplots(figsize=(8*ncols, 10), ncols=ncols)
for i, (param, comms) in enumerate(comms_dict.items()):
ax = axes[i]
comms = comms_dict[param]
Q = evaluation.newman_girvan_modularity(g, comms).score
ax.set_title(f'{parameter_name}: {param:.3f}\nQ = {Q:.3f}')
plot_graph(g, comms.communities, ax=ax)
fig.suptitle(f'Algoritmo de {comms.method_name}', fontsize=28)
return fig, axes
def create_contingency_table(comms1, comms2):
"""
Compute contingency table between partition comms1 and
partition comms2.
The contingency table is defined by
n_{kk'} = |C_k \cap C'_{k'}|.
"""
table = np.zeros((len(comms1), len(comms2)))
for i, c1 in enumerate(comms1):
for j, c2 in enumerate(comms2):
table[i,j] = len(set(c1).intersection(c2))
return table
def get_all_node_properties(g):
"""
Return a set of all node properties of the graph.
"""
node_props = set()
for v in g.nodes():
node_props = node_props.union(set(g.nodes[v].keys()))
return node_props
def get_membership(comms):
"""
    Return a vector of N elements, where the i-th element
    is the label of the community that node i belongs to.
"""
if isinstance(comms, NodeClustering):
comms = comms.communities
N = sum(len(c) for c in comms)
membership = np.zeros(N, dtype='int')
for c, nodes in enumerate(comms):
membership[nodes] = c
return membership
def get_comparison_matrix(comms1, comms2):
"""
    Build the comparison matrix between two partitions.
    The comparison matrix is a 2x2 matrix whose elements
    are defined as follows:
    M_{11}: # node pairs in the same community in both 'comms1' and 'comms2'
    M_{00}: # node pairs in different communities in both 'comms1' and 'comms2'
    M_{10}: # node pairs in the same community in 'comms1' but not in 'comms2'
    M_{01}: # node pairs in the same community in 'comms2' but not in 'comms1'
"""
if isinstance(comms1, NodeClustering):
comms1 = comms1.communities
if isinstance(comms2, NodeClustering):
comms2 = comms2.communities
membership1 = get_membership(comms1)
membership2 = get_membership(comms2)
N = len(membership1)
M = np.zeros((2,2), dtype='int')
for i in range(N):
for j in range(i+1, N):
if membership1[i] == membership1[j]:
if membership2[i] == membership2[j]:
M[1,1] += 1
else:
M[1,0] += 1
else:
if membership2[i] == membership2[j]:
M[0,1] += 1
else:
M[0,0] += 1
return M
def rand_index(comms1, comms2):
if isinstance(comms1, NodeClustering):
comms1 = comms1.communities
if isinstance(comms2, NodeClustering):
comms2 = comms2.communities
M = get_comparison_matrix(comms1, comms2)
return (M[0,0] + M[1,1]) / M.sum()
def jaccard_index(comms1, comms2):
if isinstance(comms1, NodeClustering):
comms1 = comms1.communities
if isinstance(comms2, NodeClustering):
comms2 = comms2.communities
M = get_comparison_matrix(comms1, comms2)
    return M[1,1] / (M[1,1] + M[1,0] + M[0,1])  # m11 / (m11 + m10 + m01), matching the Jaccard index definition below
def mutual_information(comms1, comms2):
if isinstance(comms1, NodeClustering):
comms1 = comms1.communities
if isinstance(comms2, NodeClustering):
comms2 = comms2.communities
table = create_contingency_table(comms1, comms2)
table = table / table.sum()
MI = 0
for k1, comm1 in enumerate(comms1):
for k2, comm2 in enumerate(comms2):
if table[k1, k2] == 0:
continue
pk1 = table.sum(axis=1)[k1]
pk2 = table.sum(axis=0)[k2]
if (pk1 == 0) or (pk2 == 0):
continue
MI += table[k1, k2] * np.log2(table[k1, k2] / (pk1*pk2))
return MI
def run_louvain(g, min_res, max_res, samples=10, logspace=True):
comms_dict = {}
if logspace:
resolutions = np.logspace(np.log10(min_res), np.log10(max_res), samples)
else:
resolutions = np.linspace(min_res, max_res, samples)
for resolution in resolutions:
comms = algorithms.louvain(
g, weight='weight', resolution=resolution, randomize=False
)
comms_dict[resolution] = comms
return comms_dict
def plot_louvain(g, min_res, max_res, samples=10, logspace=True):
    comms_dict = run_louvain(g, min_res, max_res, samples=samples, logspace=logspace)  # use the function arguments instead of hard-coded values
resolutions = comms_dict.keys()
Q_values = [evaluation.newman_girvan_modularity(g, comms).score for comms in comms_dict.values()]
fig, ax = plt.subplots(figsize=(12,5))
ax.set_ylabel('Q')
ax.set_xlabel('Resolution')
ax.set_xscale('log')
ax.plot(resolutions, Q_values, '-', label='Louvain')
ax.legend()
return ax
import networkx as nx
g = nx.karate_club_graph()
print(nx.info(g))
```
NetworkX can store properties, or *features*, of the nodes, the edges, and the graph as a whole. Let's see which properties the nodes have in this case.
```
get_all_node_properties(g)
```
The only property is the club each member joined after the initial split. This label will be our "real" classification, or *ground truth*, which we will try to recover with the different community detection algorithms.
```
membership = np.array([g.nodes[v]['club'] for v in g])
set(membership)
original_comms = [
np.array(g.nodes())[membership=='Mr. Hi'].tolist(),
np.array(g.nodes())[membership=='Officer'].tolist()
]
original_comms
```
We visualize the network
```
plot_graph(g, original_comms);
```
We will use the [cdlib](https://cdlib.readthedocs.io/en/latest/overview.html) library, which specializes in community detection.
Although it does not provide implementations of its own, it gathers the implementations of other libraries (for example igraph, NetworkX, graph-tool) behind a common interface.
```
from cdlib import NodeClustering, evaluation, algorithms
## Build our ground-truth partition
ground_truth = NodeClustering(original_comms, graph=g, method_name='ground')
```
#### First algorithm: Label propagation
We start by applying the label propagation algorithm. Initially, we assume that each node constitutes its own community. We then iteratively update the membership of each node, assigning it the most frequent community among its neighbors; ties are broken at random among the most popular labels (a minimal sketch of one update sweep appears after the figure).
<img src="./figuras/label_propagation.jpg" width="750"/>
[Figure](https://www.researchgate.net/publication/353777352_Weakly-supervised_learning_for_community_detection_based_on_graph_convolution_in_attributed_networks)
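Before calling the library implementation, a minimal sketch of one synchronous update sweep may help fix ideas; it is an illustrative simplification, not cdlib's implementation:
```
import random
from collections import Counter
import networkx as nx

def label_propagation_sweep(graph, labels):
    """One sweep: every node adopts the most frequent label among its neighbors (ties broken at random)."""
    new_labels = dict(labels)
    for v in graph.nodes():
        neighbor_labels = [labels[u] for u in graph.neighbors(v)]
        if not neighbor_labels:
            continue
        counts = Counter(neighbor_labels)
        best = max(counts.values())
        new_labels[v] = random.choice([lab for lab, c in counts.items() if c == best])
    return new_labels

# Example usage: start with one community per node and iterate a few sweeps
g_demo = nx.karate_club_graph()
labels = {v: v for v in g_demo.nodes()}
for _ in range(10):
    labels = label_propagation_sweep(g_demo, labels)
print(len(set(labels.values())), 'communities found (the result varies from run to run)')
```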
```
comms = algorithms.label_propagation(g)
comms.method_name
comms.method_parameters
comms.communities
plt.figure(figsize=(20, 20))
ax1 = plt.subplot(221)
ax2 = plt.subplot(222)
ax1.set_title('Comunidades detectadas')
plot_graph(g, comms.communities, ax=ax1)
ax2.set_title('Comunidades originales')
plot_graph(g, ground_truth.communities, ax=ax2)
plt.show()
```
## Evaluating the detected communities
### External evaluation
We compare the communities found by the algorithm against the original communities, or *ground truth*.
Let's start with a few definitions:
A **partition** $\mathcal{C}$ of a graph $\mathcal{G}(V,E)$ is a division of the vertices of $\mathcal{G}$ into mutually disjoint sets
$$
\mathcal{C} = \left\lbrace C_1, C_2, \cdots C_K \right\rbrace
\quad \text{ such that } \quad
C_k \cap C_l = \emptyset\quad \text{ and }\quad \bigcup\limits_{k=1}^{K} C_k = V
$$
Now suppose we have two different partitions $\mathcal{C}$ and $\mathcal{C'}$. We define the **contingency table** as the matrix
$$
n_{kk'} = |C_k \cap C'_{k'}|.
$$
Note that the two partitions do not necessarily have the same number of communities, so $n_{kk'}$ is, in general, not square.
```
## Construimos y visualizamos la tabla de contingencia
table = create_contingency_table(ground_truth.communities, comms.communities)
sns.heatmap(table, annot=True, cmap='Blues');
```
How do we quantify how close the detected partition is to the original one?
We define the **comparison matrix** $M$ between partitions as the $2\times 2$ matrix whose elements are
$$
\begin{align*}
m_{11} &\equiv \text{# node pairs that are in the same community under } \mathcal{C} \text{ and under } \mathcal{C'} \\
m_{00} &\equiv \text{# node pairs that are in different communities under } \mathcal{C} \text{ and under } \mathcal{C'} \\
m_{10} &\equiv \text{# node pairs that are in the same community under } \mathcal{C} \text{ but not under } \mathcal{C'} \\
m_{01} &\equiv \text{# node pairs that are in the same community under } \mathcal{C'} \text{ but not under } \mathcal{C}
\end{align*}
$$
```
## Example
comms1 = [[0, 1, 2], [3, 4, 5]]
comms2 = [[0, 1, 3], [2, 4, 5]]
N = sum(len(c) for c in comms1)
M = get_comparison_matrix(comms1, comms2)
print('M =\n', M)
```
Now we build the comparison matrix for our case study.
```
print('Detected communities:')
print('-'*30)
pprint([set(comm) for comm in comms.communities])
print('Original communities:')
print('-'*30)
pprint([set(comm) for comm in ground_truth.communities])
get_comparison_matrix(comms, ground_truth)
```
#### **Rand index**
Intuitively, the more diagonal the comparison matrix is, the closer the two partitions are. That motivates the definition of the **Rand index**, as follows:
$$
\mathcal{R}(\mathcal{C}, \mathcal{C'}) = \dfrac{m_{11} + m_{00}}{N(N-1)/2}.
$$
It is easy to see that $0 \leq \mathcal{R}(\mathcal{C}, \mathcal{C'}) \leq 1$, with $\mathcal{R}(\mathcal{C}, \mathcal{C'}) = 1$ being the ideal value.
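As a quick check of the formula, take the toy partitions from the example above: $N = 6$ gives $N(N-1)/2 = 15$ node pairs, and the comparison matrix has $m_{11} = 2$ and $m_{00} = 5$, so $\mathcal{R} = (2 + 5)/15 \approx 0.47$.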
```
rand_index(comms, ground_truth)
```
There is, however, a problem with the Rand index: in general it gives positive values even when communities are assigned at random. That is why it is convenient to normalize it. One way to do so is to define the "adjusted Rand index" as follows:
$$
\mathcal{AR}(\mathcal{C}, \mathcal{C'}) = \dfrac{\mathcal{R}(\mathcal{C}, \mathcal{C'})-\mathcal{E}[\mathcal{R}]}{1-\mathcal{E}[\mathcal{R}]},
$$
where $\mathcal{E}[\mathcal{R}]$ is the expected value of the Rand index under the null hypothesis of independent partitions (see [Meila 2006]()).
```
evaluation.adjusted_rand_index(comms, ground_truth)
```
#### **Jaccard index**
Another commonly used metric is the **Jaccard index**, which is defined as follows:
$$
\mathcal{J}(\mathcal{C}, \mathcal{C'}) = \dfrac{m_{11}}{m_{11} + m_{10} + m_{01}}.
$$
```
jaccard_index(comms, ground_truth)
```
#### **Mutual information**
The mutual information between two variables can be interpreted as the amount of information we gain about one variable by knowing the other. In the context of graph partitions, it can be viewed as follows.
We define
$$
p(k, k') = \dfrac{n_{kk'}}{N} = \dfrac{|C_k \cap C'_{k'}|}{N}.
$$
With this definition, we can think of $p(k, k')$ as the joint probability that a randomly chosen node belongs to community $k$ in partition $\mathcal{C}$ and to community $k'$ in partition $\mathcal{C'}$.
We also define the marginal distributions
$$
\begin{align*}
p(k) &= \sum_{k'=1}^{K'} p(k, k'), \\
p(k') &= \sum_{k=1}^{K} p(k, k').
\end{align*}
$$
The mutual information is then defined as
$$
I(\mathcal{C}, \mathcal{C'}) = \sum_{k=1}^{K} \sum_{k'=1}^{K'}p(k, k') \log \dfrac{p(k, k')}{p(k)p(k')},
$$
with the convention that $0/0 = 0$.
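For the same toy partitions used earlier, the contingency table is $n_{kk'} = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$ with $N = 6$, so every $p(k, k')$ is either $1/3$ or $1/6$ and all marginals equal $1/2$; plugging these in gives $I \approx 0.08$ bits, far from the $1$ bit that two identical two-way splits of this kind would yield.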
```
mutual_information(comms, original_comms)
```
As with the Rand and Jaccard indices, it is convenient to adjust the mutual information to correct for spurious positive values (see [Vinh 2009]()).
```
evaluation.adjusted_mutual_information(comms, ground_truth)
```
### Internal evaluation
Community detection is very often approached as an unsupervised learning problem. In most cases we have no community membership labels, or only very few labeled examples. In those situations, we can only evaluate the quality of a partition using structural information from the graph.
#### **Internal density**
The density of a graph is the ratio between the number of existing edges and the number of possible edges. That is,
$$
\mu = \dfrac{m}{N (N-1)/2}.
$$
The internal density is defined as the average of the community densities, counting only intra-community edges.
```
m = g.number_of_edges()
N = g.number_of_nodes()
print('Graph density:', m/(N*(N-1)/2))
print('Original communities: ', evaluation.internal_edge_density(g, ground_truth))
print('Detected communities:', evaluation.internal_edge_density(g, comms))
```
#### **Modularity**
Modularity $Q$ is the most widely used metric for evaluating a partition. It is computed as the difference between the fraction of internal edges and the expected value of that quantity under a null model known as the **configuration model**, which is a graph with the same degree sequence but where the connections are made completely at random.
$$
Q = \dfrac{1}{2m} \sum_{ij} \left[ A_{ij} - \dfrac{k_i k_j}{2m} \right] \delta_{c_i, c_j}
$$
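As a sanity check of the formula (and not the way cdlib computes it), $Q$ can be evaluated directly from the adjacency matrix; a minimal sketch:
```
import networkx as nx
import numpy as np

def modularity_by_hand(graph, communities):
    """Evaluate Q = (1/2m) * sum_ij [A_ij - k_i k_j / 2m] * delta(c_i, c_j)."""
    nodes = list(graph.nodes())
    index = {v: i for i, v in enumerate(nodes)}
    A = nx.to_numpy_array(graph, nodelist=nodes)
    k = A.sum(axis=1)                 # degrees
    two_m = A.sum()                   # 2m
    c = np.empty(len(nodes), dtype=int)
    for label, comm in enumerate(communities):
        for v in comm:
            c[index[v]] = label
    delta = (c[:, None] == c[None, :])
    return ((A - np.outer(k, k) / two_m) * delta).sum() / two_m

g_demo = nx.karate_club_graph()
comms_demo = [list(range(0, 17)), list(range(17, 34))]   # an arbitrary split, just for illustration
print(modularity_by_hand(g_demo, comms_demo))
# Should agree with nx.algorithms.community.modularity(g_demo, comms_demo)
```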
```
print('Original communities: ', evaluation.newman_girvan_modularity(g, ground_truth))
print('Detected communities:', evaluation.newman_girvan_modularity(g, comms))
```
Modularity is a good metric for evaluating partitions, but not always the best one. In particular, it is known to have a resolution limit. If a graph contains communities of very different sizes, the partitions that maximize modularity are sometimes those in which the smallest communities are merged together. One example is the case in the following figure, discussed in [Fortunato and Barthélemy (2006)](https://www.pnas.org/content/104/1/36).
<img src="./figuras/modularity.png" width="250"/>
#### **Cut ratio**
It is the average fraction of edges that run between pairs of communities.
```
print('Original communities: ', evaluation.cut_ratio(g, ground_truth))
print('Detected communities:', evaluation.cut_ratio(g, comms))
```
## Community detection algorithms
### Louvain algorithm [Blondel, et al. (2008)](https://arxiv.org/abs/0803.0476)
This algorithm searches for a partition that maximizes modularity. Starting from some initial partition (e.g., one where every node is its own community), two steps are performed iteratively:
- Local optimization of the modularity
- Aggregation of communities
<img src="./figuras/louvain.png" width="1000"/>
[Figure](https://arxiv.org/abs/0803.0476)
The Louvain algorithm is a **hierarchical** algorithm. After each iteration we obtain a progressively coarser partition. This can be represented by a dendrogram like the following one (a short sketch of how to inspect the intermediate levels follows the figure):
<img src="./figuras/dendogram2.png" width="500"/>
[Figure](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC122977/)
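If the `python-louvain` package is available (an assumption: it is not imported anywhere else in this notebook, which relies on cdlib), the intermediate levels of the dendrogram can be inspected directly; a minimal sketch:
```
import community as community_louvain   # the python-louvain package
import networkx as nx

g_demo = nx.karate_club_graph()
dendro = community_louvain.generate_dendrogram(g_demo)
for level in range(len(dendro)):
    part = community_louvain.partition_at_level(dendro, level)   # dict node -> community
    q = community_louvain.modularity(part, g_demo)
    print(f'level {level}: {len(set(part.values()))} communities, Q = {q:.3f}')
```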
```
resolutions = [0.1, 1, 10]
comms_dict = {}
for resolution in resolutions:
comms = algorithms.louvain(g, weight='weight', resolution=resolution, randomize=False)
comms_dict[resolution] = comms
plot_graph_by_parameters(comms_dict, parameter_name='Resolution');
```
We observe how the modularity varies as a function of the resolution parameter of the algorithm
```
plot_louvain(g, min_res=0.01, max_res=10, samples=200, logspace=True);
```
We compare with other algorithms:
- Label propagation [Cordasco, et al. (2010)](https://arxiv.org/abs/1103.4550).
- Leiden [Traag, et al. (2019)](https://www.nature.com/articles/s41598-019-41695-z).
```
Q_label = evaluation.newman_girvan_modularity(g, algorithms.label_propagation(g)).score
Q_leiden = evaluation.newman_girvan_modularity(g, algorithms.leiden(g)).score
ax = plot_louvain(g, min_res=0.01, max_res=10, samples=200, logspace=True);
ax.axhline(Q_label, linestyle='--', label='Label Propagation', color='C1')
ax.axhline(Q_leiden, linestyle='--', label='Leiden', color='C2')
ax.legend();
```
### Girvan-Newman algorithm [Girvan and Newman (2002)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC122977/)
The Girvan-Newman algorithm iteratively "dismantles" the graph by **removing** its **edges** one at a time. The idea is to remove first the edges that mediate **between** communities and, in that way, progressively separate the communities from one another. In its original version, edges are removed in decreasing order of **betweenness**.
<img src="./figuras/two_comms.png" width="500"/>
Like the Louvain algorithm, this algorithm is hierarchical. A minimal sketch of the edge-removal loop is shown below, before the cdlib calls.
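The core loop can be sketched with NetworkX's edge betweenness (illustrative only; the cells below use cdlib's `girvan_newman`):
```
import networkx as nx

def girvan_newman_first_split(graph):
    """Remove the highest-betweenness edge until the graph breaks into one more connected component."""
    g_work = graph.copy()
    n_components = nx.number_connected_components(g_work)
    while nx.number_connected_components(g_work) == n_components:
        betweenness = nx.edge_betweenness_centrality(g_work)
        edge = max(betweenness, key=betweenness.get)
        g_work.remove_edge(*edge)
    return [sorted(c) for c in nx.connected_components(g_work)]

g_demo = nx.karate_club_graph()
print(girvan_newman_first_split(g_demo))   # the first split into two communities
```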
```
levels = [1, 4, 8]
comms_dict = {}
for level in levels:
comms = algorithms.girvan_newman(g, level=level)
comms_dict[level] = comms
plot_graph_by_parameters(comms_dict, parameter_name='Level');
level = 1
levels = range(1, 16)
data = []
for level in levels:
comms = algorithms.girvan_newman(g, level=level)
ncomms = len(comms.communities)
Q = evaluation.newman_girvan_modularity(g, comms).score
data.append((level, ncomms, Q))
df = pd.DataFrame(data, columns=['Level', 'ncomms', 'Q'])
df
fig, ax = plt.subplots(figsize=(12,6))
ax.set_ylabel('Q')
ax.set_xlabel('Level')
ax.plot(df.Level, df.Q, '-', label='Girvan-Newman')
ax.legend()
plt.show()
```
|
github_jupyter
|
### Dependencies for the Project
```
import pandas as pd
from sqlalchemy import create_engine, inspect
from db_config import password
import psycopg2
```
### Importing the CSV file for Behavior and Attitudes
```
file = "./Resources/Behavior_and_Attitudes.csv"
Behavior_and_Attitudes= pd.read_csv(file)
Behavior_and_Attitudes.head()
```
### Exploring the dataset
#### Number of unique countries in the survey
```
print(f"No of unique countries in the survey : {len(Behavior_and_Attitudes['economy'].unique())}")
```
#### Understanding the number of economies surveyed each year
We can see that not every country was surveyed in every year. 2001 had the fewest countries (28) in the survey, while 2013 and 2014 had 70 participating countries each. The latest year, 2019, has 50 economies surveyed.
```
Behavior_and_Attitudes["year"].value_counts()
```
#### Null Values
The dataset has null values in certain columns. The columns with null values are listed below, and a quick per-column count of the missing values follows the list:
1. Fear of failure rate *
2. Entrepreneurial intentions
3. Established Business Ownership
4. Entrepreneurial Employee Activity
5. Motivational Index
6. Female/Male Opportunity-Driven TEA
7. High Job Creation Expectation
8. Innovation
9. Business Services Sector
10. High Status to Successful Entrepreneurs
11. Entrepreneurship as a Good Career Choice
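As a quick complement to the non-null counts computed below, the number of missing values per column can also be listed directly; this is just a convenience check on the same DataFrame.
```
# Null values per column, largest first
null_counts = Behavior_and_Attitudes.isna().sum().sort_values(ascending=False)
print(null_counts[null_counts > 0])
```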
```
# identifying missing values
Behavior_and_Attitudes.count()
```
### Fear of failure rate -- dealing with a null value
```
# Fear of failure rate has just one null value. identifying the row or economy with null value
Behavior_and_Attitudes.loc[Behavior_and_Attitudes["Fear of failure rate *"].isna()]
# pulling all the data point related to Venezuela
Behavior_and_Attitudes.loc[Behavior_and_Attitudes["economy"]=="Venezuela"]
```
#### Treating the one null value in Fear of failure rate
Since there are five data points for Venezuela, the single null value can be filled with the mean of the other four fear-of-failure-rate values.
```
#calculating mean failure rate
mean_ffrate=Behavior_and_Attitudes.loc[(Behavior_and_Attitudes["economy"]=="Venezuela") & (Behavior_and_Attitudes["year"]!=2007),:]["Fear of failure rate *"].mean()
print(f"The data is updated with the mean value {mean_ffrate}")
# adding it to the df
Behavior_and_Attitudes["Fear of failure rate *"]=Behavior_and_Attitudes["Fear of failure rate *"].fillna(mean_ffrate)
#Displaying the DF with the changes made
Behavior_and_Attitudes.loc[Behavior_and_Attitudes["economy"]=="Venezuela"]
```
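A note on this design choice: `fillna(mean_ffrate)` fills every remaining NaN in that column, not only Venezuela's 2007 entry. That is harmless here because this is the only missing value, but an equivalent, more targeted alternative (shown only as a sketch) would restrict the assignment to the affected row:
```
# Alternative sketch: fill only the Venezuela 2007 row
mask = (Behavior_and_Attitudes["economy"] == "Venezuela") & (Behavior_and_Attitudes["year"] == 2007)
Behavior_and_Attitudes.loc[mask, "Fear of failure rate *"] = mean_ffrate
```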
### Entrepreneurial intentions -- dealing with null values
The 2001 survey may not have included questions on entrepreneurial intentions, and hence this data point is null for all 28 economies surveyed that year.
```
#identifying the null values
Behavior_and_Attitudes.loc[Behavior_and_Attitudes["Entrepreneurial intentions"].isna()]
```
### Established Business Ownership -- null values
The single missing value is replaced with the closest available data point.
```
Behavior_and_Attitudes.loc[Behavior_and_Attitudes['Established Business Ownership'].isna()]
Behavior_and_Attitudes.loc[Behavior_and_Attitudes['economy']=='Israel']
#Replacing with the closest value.
Behavior_and_Attitudes["Established Business Ownership"]=Behavior_and_Attitudes["Established Business Ownership"].fillna(5.66)
Behavior_and_Attitudes.loc[Behavior_and_Attitudes['economy']=='Israel']
```
### Entrepreneurial Employee Activity, Motivational Index, Female/Male Opportunity-Driven TEA, Innovation, High Status to Successful Entrepreneurs, Entrepreneurship as a Good Career Choice, Business Services Sector, High Job Creation Expectation -- missing values
These columns have more than 100 missing values each and will only be used for plotting purposes.
```
print(f"Missing values in Entrepreneurial Employee Activity is :{len(Behavior_and_Attitudes.loc[Behavior_and_Attitudes['Entrepreneurial Employee Activity'].isna()])}")
print(f"Missing values in Motivational Index is :{len(Behavior_and_Attitudes.loc[Behavior_and_Attitudes['Motivational Index'].isna()])}")
print(f"Missing values in Female/Male Opportunity-Driven TEA is :{len(Behavior_and_Attitudes.loc[Behavior_and_Attitudes['Female/Male Opportunity-Driven TEA'].isna()])}")
print(f"Missing values in Innovation is :{len(Behavior_and_Attitudes.loc[Behavior_and_Attitudes['Innovation'].isna()])}")
print(f"Missing values in High Status to Successful Entrepreneurs is :{len(Behavior_and_Attitudes.loc[Behavior_and_Attitudes['High Status to Successful Entrepreneurs'].isna()])}")
print(f"Missing values in Entrepreneurship as a Good Career Choice is :{len(Behavior_and_Attitudes.loc[Behavior_and_Attitudes['Entrepreneurship as a Good Career Choice'].isna()])}")
print(f"Missing values in Business Services Sector is :{len(Behavior_and_Attitudes.loc[Behavior_and_Attitudes['Business Services Sector'].isna()])}")
print(f"Missing values in High Job Creation Expectation is :{len(Behavior_and_Attitudes.loc[Behavior_and_Attitudes['High Job Creation Expectation'].isna()])}")
Behavior_and_Attitudes.columns = Behavior_and_Attitudes.columns.str.replace(' ','_')
Behavior_and_Attitudes.head()
Behavior_and_Attitudes=Behavior_and_Attitudes.rename(columns={"economy":"country"})
Behavior_and_Attitudes.head()
Behavior_and_Attitudes=Behavior_and_Attitudes.rename(columns={"Fear_of_failure_rate_*":"Fear_of_failure_rate"})
Behavior_and_Attitudes.head()
Behavior_and_Attitudes=Behavior_and_Attitudes.rename(columns={"Total_early-stage_Entrepreneurial_Activity_(TEA)":"Total_early_stage_Entrepreneurial_Activity"})
Behavior_and_Attitudes=Behavior_and_Attitudes.rename(columns={"Female/Male_TEA":"Female_Male_TEA"})
Behavior_and_Attitudes=Behavior_and_Attitudes.rename(columns={"Female/Male_Opportunity-Driven_TEA":"Female_Male_Opportunity_Driven_TEA"})
Behavior_and_Attitudes.head()
conn = psycopg2.connect(
database="postgres", user="postgres", password=f"{password}", host='127.0.0.1', port= '5432'
)
conn.autocommit = True
cursor = conn.cursor()
cursor.execute("SELECT datname FROM pg_database;")
list_database = cursor.fetchall()
dbname = "gem_db"
# try:
if (dbname,) not in list_database:
cur = conn.cursor()
cur.execute('CREATE DATABASE ' + dbname)
cur.close()
conn.close()
print("Creating Database...")
engine = create_engine(f"postgresql://postgres:{password}@localhost:5432/{dbname}")
connection = engine.connect()
print('-'*30)
print("Creating Tables, Please wait...")
print('-'*30)
Behavior_and_Attitudes.to_sql("behavior_and_attitudes",engine)
print("Table Behavior_and_Attitudes created successfully")
connection.close()
print('-'*30)
print("Database is ready to use.")
else:
print("Database is already exists.")
# except:
# print("Something went wrong.")
data_2019=Behavior_and_Attitudes.loc[Behavior_and_Attitudes["year"]==2019]
data_2019.count()
data_2019=data_2019.dropna( axis=1,how='any')
data_2019.count()
data_2019
data_2019.to_csv('behavior_and_attitudes_2019.csv',index=False)
```
## Example of 3D data generation with a constant density model
In this example, we will show how to use PySIT to generate data for a 3D model with a constant density. The corresponding .py file can be found in ``/Demo/GenerateData3DConstantDensity.py``
```
%matplotlib inline
```
Import necessary modules:
```
import time
import copy
import numpy as np
import matplotlib.pyplot as plt
import math
import os
from shutil import copy2
from mpl_toolkits.axes_grid1 import make_axes_locatable
import sys
import scipy.io as sio
from pysit import *
from pysit.gallery import horizontal_reflector
from pysit.util.io import *
from pysit.vis.vis import *
from pysit.util.parallel import *
```
### Define the physical domain, computational mesh and velocity models ###
1. Define perfectly matched layer (PML) boundaries in the x, y and z directions with a width of ``0.1 km`` and a PML coefficient of ``100`` by
``pmlx = PML(0.1, 100)``
``pmly = PML(0.1, 100)``
``pmlz = PML(0.1, 100)``
For more information about the PML object, we refer users to check ``/pysit/core/domain.py``.
2. Define a 3D rectangular domain with the ranges of ``(0.1, 1.0) km``, ``(0.1, 0.9) km``, ``(0.1, 0.8) km`` in x, y, and z directions.
``x_config = (0.1, 1.0, pmlx, pmlx)`` (The physical domain of x direction starts at 0.1 km and ends at 1.0 km.)
``y_config = (0.1, 0.9, pmly, pmly)`` (The physical domain of y direction starts at 0.1 km and ends at 0.9 km.)
``z_config = (0.1, 0.8, pmlz, pmlz)`` (The physical domain of z direction starts at 0.1 km and ends at 0.8 km.)
``d = RectangularDomain(x_config, y_config, z_config)``
For more information about the RectangularDomain, we refer users to check ``/pysit/core/domain.py``.
3. Define the computational Cartesian mesh with ``46`` grid points in the ``x`` direction, ``41`` grid points in the ``y`` direction, and ``36`` grid points in the ``z`` direction by
``m = CartesianMesh(d, 46, 41, 36)`` (The computational mesh ``m`` is defined on the physical domain ``d``.)
For more information about the CartesianMesh object, we refer users to check ``/pysit/core/mesh.py``.
4. Generate the true velocity model and initial model for a Horizontal reflector model by
``C, C0, m, d = horizontal_reflector(m)``
The output ``C`` is the true velocity model and ``C0`` is the initial model.
For more information about the horizontal_reflector object, we refer users to check ``/pysit/gallery/horizontal_reflector.py``.
```
pmlx = PML(0.1, 100)
pmly = PML(0.1, 100)
pmlz = PML(0.1, 100)
x_config = (0.1, 1.0, pmlx, pmlx)
y_config = (0.1, 0.9, pmly, pmly)
z_config = (0.1, 0.8, pmlz, pmlz)
d = RectangularDomain(x_config, y_config, z_config)
m = CartesianMesh(d, 46, 41, 36)
C, C0, m, d = horizontal_reflector(m)
n_data = (46, 41, 36)
n_dataplt = (n_data[0], n_data[2], n_data[1])
Cplot = np.reshape(C, n_data)
Cplot = np.transpose(Cplot, (0, 2, 1))
origins = [0.1, 0.1, 0.1]
deltas = [0.02, 0.02, 0.02]
axis_ticks = [np.array(list(range(0, n_dataplt[0]-5, (n_data[0]-6)//4))),
np.array(list(range(5, n_dataplt[1]-5, (n_data[1]-11)//4))),
np.array(list(range(0, n_dataplt[2], (n_data[2]-1)//2)))
]
axis_tickslabels = [(axis_ticks[0] * deltas[0] * 1000.0 + origins[0] * 1000.0).astype(int),
(axis_ticks[1] * deltas[1] * 1000.0 + origins[1] * 1000.0).astype(int),
(axis_ticks[2] * deltas[2] * 1000.0 + origins[2] * 1000.0).astype(int)
]
plot_3D_panel(Cplot, slice3d=(22, 18, 20),
axis_label=['x [m]', 'z [m]', 'y [m]'],
axis_ticks=axis_ticks,
axis_tickslabels=axis_tickslabels,
)
plt.title('Slice at \n x = 540 m, y = 500 m, z = 440 m')
```
### Set up shots
1. Set up the shots object by:
``shots = equispaced_acquisition(m, RickerWavelet(10.0), sources=Nshots, source_depth=zpos,source_kwargs={},receivers='max',receiver_depth=zpos,receiver_kwargs={})``
``equispaced_acquisition`` - creates a shots object with an equally spaced acquisition geometry
``m`` - computational mesh
``RickerWavelet(10.0)`` - a Ricker wavelet centered at ``10 Hz``
``sources`` - number of sources; for 3D modeling, ``sources`` has two elements that indicate the number of sources in the x and y directions. For example, ``sources = (2,3)`` means that there are 2 shots in the x direction and 3 shots in the y direction.
``source_depth`` - the depth of sources
``receivers`` - number of receivers; if set to ``max``, the number of receivers equals the number of grid points in the x direction.
``receiver_depth`` - the depth of receivers
For more information about the ``equispaced_acquisition`` object, we refer the users to check ``/pysit/core/acquisition.py``.
2. Set up the range of recording time by:
``trange = (0.0,3.0)``.
```
# Set up shots
zmin = d.z.lbound
zmax = d.z.rbound
zpos = zmin + (1./9.)*zmax
Nshots = 1,1
shots = equispaced_acquisition(m,
RickerWavelet(10.0),
sources=Nshots,
source_depth=zpos,
source_kwargs={},
receivers='max',
receiver_depth=zpos,
receiver_kwargs={}
)
shots_freq = copy.deepcopy(shots)
# Define and configure the wave solver
trange = (0.0,3.0)
```
### Define the wave-equation solver and the computational model object, and generate time-domain data
1. In this example, we use the time-domain constant-density acoustic wave equation as our target wave equation. We set up our wave-equation solver by:
``solver = ConstantDensityAcousticWave(m, spatial_accuracy_order=2, trange=trange, kernel_implementation='cpp')``
``m`` - the computational mesh
``spatial_accuracy_order`` - the spatial accuracy order of the numerical solver. Users can select one of the four values ``2, 4, 6, 8``.
``trange`` - the range of the recording time
``kernel_implementation`` - the implementation of the stencil kernel. When set to ``'cpp'``, the stencil implemented in C++ is used.
For more information about the ``ConstantDensityAcousticWave`` object, we refer the users to check ``/pysit/solvers/wave_factory.py``
2. Create the velocity model object for the wave-equation solver by:
``base_model = solver.ModelParameters(m,{'C': C})``
The model object ``base_model`` contains the information of the computational mesh and the velocity model ``C``.
3. Generate the time-domain data by:
``generate_seismic_data(shots, solver, base_model)``
The generated data are stored in the object ``shots``. In order to check the data of the $i^{\text{th}}$ shot, you may need to use the command:
``data = shots[i].receivers.data``
For more information about the ``generate_seismic_data`` function, we refer the users to check ``/pysit/modeling/data_modeling.py``.
```
solver = ConstantDensityAcousticWave(m,
spatial_accuracy_order=2,
trange=trange,
kernel_implementation='cpp')
base_model = solver.ModelParameters(m,{'C': C})
generate_seismic_data(shots, solver, base_model)
data = shots[0].receivers.data
t_smp = np.linspace(trange[0], trange[1], data.shape[0])
fig=plt.figure()
n_recdata = [len(t_smp), n_data[0], n_data[1]]
n_recdataplt = [n_data[0], len(t_smp), n_data[1]]
data = np.reshape(data, n_recdata)
dataplt = np.transpose(data, (1, 0, 2))
deltas_data = [deltas[0], solver.dt, deltas[2]]
origins_data = [origins[0], 0.0,origins[2]]
axis_ticks = [np.array(list(range(0, n_recdataplt[0]-5, (n_recdataplt[0]-1)//4))),
np.array(list(range(0, n_recdataplt[1]-5, (n_recdataplt[1]-1)//4))),
np.array(list(range(0, n_recdataplt[2], (n_recdataplt[2]-1)//2)))
]
axis_tickslabels = [np.round(axis_ticks[0] * deltas_data[0] + origins_data[0], 2),
np.round(axis_ticks[1] * deltas_data[1] + origins_data[1], 2),
np.round(axis_ticks[2] * deltas_data[2] + origins_data[2], 2)
]
plot_3D_panel(dataplt, slice3d=(22, 900, 20),
axis_label=[ 'x [km]', 'Time [s]', 'y [km]'],
axis_ticks=axis_ticks,
axis_tickslabels=axis_tickslabels,
width_ratios=[1,1], height_ratios=[1,1],cmap='seismic', vmin=-0.2,vmax=0.2
)
```
### Generate frequency-domain data
We have shown how to generate time-domain data. Now let us show how to generate frequency-domain data. We only need to change the solver.
In this example, we use the Helmholtz equation with constant density as our target wave equation. In order to generate frequency-domain data, you need to pass values to the parameter ``frequencies`` when calling ``generate_seismic_data``. Unlike the time-domain solver, the frequency-domain data of the $i^{\text{th}}$ shot at frequency ``f`` is stored in ``shots_freq[i].receivers.data_dft[f]``.
For 3D frequency-domain data generation, when setting up the PML objects, we need to set an additional parameter ``compact``, which indicates that we use the compact form of the Helmholtz equation and do not want the auxiliary wavefields. As a result, we use the following commands:
``pmlx = PML(0.1, 100, compact=True)``
``pmly = PML(0.1, 100, compact=True)``
``pmlz = PML(0.1, 100, compact=True)``
```
pmlx = PML(0.1, 100, compact=True)
pmly = PML(0.1, 100, compact=True)
pmlz = PML(0.1, 100, compact=True)
x_config = (0.1, 1.0, pmlx, pmlx)
y_config = (0.1, 0.9, pmly, pmly)
z_config = (0.1, 0.8, pmlz, pmlz)
d = RectangularDomain(x_config, y_config, z_config)
m = CartesianMesh(d, 46, 41, 36)
C, C0, m, d = horizontal_reflector(m)
solver = ConstantDensityHelmholtz(m,
spatial_accuracy_order=4)
frequencies = [2.0,3.0]
generate_seismic_data(shots_freq, solver, base_model, frequencies=frequencies)
xrec = np.linspace(0.1,1.0,46)
yrec = np.linspace(0.1,0.9,41)
data1 = shots_freq[0].receivers.data_dft[2.0]
data2 = shots_freq[0].receivers.data_dft[3.0]
data1 = np.reshape(data1, (len(xrec),len(yrec)))
data2 = np.reshape(data2, (len(xrec),len(yrec)))
plt.figure(figsize=(12,12))
plt.subplot(2,2,1)
vmax = np.abs(np.real(data1)).max()
clim=np.array([-vmax, vmax])
plt.imshow(np.real(data1).transpose(),cmap='seismic',clim=clim,
extent=[xrec[0], xrec[-1], yrec[-1], yrec[0]])
plt.xlabel('X [km]')
plt.ylabel('Y [km]')
plt.title('Real part of data at 2 Hz')
plt.colorbar()
plt.subplot(2,2,2)
vmax = np.abs(np.imag(data1)).max()
clim=np.array([-vmax, vmax])
plt.imshow(np.imag(data1).transpose(),cmap='seismic',clim=clim,
extent=[xrec[0], xrec[-1], yrec[-1], yrec[0]])
plt.xlabel('X [km]')
plt.ylabel('Y [km]')
plt.title('Imaginary part of data at 2 Hz')
plt.colorbar()
plt.subplot(2,2,3)
vmax = np.abs(np.real(data2)).max()
clim=np.array([-vmax, vmax])
plt.imshow(np.real(data2).transpose(),cmap='seismic',clim=clim,
extent=[xrec[0], xrec[-1], yrec[-1], yrec[0]])
plt.xlabel('X [km]')
plt.ylabel('Y [km]')
plt.title('Real part of data at 3 Hz')
plt.colorbar()
plt.subplot(2,2,4)
vmax = np.abs(np.imag(data2)).max()
clim=np.array([-vmax, vmax])
plt.imshow(np.imag(data2).transpose(),cmap='seismic',clim=clim,
extent=[xrec[0], xrec[-1], yrec[-1], yrec[0]])
plt.xlabel('X [km]')
plt.ylabel('Y [km]')
plt.title('Imaginary part of data at 3 Hz')
plt.colorbar()
vmax
```
# Getting started with TensorFlow
**Learning Objectives**
1. Practice defining and performing basic operations on constant Tensors
1. Use Tensorflow's automatic differentiation capability
1. Learn how to train a linear regression from scratch with TensorFlow
In this notebook, we will start by reviewing the main operations on Tensors in TensorFlow and understand how to manipulate TensorFlow Variables. We explain how these are compatible with Python built-in lists and NumPy arrays.
Then we will jump to the problem of training a linear regression from scratch with gradient descent. The first order of business will be to understand how to compute the gradients of a function (the loss here) with respect to some of its arguments (the model weights here). The TensorFlow construct allowing us to do that is `tf.GradientTape`, which we will describe.
Finally, we will create a simple training loop to learn the weights of a 1-dim linear regression using synthetic data generated from a linear model.
As a bonus exercise, we will do the same for data generated from a non-linear model, forcing us to manually engineer non-linear features to improve our linear model's performance.
```
import numpy as np
from matplotlib import pyplot as plt
import tensorflow as tf
print(tf.__version__)
```
## Operations on Tensors
### Variables and Constants
Tensors in TensorFlow are either constants (`tf.constant`) or variables (`tf.Variable`).
Constant values can not be changed, while variable values can be.
The main difference is that instances of `tf.Variable` have methods allowing us to change
their values, while tensors constructed with `tf.constant` don't have these methods, and
therefore their values can not be changed. When you want to change the value of a `tf.Variable`
`x`, use one of the following methods:
* `x.assign(new_value)`
* `x.assign_add(value_to_be_added)`
* `x.assign_sub(value_to_be_subtracted)`
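For contrast, a constant tensor has no assignment methods at all; the sketch below (illustrative only) shows the error raised when one tries:
```python
c = tf.constant([2, 3, 4])
try:
    c.assign([5, 6, 7])  # constants do not support assignment
except AttributeError as err:
    print("Constants cannot be reassigned:", err)
```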
```
x = tf.constant([2, 3, 4])
x
x = tf.Variable(2.0, dtype=tf.float32, name="my_variable")
x.assign(45.8) # TODO 1
x
x.assign_add(4) # TODO 2
x
x.assign_sub(3) # TODO 3
x
```
### Point-wise operations
TensorFlow offers point-wise tensor operations similar to NumPy's:
* `tf.add` allows us to add the components of a tensor
* `tf.multiply` allows us to multiply the components of a tensor
* `tf.subtract` allows us to subtract the components of a tensor
* `tf.math.*` contains the usual math operations to be applied on the components of a tensor
* and many more...
Most of the standard arithmetic operations (`tf.add`, `tf.subtract`, etc.) are overloaded by the usual corresponding arithmetic symbols (`+`, `-`, etc.)
```
a = tf.constant([5, 3, 8]) # TODO 1
b = tf.constant([3, -1, 2])
c = tf.add(a, b)
d = a + b
print("c:", c)
print("d:", d)
a = tf.constant([5, 3, 8]) # TODO 2
b = tf.constant([3, -1, 2])
c = tf.multiply(a, b)
d = a * b
print("c:", c)
print("d:", d)
# tf.math.exp expects floats so we need to explicitly give the type
a = tf.constant([5, 3, 8], dtype=tf.float32)
b = tf.math.exp(a)
print("b:", b)
```
### NumPy Interoperability
In addition to native TF tensors, TensorFlow operations can take native Python types and NumPy arrays as operands.
```
# native python list
a_py = [1, 2]
b_py = [3, 4]
tf.add(a_py, b_py) # TODO 1
# numpy arrays
a_np = np.array([1, 2])
b_np = np.array([3, 4])
tf.add(a_np, b_np) # TODO 2
# native TF tensor
a_tf = tf.constant([1, 2])
b_tf = tf.constant([3, 4])
tf.add(a_tf, b_tf) # TODO 3
```
You can convert a native TF tensor to a NumPy array using `.numpy()`:
```
a_tf.numpy()
```
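Going in the other direction, a NumPy array can be wrapped explicitly as a TF tensor with `tf.convert_to_tensor`; a small sketch:
```python
a_np = np.array([1, 2])
a_converted = tf.convert_to_tensor(a_np)
print(a_converted)
```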
## Linear Regression
Now let's use low-level TensorFlow operations to implement linear regression.
Later in the course you'll see abstracted ways to do this using high-level TensorFlow.
### Toy Dataset
We'll model the following function:
\begin{equation}
y= 2x + 10
\end{equation}
```
X = tf.constant(range(10), dtype=tf.float32)
Y = 2 * X + 10
print(f"X:{X}")
print(f"Y:{Y}")
```
Let's also create a test dataset to evaluate our models:
```
X_test = tf.constant(range(10, 20), dtype=tf.float32)
Y_test = 2 * X_test + 10
print(f"X_test:{X_test}")
print(f"Y_test:{Y_test}")
```
#### Loss Function
The simplest model we can build is a model that for each value of x returns the sample mean of the training set:
```
y_mean = Y.numpy().mean()
def predict_mean(X):
y_hat = [y_mean] * len(X)
return y_hat
Y_hat = predict_mean(X_test)
```
Using mean squared error, our loss is:
\begin{equation}
MSE = \frac{1}{m}\sum_{i=1}^{m}(\hat{Y}_i-Y_i)^2
\end{equation}
For this simple model the loss is then:
```
errors = (Y_hat - Y) ** 2
loss = tf.reduce_mean(errors)
loss.numpy()
```
This value for the MSE loss above will give us a baseline to compare how a more complex model is doing.
Now, if $\hat{Y}$ represents the vector containing our model's predictions when we use a linear regression model
\begin{equation}
\hat{Y} = w_0X + w_1
\end{equation}
we can write a loss function taking as arguments the coefficients of the model:
```
def loss_mse(X, Y, w0, w1):
Y_hat = w0 * X + w1
errors = (Y_hat - Y) ** 2
return tf.reduce_mean(errors)
```
### Gradient Function
To use gradient descent we need to take the partial derivatives of the loss function with respect to each of the weights. We could manually compute the derivatives, but with TensorFlow's automatic differentiation capabilities we don't have to!
During gradient descent we think of the loss as a function of the parameters $w_0$ and $w_1$. Thus, we want to compute the partial derivative with respect to these variables.
For that we need to wrap our loss computation within the context of a `tf.GradientTape` instance, which will record gradient information:
```python
with tf.GradientTape() as tape:
loss = # computation
```
This will allow us to later compute the gradients of any tensor computed within the `tf.GradientTape` context with respect to instances of `tf.Variable`:
```python
gradients = tape.gradient(loss, [w0, w1])
```
We illustrate this procedure by computing the loss gradients with respect to the model weights:
```
# TODO 1
def compute_gradients(X, Y, w0, w1):
with tf.GradientTape() as tape:
loss = loss_mse(X, Y, w0, w1)
return tape.gradient(loss, [w0, w1])
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
dw0, dw1 = compute_gradients(X, Y, w0, w1)
print("dw0:", dw0.numpy())
print("dw1", dw1.numpy())
```
### Training Loop
Here we have a very simple training loop that converges. Note we are ignoring best practices like batching, creating a separate test set, and random weight initialization for the sake of simplicity.
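For reference, mini-batching (one of the omitted best practices) could be layered on top of such a loop with `tf.data`; this is only a sketch and is not used in the loop below:
```python
# Build a shuffled, batched dataset from the toy tensors defined earlier
dataset = tf.data.Dataset.from_tensor_slices((X, Y)).shuffle(buffer_size=10).batch(4)
for x_batch, y_batch in dataset:
    print(x_batch.shape, y_batch.shape)  # batches of 4 (the last batch has 2)
```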
```
STEPS = 1000
LEARNING_RATE = 0.02
MSG = "STEP {step} - loss: {loss}, w0: {w0}, w1: {w1}\n"
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
for step in range(0, STEPS + 1):
dw0, dw1 = compute_gradients(X, Y, w0, w1)
w0.assign_sub(dw0 * LEARNING_RATE)
w1.assign_sub(dw1 * LEARNING_RATE)
if step % 100 == 0:
loss = loss_mse(X, Y, w0, w1)
print(MSG.format(step=step, loss=loss, w0=w0.numpy(), w1=w1.numpy()))
```
Now let's compare the test loss for this linear regression to the test loss from the baseline model that always outputs the mean of the training set:
```
loss = loss_mse(X_test, Y_test, w0, w1)
loss.numpy()
```
This is indeed much better!
## Bonus
Try modelling a non-linear function such as: $y=xe^{-x^2}$
```
X = tf.constant(np.linspace(0, 2, 1000), dtype=tf.float32)
Y = X * tf.exp(-(X ** 2))
%matplotlib inline
plt.plot(X, Y)
def make_features(X):
f1 = tf.ones_like(X) # Bias.
f2 = X
f3 = tf.square(X)
f4 = tf.sqrt(X)
f5 = tf.exp(X)
return tf.stack([f1, f2, f3, f4, f5], axis=1)
def predict(X, W):
return tf.squeeze(X @ W, -1)
def loss_mse(X, Y, W):
Y_hat = predict(X, W)
errors = (Y_hat - Y) ** 2
return tf.reduce_mean(errors)
def compute_gradients(X, Y, W):
with tf.GradientTape() as tape:
        loss = loss_mse(X, Y, W)  # use the feature matrix passed in
return tape.gradient(loss, W)
STEPS = 2000
LEARNING_RATE = 0.02
Xf = make_features(X)
n_weights = Xf.shape[1]
W = tf.Variable(np.zeros((n_weights, 1)), dtype=tf.float32)
# For plotting
steps, losses = [], []
plt.figure()
for step in range(1, STEPS + 1):
    dW = compute_gradients(Xf, Y, W)
W.assign_sub(dW * LEARNING_RATE)
if step % 100 == 0:
loss = loss_mse(Xf, Y, W)
steps.append(step)
losses.append(loss)
plt.clf()
plt.plot(steps, losses)
print("STEP: {} MSE: {}".format(STEPS, loss_mse(Xf, Y, W)))
plt.figure()
plt.plot(X, Y, label="actual")
plt.plot(X, predict(Xf, W), label="predicted")
plt.legend()
```
Copyright 2021 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
```
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib notebook
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, LSTM, SimpleRNN, Dropout, Flatten, Bidirectional,Activation
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_absolute_error
from IPython.display import Image
```
## Import data
```
df = pd.read_csv("history_export_2018-12-04T18_30_11.csv",skiprows=12, header=None,
delimiter=';', names=['year',
'month',
'day',
'hour',
'minute',
'temp',
'shortwave_rad'])
df['Datetime']=pd.to_datetime(df[['year','month','day','hour']])
df.head()
df.drop(columns=['year', 'month', 'day', 'hour', 'minute','shortwave_rad'], axis=1, inplace=True)
df.set_index('Datetime', inplace=True)
df.plot()
df.info()
df.reset_index(drop=True, inplace=True)
df.shape[0]
df.isnull().sum()
```
# Logic of "Sliding Windows"
### Image link: https://i.stack.imgur.com/rfSAF.png
```
Image("pic_rnn/rfSAF.png")
```
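To make the picture above concrete, here is a tiny self-contained sketch (toy numbers, not the weather data) of how a sliding window of size 3 turns a series into input windows and targets:
```
import numpy as np

series = np.array([10., 11., 12., 13., 14., 15.])  # toy series
window = 3
X_demo, y_demo = [], []
for i in range(window, len(series)):
    X_demo.append(series[i - window:i])  # previous `window` values
    y_demo.append(series[i])             # the next value to predict
print(np.array(X_demo))  # [[10. 11. 12.] [11. 12. 13.] [12. 13. 14.]]
print(np.array(y_demo))  # [13. 14. 15.]
```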
# Define data for prediction and for training
## For training use all the data except the last 24 hours.
## For eval use the last 24 hours.
```
train=df.iloc[:len(df)-24]
test=df.iloc[len(train):]
train=np.array(train)
train=train.reshape(train.shape[0],1)
x_test=df[len(df)-len(test):]
x_test=x_test.values.reshape(-1,1)
```
# Data normalization using "MinMaxScaler"
```
scaler=MinMaxScaler(feature_range=(0,1))
scaler.fit(train)
train_scaled=scaler.transform(train)
x_test_scaled=scaler.transform(x_test)
```
# Sliding window with step 5
```
timestep = 5
x_test = []
y_test = []
for i in range(timestep,x_test_scaled.shape[0]):
x_test.append(x_test_scaled[i-timestep:i,0])
y_test.append(x_test_scaled[i,0])
x_test,y_test=np.array(x_test),np.array(y_test)
x_test=x_test.reshape(x_test.shape[0],x_test.shape[1],1)
print("x_test shape= ",x_test.shape)
print("y_test shape= ",y_test.shape)
y_test=np.array(y_test)
y_test=y_test.reshape(len(y_test),1)
y_test=scaler.inverse_transform(y_test)
timestep = 5
x_train = []
y_train = []
for i in range(timestep,train_scaled.shape[0]):
x_train.append(train_scaled[i-timestep:i,0])
y_train.append(train_scaled[i,0])
x_train,y_train=np.array(x_train),np.array(y_train)
x_train=x_train.reshape(x_train.shape[0],x_train.shape[1],1)
print("x_train shape= ",x_train.shape)
print("y_train shape= ",y_train.shape)
```
## The first sequence contains the first 5 values; the 6th value goes to y_train, and so on
```
x_train
y_train
```
## All the transformed values
```
df_scaled = scaler.transform(df)
df_scaled
```
## The RNN models take as input tensors of shape (sequence length, features)
## Below, 3 different models are used for this task
# First model using SimpleRNN cells
```
model_srnn=Sequential()
model_srnn.add(SimpleRNN(128,activation="relu",return_sequences=True,input_shape=(x_train.shape[1],1)))
model_srnn.add(Dropout(0.25))
model_srnn.add(SimpleRNN(256,activation="relu",return_sequences=True))
model_srnn.add(Dropout(0.25))
model_srnn.add(Flatten())
model_srnn.add(Dense(1, activation='linear'))
model_srnn.compile(loss="mean_squared_error",optimizer="adam")
model_srnn.fit(x_train,y_train,epochs=10,batch_size=36)
predict_srnn=model_srnn.predict(x_test)
predict_srnn=scaler.inverse_transform(predict_srnn)
plt.figure(figsize=(8,4), dpi=80, facecolor='w', edgecolor='k')
plt.plot(y_test,color="r",label="True temperature")
plt.plot(predict_srnn,color="b",label="Model Prediction")
plt.legend()
plt.xlabel("HOURS")
plt.ylabel("TEMPERATURE")
plt.title("SimpleRNN Model")
plt.grid(True)
mae_srnn = mean_absolute_error(y_true=y_test, y_pred=predict_srnn)
mae_srnn
```
# Second model using LSTM cells
```
model_lstm=Sequential()
model_lstm.add(LSTM(128,input_shape=(x_train.shape[1],1),activation="relu",return_sequences=True))
model_lstm.add(Dropout(0.25))
model_lstm.add(LSTM(256,activation="relu",return_sequences=False))
model_lstm.add(Dropout(0.25))
model_lstm.add(Dense(1, activation='linear'))
model_lstm.compile(loss="mean_squared_error",optimizer="adam")
model_lstm.fit(x_train,y_train,epochs=10,batch_size=32)
predict_lstm=model_lstm.predict(x_test)
predict_lstm=scaler.inverse_transform(predict_lstm)
mae_lstm = mean_absolute_error(y_true=y_test, y_pred=predict_lstm)
mae_lstm
plt.figure(figsize=(8,4), dpi=80, facecolor='w', edgecolor='k')
plt.plot(y_test,color="r",label="True temperature")
plt.plot(predict_lstm,color="b",label="Model prediction")
plt.legend()
plt.xlabel("HOURS")
plt.ylabel("TEMPERATURE")
plt.title("LSTM Model")
plt.grid(True)
```
# Third model using Bidirectional with SimpleRNN
```
model_bi = Sequential()
model_bi.add(Bidirectional(SimpleRNN(128, return_sequences=True), input_shape=(x_train.shape[1],1)))
model_bi.add(Dropout(0.25))
model_bi.add(Activation('relu'))
model_bi.add(Bidirectional(SimpleRNN(256, return_sequences=False)))
model_bi.add(Dropout(0.25))
model_bi.add(Activation('relu'))
model_bi.add(Dense(1, activation='linear'))
model_bi.compile(loss='mean_squared_error', optimizer='adam')
model_bi.fit(x_train,y_train,epochs=10,batch_size=32)
predict_bi=model_bi.predict(x_test)
predict_bi=scaler.inverse_transform(predict_bi)
plt.figure(figsize=(8,4), dpi=80, facecolor='w', edgecolor='k')
plt.plot(y_test,color="r",label="True temperature")
plt.plot(predict_bi,color="b",label="Model Prediction")
plt.legend()
plt.xlabel("HOURS")
plt.ylabel("TEMPERATURE")
plt.title("Bidirectional Model with SimpleRNN")
plt.grid(True)
mae_bi = mean_absolute_error(y_true=y_test, y_pred=predict_bi)
mae_bi
```
|
github_jupyter
|
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib notebook
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, LSTM, SimpleRNN, Dropout, Flatten, Bidirectional,Activation
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_absolute_error
from IPython.display import Image
df = pd.read_csv("history_export_2018-12-04T18_30_11.csv",skiprows=12, header=None,
delimiter=';', names=['year',
'month',
'day',
'hour',
'minute',
'temp',
'shortwave_rad'])
df['Datetime']=pd.to_datetime(df[['year','month','day','hour']])
df.head()
df.drop(columns=['year', 'month', 'day', 'hour', 'minute','shortwave_rad'], axis=1, inplace=True)
df.set_index('Datetime', inplace=True)
df.plot()
df.info()
df.reset_index(drop=True, inplace=True)
df.shape[0]
df.isnull().sum()
Image("pic_rnn/rfSAF.png")
train=df.iloc[:len(df)-24]
test=df.iloc[len(train):]
train=np.array(train)
train=train.reshape(train.shape[0],1)
x_test=df[len(df)-len(test):]
x_test=x_test.values.reshape(-1,1)
scaler=MinMaxScaler(feature_range=(0,1))
scaler.fit(train)
train_scaled=scaler.transform(train)
x_test_scaled=scaler.transform(x_test)
timestep = 5
x_test = []
y_test = []
for i in range(timestep,x_test_scaled.shape[0]):
x_test.append(x_test_scaled[i-timestep:i,0])
y_test.append(x_test_scaled[i,0])
x_test,y_test=np.array(x_test),np.array(y_test)
x_test=x_test.reshape(x_test.shape[0],x_test.shape[1],1)
print("x_test shape= ",x_test.shape)
print("y_test shape= ",y_test.shape)
y_test=np.array(y_test)
y_test=y_test.reshape(len(y_test),1)
y_test=scaler.inverse_transform(y_test)
timestep = 5
x_train = []
y_train = []
for i in range(timestep,train_scaled.shape[0]):
x_train.append(train_scaled[i-timestep:i,0])
y_train.append(train_scaled[i,0])
x_train,y_train=np.array(x_train),np.array(y_train)
x_train=x_train.reshape(x_train.shape[0],x_train.shape[1],1)
print("x_train shape= ",x_train.shape)
print("y_train shape= ",y_train.shape)
x_train
y_train
df_scaled = scaler.transform(df)
df_scaled
model_srnn=Sequential()
model_srnn.add(SimpleRNN(128,activation="relu",return_sequences=True,input_shape=(x_train.shape[1],1)))
model_srnn.add(Dropout(0.25))
model_srnn.add(SimpleRNN(256,activation="relu",return_sequences=True))
model_srnn.add(Dropout(0.25))
model_srnn.add(Flatten())
model_srnn.add(Dense(1, activation='linear'))
model_srnn.compile(loss="mean_squared_error",optimizer="adam")
model_srnn.fit(x_train,y_train,epochs=10,batch_size=36)
predict_srnn=model_srnn.predict(x_test)
predict_srnn=scaler.inverse_transform(predict_srnn)
plt.figure(figsize=(8,4), dpi=80, facecolor='w', edgecolor='k')
plt.plot(y_test,color="r",label="True temperature")
plt.plot(predict_srnn,color="b",label="Model Prediction")
plt.legend()
plt.xlabel("HOURS")
plt.ylabel("TEMPERATURE")
plt.title("SimpleRNN Model")
plt.grid(True)
mse_srnn = mean_absolute_error(y_true=y_test, y_pred=predict_srnn)
mse_srnn
model_lstm=Sequential()
model_lstm.add(LSTM(128,input_shape=(x_train.shape[1],1),activation="relu",return_sequences=True))
model_lstm.add(Dropout(0.25))
model_lstm.add(LSTM(256,activation="relu",return_sequences=False))
model_lstm.add(Dropout(0.25))
model_lstm.add(Dense(1, activation='linear'))
model_lstm.compile(loss="mean_squared_error",optimizer="adam")
model_lstm.fit(x_train,y_train,epochs=10,batch_size=32)
predict_lstm=model_lstm.predict(x_test)
predict_lstm=scaler.inverse_transform(predict_lstm)
mse_lstm = mean_absolute_error(y_true=y_test, y_pred=predict_lstm)
mse_lstm
plt.figure(figsize=(8,4), dpi=80, facecolor='w', edgecolor='k')
plt.plot(y_test,color="r",label="True temperature")
plt.plot(predict_lstm,color="b",label="Model prediction")
plt.legend()
plt.xlabel("HOURS")
plt.ylabel("TEMPERATURE")
plt.title("LSTM Model")
plt.grid(True)
model_bi = Sequential()
model_bi.add(Bidirectional(SimpleRNN(128, return_sequences=True), input_shape=(x_train.shape[1],1)))
model_bi.add(Dropout(0.25))
model_bi.add(Activation('relu'))
model_bi.add(Bidirectional(SimpleRNN(256, return_sequences=False)))
model_bi.add(Dropout(0.25))
model_bi.add(Activation('relu'))
model_bi.add(Dense(1, activation='linear'))
model_bi.compile(loss='mean_squared_error', optimizer='adam')
model_bi.fit(x_train,y_train,epochs=10,batch_size=32)
predict_bi=model_bi.predict(x_test)
predict_bi=scaler.inverse_transform(predict_bi)
plt.figure(figsize=(8,4), dpi=80, facecolor='w', edgecolor='k')
plt.plot(y_test,color="r",label="True temperature")
plt.plot(predict_bi,color="b",label="Model Prediction")
plt.legend()
plt.xlabel("HOURS")
plt.ylabel("TEMPERATURE")
plt.title("Bidirectional Model with SimpleRNN")
plt.grid(True)
mae_bi = mean_absolute_error(y_true=y_test, y_pred=predict_bi)  # mean absolute error of the bidirectional model
mae_bi
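# Optional summary (a sketch, using the MAE values computed above) to make the
# comparison between the three recurrent models explicit:
print("SimpleRNN     MAE:", mae_srnn)
print("LSTM          MAE:", mae_lstm)
print("Bidirectional MAE:", mae_bi)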
# Quantum Process Tomography with Q# and Python #
## Abstract ##
In this sample, we will demonstrate interoperability between Q# and Python by using the QInfer and QuTiP libraries for Python to characterize and verify quantum processes implemented in Q#.
In particular, this sample will use *quantum process tomography* to learn about the behavior of a "noisy" Hadamard operation from the results of random Pauli measurements.
## Preamble ##
```
import warnings
warnings.simplefilter('ignore')
```
We can enable Q# support in Python by importing the `qsharp` package.
```
import qsharp
```
Once we do so, any Q# source files in the current working directory are compiled, and their namespaces are made available as Python modules.
For instance, the `Quantum.qs` source file provided with this sample implements a `HelloWorld` operation in the `Microsoft.Quantum.Samples.Python` Q# namespace:
```
with open('Quantum.qs') as f:
print(f.read())
```
We can import this `HelloWorld` operation as though it were an ordinary Python function by using the Q# namespace as a Python module:
```
from Microsoft.Quantum.Samples.Python import HelloWorld
HelloWorld
```
Once we've imported the new names, we can then ask our simulator to run each function and operation using the `simulate` method.
```
HelloWorld.simulate(pauli=qsharp.Pauli.Z)
```
## Tomography ##
The `qsharp` interoperability package also comes with a `single_qubit_process_tomography` function which uses the QInfer library for Python to learn the channels corresponding to single-qubit Q# operations.
```
from qsharp.tomography import single_qubit_process_tomography
```
Next, we import plotting support and the QuTiP library, since these will be helpful to us in manipulating the quantum objects returned by the quantum process tomography functionality that we call later.
```
%matplotlib inline
import matplotlib.pyplot as plt
import qutip as qt
qt.settings.colorblind_safe = True
```
To use this, we define a new operation that takes a preparation and a measurement, then returns the result of performing that tomographic measurement on the noisy Hadamard operation that we defined in `Quantum.qs`.
```
experiment = qsharp.compile("""
open Microsoft.Quantum.Samples.Python;
open Microsoft.Quantum.Characterization;
operation Experiment(prep : Pauli, meas : Pauli) : Result {
return SingleQubitProcessTomographyMeasurement(prep, meas, NoisyHadamardChannel(0.1));
}
""")
```
Here, we ask for 10,000 measurements from the noisy Hadamard operation that we defined above.
```
estimation_results = single_qubit_process_tomography(experiment, n_measurements=10000)
```
To visualize the results, it's helpful to compare to the actual channel, which we can find exactly in QuTiP.
```
depolarizing_channel = sum(map(qt.to_super, [qt.qeye(2), qt.sigmax(), qt.sigmay(), qt.sigmaz()])) / 4.0
actual_noisy_h = 0.1 * qt.to_choi(depolarizing_channel) + 0.9 * qt.to_choi(qt.hadamard_transform())
```
We then plot the estimated and actual channels as Hinton diagrams, showing how each acts on the Pauli operators $X$, $Y$ and $Z$.
```
fig, (left, right) = plt.subplots(ncols=2, figsize=(12, 4))
plt.sca(left)
plt.xlabel('Estimated', fontsize='x-large')
qt.visualization.hinton(estimation_results['est_channel'], ax=left)
plt.sca(right)
plt.xlabel('Actual', fontsize='x-large')
qt.visualization.hinton(actual_noisy_h, ax=right)
```
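Beyond the visual comparison, it can be useful to quantify how close the two channels are. The following is a minimal sketch, assuming `estimation_results['est_channel']` is returned as a QuTiP superoperator `Qobj`; it simply compares the two Choi matrices by their Frobenius distance.
```
import numpy as np
# Convert both channels to the Choi representation and compare them directly.
est_choi = qt.to_choi(estimation_results['est_channel'])
act_choi = qt.to_choi(actual_noisy_h)
# Frobenius norm of the difference; values near zero indicate close agreement.
print('Frobenius distance:', np.linalg.norm((est_choi - act_choi).full()))
```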
We also obtain a wealth of other information, such as the covariance matrix over each parameter of the resulting channel.
This shows us which parameters we are least certain about, as well as how those parameters are correlated with each other.
```
plt.figure(figsize=(10, 10))
estimation_results['posterior'].plot_covariance()
plt.xticks(rotation=90)
```
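If you prefer raw numbers to the plot, the posterior can also report its summary statistics directly. This is a small sketch assuming `estimation_results['posterior']` is a QInfer `SMCUpdater`:
```
# Posterior mean and covariance over the channel parameters.
print(estimation_results['posterior'].est_mean())
print(estimation_results['posterior'].est_covariance_mtx())
```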
## Diagnostics ##
```
for component, version in sorted(qsharp.component_versions().items(), key=lambda x: x[0]):
print(f"{component:20}{version}")
import sys
print(sys.version)
```
# Register and visualize dataset
### Introduction
In this lab you will ingest and transform the customer product reviews dataset. Then you will use AWS data stack services such as AWS Glue and Amazon Athena to ingest and query the dataset. Finally, you will use AWS Data Wrangler to analyze the dataset and plot visualizations that extract insights.
### Table of Contents
- [1. Ingest and transform the public dataset](#c1w1-1.)
- [1.1. List the dataset files in the public S3 bucket](#c1w1-1.1.)
- [Exercise 1](#c1w1-ex-1)
- [1.2. Copy the data locally to the notebook](#c1w1-1.2.)
- [1.3. Transform the data](#c1w1-1.3.)
- [1.4 Write the data to a CSV file](#c1w1-1.4.)
- [2. Register the public dataset for querying and visualizing](#c1w1-2.)
- [2.1. Register S3 dataset files as a table for querying](#c1w1-2.1.)
- [Exercise 2](#c1w1-ex-2)
- [2.2. Create default S3 bucket for Amazon Athena](#c1w1-2.2.)
- [3. Visualize data](#c1w1-3.)
- [3.1. Preparation for data visualization](#c1w1-3.1.)
- [3.2. How many reviews per sentiment?](#c1w1-3.2.)
- [Exercise 3](#c1w1-ex-3)
- [3.3. Which product categories are highest rated by average sentiment?](#c1w1-3.3.)
- [3.4. Which product categories have the most reviews?](#c1w1-3.4.)
- [Exercise 4](#c1w1-ex-4)
- [3.5. What is the breakdown of sentiments per product category?](#c1w1-3.5.)
- [3.6. Analyze the distribution of review word counts](#c1w1-3.6.)
Let's install the required modules first.
```
# please ignore warning messages during the installation
!pip install --disable-pip-version-check -q sagemaker==2.35.0
!pip install --disable-pip-version-check -q pandas==1.1.4
!pip install --disable-pip-version-check -q awswrangler==2.7.0
!pip install --disable-pip-version-check -q numpy==1.18.5
!pip install --disable-pip-version-check -q seaborn==0.11.0
!pip install --disable-pip-version-check -q matplotlib===3.3.3
```
<a name='c1w1-1.'></a>
# 1. Ingest and transform the public dataset
The dataset [Women's Clothing Reviews](https://www.kaggle.com/nicapotato/womens-ecommerce-clothing-reviews) has been chosen as the main dataset.
It is shared in a public Amazon S3 bucket, and is available as a comma-separated value (CSV) text format:
`s3://dlai-practical-data-science/data/raw/womens_clothing_ecommerce_reviews.csv`
<a name='c1w1-1.1.'></a>
### 1.1. List the dataset files in the public S3 bucket
The [AWS Command Line Interface (CLI)](https://awscli.amazonaws.com/v2/documentation/api/latest/index.html) is a unified tool to manage your AWS services. With just one tool, you can control multiple AWS services from the command line and automate them through scripts. You will use it to list the dataset files.
**View dataset files in CSV format**
The ```aws s3 ls [bucket_name]``` command lists all objects in the S3 bucket. Let's use it to view the reviews data files in CSV format:
<a name='c1w1-ex-1'></a>
### Exercise 1
View the list of the files available in the public bucket `s3://dlai-practical-data-science/data/raw/`.
**Instructions**:
Use the `aws s3 ls [bucket_name]` command. To run an AWS CLI command from the notebook you will need to put an exclamation mark in front of it: `!aws`. You should see the data file `womens_clothing_ecommerce_reviews.csv` in the list.
```
### BEGIN SOLUTION - DO NOT delete this comment for grading purposes
!aws s3 ls s3://dlai-practical-data-science/data/raw/ # Replace None
### END SOLUTION - DO NOT delete this comment for grading purposes
# EXPECTED OUTPUT
# ... womens_clothing_ecommerce_reviews.csv
```
<a name='c1w1-1.2.'></a>
### 1.2. Copy the data locally to the notebook
The ```aws s3 cp [bucket_name/file_name] [file_name]``` command copies a file from the S3 bucket into the local environment or into another S3 bucket. Let's use it to copy the dataset file locally.
```
!aws s3 cp s3://dlai-practical-data-science/data/raw/womens_clothing_ecommerce_reviews.csv ./womens_clothing_ecommerce_reviews.csv
```
Now use the Pandas dataframe to load and preview the data.
```
import pandas as pd
import csv
df = pd.read_csv('./womens_clothing_ecommerce_reviews.csv',
index_col=0)
df.shape
df.head()
```
<a name='c1w1-1.3.'></a>
### 1.3. Transform the data
To simplify the task, you will transform the data into a comma-separated value (CSV) file that contains only a `review_body`, `product_category`, and `sentiment` derived from the original data.
```
df_transformed = df.rename(columns={'Review Text': 'review_body',
'Rating': 'star_rating',
'Class Name': 'product_category'})
df_transformed.drop(columns=['Clothing ID', 'Age', 'Title', 'Recommended IND', 'Positive Feedback Count', 'Division Name', 'Department Name'],
inplace=True)
df_transformed.dropna(inplace=True)
df_transformed.shape
```
Now convert the `star_rating` into the `sentiment` (positive, neutral, negative), which will later be used as the prediction target.
```
def to_sentiment(star_rating):
if star_rating in {1, 2}: # negative
return -1
if star_rating == 3: # neutral
return 0
if star_rating in {4, 5}: # positive
return 1
# transform star_rating into the sentiment
df_transformed['sentiment'] = df_transformed['star_rating'].apply(lambda star_rating:
to_sentiment(star_rating=star_rating)
)
# drop the star rating column
df_transformed.drop(columns=['star_rating'],
inplace=True)
# remove reviews for product_categories with < 10 reviews
df_transformed = df_transformed.groupby('product_category').filter(lambda reviews : len(reviews) > 10)[['sentiment', 'review_body', 'product_category']]
df_transformed.shape
# preview the results
df_transformed.head()
```
<a name='c1w1-1.4.'></a>
### 1.4 Write the data to a CSV file
```
df_transformed.to_csv('./womens_clothing_ecommerce_reviews_transformed.csv',
index=False)
!head -n 5 ./womens_clothing_ecommerce_reviews_transformed.csv
```
<a name='c1w1-2.'></a>
# 2. Register the public dataset for querying and visualizing
You will register the public dataset into an S3-backed database table so you can query and visualize our dataset at scale.
<a name='c1w1-2.1.'></a>
### 2.1. Register S3 dataset files as a table for querying
Let's import required modules.
`boto3` is the AWS SDK for Python to create, configure, and manage AWS services, such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3). The SDK provides an object-oriented API as well as low-level access to AWS services.
`sagemaker` is the SageMaker Python SDK which provides several high-level abstractions for working with the Amazon SageMaker.
```
import boto3
import sagemaker
import pandas as pd
import numpy as np
import botocore
config = botocore.config.Config(user_agent_extra='dlai-pds/c1/w1')
# low-level service client of the boto3 session
sm = boto3.client(service_name='sagemaker',
config=config)
sess = sagemaker.Session(sagemaker_client=sm)
bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
region = sess.boto_region_name
account_id = sess.account_id()
print('S3 Bucket: {}'.format(bucket))
print('Region: {}'.format(region))
print('Account ID: {}'.format(account_id))
```
Review the empty bucket which was created automatically for this account.
**Instructions**:
- open the link
- click on the S3 bucket name `sagemaker-us-east-1-ACCOUNT`
- check that it is empty at this stage
```
from IPython.core.display import display, HTML
display(HTML('<b>Review <a target="top" href="https://s3.console.aws.amazon.com/s3/home?region={}#">Amazon S3 buckets</a></b>'.format(region)))
```
Copy the file into the S3 bucket.
```
!aws s3 cp ./womens_clothing_ecommerce_reviews_transformed.csv s3://$bucket/data/transformed/womens_clothing_ecommerce_reviews_transformed.csv
```
Review the bucket with the file we uploaded above.
**Instructions**:
- open the link
- check that the CSV file is located in the S3 bucket
- check the location directory structure is the same as in the CLI command above
- click on the file name and see the available information about the file (region, size, S3 URI, Amazon Resource Name (ARN))
```
from IPython.core.display import display, HTML
display(HTML('<b>Review <a target="top" href="https://s3.console.aws.amazon.com/s3/buckets/{}?region={}&prefix=data/transformed/#">Amazon S3 buckets</a></b>'.format(bucket, region)))
```
**Import AWS Data Wrangler**
[AWS Data Wrangler](https://github.com/awslabs/aws-data-wrangler) is an AWS Professional Service open source python initiative that extends the power of Pandas library to AWS connecting dataframes and AWS data related services (Amazon Redshift, AWS Glue, Amazon Athena, Amazon EMR, Amazon QuickSight, etc).
Built on top of other open-source projects like Pandas, Apache Arrow, Boto3, SQLAlchemy, Psycopg2 and PyMySQL, it offers abstracted functions to execute usual ETL tasks like load/unload data from data lakes, data warehouses and databases.
Review the AWS Data Wrangler documentation: https://aws-data-wrangler.readthedocs.io/en/stable/
```
import awswrangler as wr
```
**Create AWS Glue Catalog database**
The data catalog features of **AWS Glue** and the inbuilt integration to Amazon S3 simplify the process of identifying data and deriving the schema definition out of the discovered data. Using AWS Glue crawlers within your data catalog, you can traverse your data stored in Amazon S3 and build out the metadata tables that are defined in your data catalog.
Here you will use `wr.catalog.create_database` function to create a database with the name `dsoaws_deep_learning` ("dsoaws" stands for "Data Science on AWS").
```
wr.catalog.create_database(
name='dsoaws_deep_learning',
exist_ok=True
)
dbs = wr.catalog.get_databases()
for db in dbs:
print("Database name: " + db['Name'])
```
Review the created database in the AWS Glue Catalog.
**Instructions**:
- open the link
- on the left side panel notice that you are in the AWS Glue -> Data Catalog -> Databases
- check that the database `dsoaws_deep_learning` has been created
- click on the name of the database
- click on the `Tables in dsoaws_deep_learning` link to see that there are no tables
```
from IPython.core.display import display, HTML
display(HTML('<b>Review <a target="top" href="https://console.aws.amazon.com/glue/home?region={}#catalog:tab=databases">AWS Glue Databases</a></b>'.format(region)))
```
**Register CSV data with AWS Glue Catalog**
<a name='c1w1-ex-2'></a>
### Exercise 2
Register CSV data with AWS Glue Catalog.
**Instructions**:
Use ```wr.catalog.create_csv_table``` function with the following parameters
```python
res = wr.catalog.create_csv_table(
database='...', # AWS Glue Catalog database name
path='s3://{}/data/transformed/'.format(bucket), # S3 object path for the data
table='reviews', # registered table name
columns_types={
'sentiment': 'int',
'review_body': 'string',
'product_category': 'string'
},
mode='overwrite',
skip_header_line_count=1,
sep=','
)
```
```
wr.catalog.create_csv_table(
### BEGIN SOLUTION - DO NOT delete this comment for grading purposes
database='dsoaws_deep_learning', # Replace None
### END SOLUTION - DO NOT delete this comment for grading purposes
path='s3://{}/data/transformed/'.format(bucket),
table="reviews",
columns_types={
'sentiment': 'int',
'review_body': 'string',
'product_category': 'string'
},
mode='overwrite',
skip_header_line_count=1,
sep=','
)
```
Review the registered table in the AWS Glue Catalog.
**Instructions**:
- open the link
- on the left side panel notice that you are in the AWS Glue -> Data Catalog -> Databases -> Tables
- check that you can see the table `reviews` from the database `dsoaws_deep_learning` in the list
- click on the name of the table
- explore the available information about the table (name, database, classification, location, schema etc.)
```
from IPython.core.display import display, HTML
display(HTML('<b>Review <a target="top" href="https://console.aws.amazon.com/glue/home?region={}#">AWS Glue Catalog</a></b>'.format(region)))
```
Review the table shape:
```
table = wr.catalog.table(database='dsoaws_deep_learning',
table='reviews')
table
```
<a name='c1w1-2.2.'></a>
### 2.2. Create default S3 bucket for Amazon Athena
Amazon Athena requires this S3 bucket to store temporary query results and improve performance of subsequent queries.
The contents of this bucket are mostly binary and human-unreadable.
```
# S3 bucket name
wr.athena.create_athena_bucket()
# EXPECTED OUTPUT
# 's3://aws-athena-query-results-ACCOUNT-REGION/'
```
<a name='c1w1-3.'></a>
# 3. Visualize data
**Reviews dataset - column descriptions**
- `sentiment`: The review's sentiment (-1, 0, 1).
- `product_category`: Broad product category that can be used to group reviews (in this case women's clothing categories such as Dresses, Knits, and Blouses).
- `review_body`: The text of the review.
<a name='c1w1-3.1.'></a>
### 3.1. Preparation for data visualization
**Imports**
```
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format='retina'
```
**Settings**
Set AWS Glue database and table name.
```
# Do not change the database and table names - they are used for grading purposes!
database_name = 'dsoaws_deep_learning'
table_name = 'reviews'
```
Set seaborn parameters. You can review seaborn documentation following the [link](https://seaborn.pydata.org/index.html).
```
sns.set_style('whitegrid')
sns.set(rc={"font.style":"normal",
"axes.facecolor":"white",
'grid.color': '.8',
'grid.linestyle': '-',
"figure.facecolor":"white",
"figure.titlesize":20,
"text.color":"black",
"xtick.color":"black",
"ytick.color":"black",
"axes.labelcolor":"black",
"axes.grid":True,
'axes.labelsize':10,
'xtick.labelsize':10,
'font.size':10,
'ytick.labelsize':10})
```
Helper code to display values on barplots will be defined later, in the visualization cells.
**Run SQL queries using Amazon Athena**
**Amazon Athena** lets you query data in Amazon S3 using a standard SQL interface. It reflects the databases and tables in the AWS Glue Catalog. You can create interactive queries and perform any data manipulations required for further downstream processing.
Standard SQL query can be saved as a string and then passed as a parameter into the Athena query. Run the following cells as an example to count the total number of reviews by sentiment. The SQL query here will take the following form:
```sql
SELECT column_name, COUNT(column_name) as new_column_name
FROM table_name
GROUP BY column_name
ORDER BY column_name
```
If you are not familiar with the SQL query statements, you can review some tutorials following the [link](https://www.w3schools.com/sql/default.asp).
<a name='c1w1-3.2.'></a>
### 3.2. How many reviews per sentiment?
Set the SQL statement to find the count of sentiments:
```
statement_count_by_sentiment = """
SELECT sentiment, COUNT(sentiment) AS count_sentiment
FROM reviews
GROUP BY sentiment
ORDER BY sentiment
"""
print(statement_count_by_sentiment)
```
Query data in Amazon Athena database cluster using the prepared SQL statement:
```
df_count_by_sentiment = wr.athena.read_sql_query(
sql=statement_count_by_sentiment,
database=database_name
)
print(df_count_by_sentiment)
```
Preview the results of the query:
```
df_count_by_sentiment.plot(kind='bar', x='sentiment', y='count_sentiment', rot=0)
```
<a name='c1w1-ex-3'></a>
### Exercise 3
Use Amazon Athena query with the standard SQL statement passed as a parameter, to calculate the total number of reviews per `product_category` in the table ```reviews```.
**Instructions**: Pass the SQL statement of the form
```sql
SELECT category_column, COUNT(column_name) AS new_column_name
FROM table_name
GROUP BY category_column
ORDER BY new_column_name DESC
```
as a triple quote string into the variable `statement_count_by_category`. Please use the column `sentiment` in the `COUNT` function and give it a new name `count_sentiment`.
```
# Replace all None
### BEGIN SOLUTION - DO NOT delete this comment for grading purposes
statement_count_by_category = """
SELECT product_category, COUNT(sentiment) AS count_sentiment
FROM reviews
GROUP BY product_category
ORDER BY count_sentiment DESC
"""
### END SOLUTION - DO NOT delete this comment for grading purposes
print(statement_count_by_category)
```
Query data in Amazon Athena database passing the prepared SQL statement:
```
%%time
df_count_by_category = wr.athena.read_sql_query(
sql=statement_count_by_category,
database=database_name
)
df_count_by_category
# EXPECTED OUTPUT
# Dresses: 6145
# Knits: 4626
# Blouses: 2983
# Sweaters: 1380
# Pants: 1350
# ...
```
<a name='c1w1-3.3.'></a>
### 3.3. Which product categories are highest rated by average sentiment?
Set the SQL statement to find the average sentiment per product category, showing the results in the descending order:
```
statement_avg_by_category = """
SELECT product_category, AVG(sentiment) AS avg_sentiment
FROM {}
GROUP BY product_category
ORDER BY avg_sentiment DESC
""".format(table_name)
print(statement_avg_by_category)
```
Query data in Amazon Athena database passing the prepared SQL statement:
```
%%time
df_avg_by_category = wr.athena.read_sql_query(
sql=statement_avg_by_category,
database=database_name
)
```
Preview the query results in the temporary S3 bucket: `s3://aws-athena-query-results-ACCOUNT-REGION/`
**Instructions**:
- open the link
- check the name of the S3 bucket
- briefly check the content of it
```
from IPython.core.display import display, HTML
display(HTML('<b>Review <a target="top" href="https://s3.console.aws.amazon.com/s3/buckets/aws-athena-query-results-{}-{}?region={}">Amazon S3 buckets</a></b>'.format(account_id, region, region)))
```
Preview the results of the query:
```
df_avg_by_category
```
**Visualization**
```
def show_values_barplot(axs, space):
def _show_on_plot(ax):
for p in ax.patches:
_x = p.get_x() + p.get_width() + float(space)
_y = p.get_y() + p.get_height()
value = round(float(p.get_width()),2)
ax.text(_x, _y, value, ha="left")
if isinstance(axs, np.ndarray):
for idx, ax in np.ndenumerate(axs):
_show_on_plot(ax)
else:
_show_on_plot(axs)
# Create plot
barplot = sns.barplot(
data = df_avg_by_category,
y='product_category',
x='avg_sentiment',
color="b",
saturation=1
)
# Set the size of the figure
sns.set(rc={'figure.figsize':(15.0, 10.0)})
# Set title and x-axis ticks
plt.title('Average sentiment by product category')
#plt.xticks([-1, 0, 1], ['Negative', 'Neutral', 'Positive'])
# Helper code to show actual values afters bars
show_values_barplot(barplot, 0.1)
plt.xlabel("Average sentiment")
plt.ylabel("Product category")
plt.tight_layout()
# Do not change the figure name - it is used for grading purposes!
plt.savefig('avg_sentiment_per_category.png', dpi=300)
# Show graphic
plt.show(barplot)
# Upload image to S3 bucket
sess.upload_data(path='avg_sentiment_per_category.png', bucket=bucket, key_prefix="images")
```
Review the bucket on the account.
**Instructions**:
- open the link
- click on the S3 bucket name `sagemaker-us-east-1-ACCOUNT`
- open the images folder
- check the existence of the image `avg_sentiment_per_category.png`
- if you click on the image name, you can see the information about the image file. You can also download the file with the command on the top right Object Actions -> Download / Download as
<img src="images/download_image_file.png" width="100%">
```
from IPython.core.display import display, HTML
display(HTML('<b>Review <a target="top" href="https://s3.console.aws.amazon.com/s3/home?region={}">Amazon S3 buckets</a></b>'.format(region)))
```
<a name='c1w1-3.4.'></a>
### 3.4. Which product categories have the most reviews?
Set the SQL statement to find the count of sentiment per product category, showing the results in the descending order:
```
statement_count_by_category_desc = """
SELECT product_category, COUNT(*) AS count_reviews
FROM {}
GROUP BY product_category
ORDER BY count_reviews DESC
""".format(table_name)
print(statement_count_by_category_desc)
```
Query data in Amazon Athena database passing the prepared SQL statement:
```
%%time
df_count_by_category_desc = wr.athena.read_sql_query(
sql=statement_count_by_category_desc,
database=database_name
)
```
Store the maximum number of reviews in a single category for the visualization plot:
```
max_sentiment = df_count_by_category_desc['count_reviews'].max()
print('Highest number of reviews (in a single category): {}'.format(max_sentiment))
df_count_by_category_desc
```
**Visualization**
<a name='c1w1-ex-4'></a>
### Exercise 4
Use `barplot` function to plot number of reviews per product category.
**Instructions**: Use the `barplot` chart example in the previous section, passing the newly defined dataframe `df_count_by_category_desc` with the count of reviews. Here, please put the `product_category` column into the `y` argument.
```
# Create seaborn barplot
barplot = sns.barplot(
### BEGIN SOLUTION - DO NOT delete this comment for grading purposes
data=df_count_by_category_desc, # Replace None
y='product_category', # Replace None
x='count_reviews', # Replace None
### END SOLUTION - DO NOT delete this comment for grading purposes
color="b",
saturation=1
)
# Set the size of the figure
sns.set(rc={'figure.figsize':(15.0, 10.0)})
# Set title
plt.title("Number of reviews per product category")
plt.xlabel("Number of reviews")
plt.ylabel("Product category")
plt.tight_layout()
# Do not change the figure name - it is used for grading purposes!
plt.savefig('num_reviews_per_category.png', dpi=300)
# Show the barplot
plt.show(barplot)
# Upload image to S3 bucket
sess.upload_data(path='num_reviews_per_category.png', bucket=bucket, key_prefix="images")
```
<a name='c1w1-3.5.'></a>
### 3.5. What is the breakdown of sentiments per product category?
Set the SQL statement to find the count of sentiment per product category and sentiment:
```
statement_count_by_category_and_sentiment = """
SELECT product_category,
sentiment,
COUNT(*) AS count_reviews
FROM {}
GROUP BY product_category, sentiment
ORDER BY product_category ASC, sentiment DESC, count_reviews
""".format(table_name)
print(statement_count_by_category_and_sentiment)
```
Query data in Amazon Athena database passing the prepared SQL statement:
```
%%time
df_count_by_category_and_sentiment = wr.athena.read_sql_query(
sql=statement_count_by_category_and_sentiment,
database=database_name
)
```
Prepare for stacked percentage horizontal bar plot showing proportion of sentiments per product category.
```
# Create grouped dataframes by category and by sentiment
grouped_category = df_count_by_category_and_sentiment.groupby('product_category')
grouped_star = df_count_by_category_and_sentiment.groupby('sentiment')
# Create sum of sentiments per star sentiment
df_sum = df_count_by_category_and_sentiment.groupby(['sentiment']).sum()
# Calculate total number of sentiments
total = df_sum['count_reviews'].sum()
print('Total number of reviews: {}'.format(total))
```
Create a dictionary mapping each product category to the array of its sentiment counts.
```
distribution = {}
count_reviews_per_star = []
i=0
for category, sentiments in grouped_category:
count_reviews_per_star = []
for star in sentiments['sentiment']:
count_reviews_per_star.append(sentiments.at[i, 'count_reviews'])
i=i+1;
distribution[category] = count_reviews_per_star
```
Build the percentage breakdown of sentiments across all categories.
```
distribution
df_distribution_pct = pd.DataFrame(distribution).transpose().apply(
lambda num_sentiments: num_sentiments/sum(num_sentiments)*100, axis=1
)
df_distribution_pct.columns=['1', '0', '-1']
df_distribution_pct
```
**Visualization**
Plot the distributions of sentiments per product category.
```
categories = df_distribution_pct.index
# Plot bars
plt.figure(figsize=(10,5))
df_distribution_pct.plot(kind="barh",
stacked=True,
edgecolor='white',
width=1.0,
color=['green',
'orange',
'blue'])
plt.title("Distribution of reviews per sentiment per category",
fontsize='16')
plt.legend(bbox_to_anchor=(1.04,1),
loc="upper left",
labels=['Positive',
'Neutral',
'Negative'])
plt.xlabel("% Breakdown of sentiments", fontsize='14')
plt.gca().invert_yaxis()
plt.tight_layout()
# Do not change the figure name - it is used for grading purposes!
plt.savefig('distribution_sentiment_per_category.png', dpi=300)
plt.show()
# Upload image to S3 bucket
sess.upload_data(path='distribution_sentiment_per_category.png', bucket=bucket, key_prefix="images")
```
<a name='c1w1-3.6.'></a>
### 3.6. Analyze the distribution of review word counts
Set the SQL statement to count the number of the words in each of the reviews:
```
statement_num_words = """
SELECT CARDINALITY(SPLIT(review_body, ' ')) as num_words
FROM {}
""".format(table_name)
print(statement_num_words)
```
Query data in Amazon Athena database passing the SQL statement:
```
%%time
df_num_words = wr.athena.read_sql_query(
sql=statement_num_words,
database=database_name
)
```
Print out and analyse some descriptive statistics:
```
summary = df_num_words["num_words"].describe(percentiles=[0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, 1.00])
summary
```
Plot the distribution of the number of words per review:
```
df_num_words["num_words"].plot.hist(xticks=[0, 16, 32, 64, 128, 256], bins=100, range=[0, 256]).axvline(
x=summary["100%"], c="red"
)
plt.xlabel("Words number", fontsize='14')
plt.ylabel("Frequency", fontsize='14')
plt.savefig('distribution_num_words_per_review.png', dpi=300)
plt.show()
# Upload image to S3 bucket
sess.upload_data(path='distribution_num_words_per_review.png', bucket=bucket, key_prefix="images")
```
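These percentiles are often used to choose a maximum sequence length for downstream text models. A minimal sketch (the 90th percentile is just an illustrative choice):
```
# Illustrative only: derive a padding/truncation length from the summary above.
max_seq_length = int(summary["90%"])
print('Suggested maximum sequence length: {}'.format(max_seq_length))
```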
Upload the notebook into S3 bucket for grading purposes.
**Note**: you may need to click on "Save" button before the upload.
```
!aws s3 cp ./C1_W1_Assignment.ipynb s3://$bucket/C1_W1_Assignment_Learner.ipynb
```
Please go to the main lab window and click on `Submit` button (see the `Finish the lab` section of the instructions).
# CDFS-SWIRE master catalogue: Flags
```
import numpy as np
from astropy.table import Table
import itertools
from herschelhelp_internal.flagging import flag_outliers
SUFFIX = "20180613"
FIELD = "CDFS-SWIRE"
catname = "../../dmu1/dmu1_ml_CDFS-SWIRE/data/master_catalogue_cdfs-swire_{}.fits".format(SUFFIX)
master_catalogue = Table.read(catname)
u_bands = ["OmegaCAM u", "WFI u"]
g_bands = ["OmegaCAM g", "GPC1 g", "DECam g"]
r_bands = ["OmegaCAM r", "WFI r", "GPC1 r", "DECam r"]
i_bands = ["OmegaCAM i", "WFI i", "GPC1 i", "DECam i"]
z_bands = ["OmegaCAM z", "GPC1 z", "DECam z", "VISTA z"]
y_bands = [ "GPC1 y", "DECam y", "VISTA y"]
all_bands = [u_bands, g_bands, r_bands, i_bands, z_bands, y_bands]
```
## 1. Magnitudes and magnitude errors
```
def flag_mag(flagcol, mask):
# Add flag columns if does not exist
if flagcol not in master_catalogue.colnames:
master_catalogue[flagcol] = np.zeros(len(master_catalogue), dtype=bool)
# Flagged
master_catalogue[flagcol][mask] = np.ones(len(mask), dtype=bool)
print(' Number of flagged objects:', len(master_catalogue[flagcol][mask]))
```
### 1.a Pan-STARRS Aperture and Total magnitude errors
```
## dmu0: Pan-STARRS stack cat
gpc1_err = 0.05000000074505806
bands = ["GPC1 g", "GPC1 r", "GPC1 i", "GPC1 z", "GPC1 y"]
for i, band in enumerate(bands):
print(band)
basecol = band.replace(" ", "_").lower()
ecol_ap, ecol_tot = "merr_ap_{}".format(basecol), "merr_{}".format(basecol)
flagcol_ap, flagcol_tot = "flag_ap_{}".format(basecol), "flag_{}".format(basecol)
mask_ap = np.where(master_catalogue[ecol_ap] == gpc1_err)[0]
mask_tot = np.where(master_catalogue[ecol_tot] == gpc1_err)[0]
print(' Aperture magnitude')
flag_mag(flagcol_ap, mask_ap)
print(' Total magnitude')
flag_mag(flagcol_tot, mask_tot)
```
### 1.b IRAC Aperture magnitudes
```
irac_mag = 3.9000000001085695
bands = ["IRAC i1", "IRAC i2", "IRAC i3", "IRAC i4"]
for i, band in enumerate(bands):
print(band)
basecol = band.replace(" ", "_").lower()
ecol_ap = "merr_ap_{}".format(basecol)
flagcol_ap = "flag_{}_ap".format(basecol)
mask_ap = np.where(master_catalogue[ecol_ap] == irac_mag)[0]
print(' Aperture magnitude')
flag_mag(flagcol_ap, mask_ap)
```
## 2. Outliers
```
for band_of_a_kind in all_bands:
for band1, band2 in itertools.combinations(band_of_a_kind, 2):
basecol1, basecol2 = band1.replace(" ", "_").lower(), band2.replace(" ", "_").lower()
# Aperture mag
try:
col1, col2 = "m_ap_{}".format(basecol1), "m_ap_{}".format(basecol2)
ecol1, ecol2 = "merr_ap_{}".format(basecol1), "merr_ap_{}".format(basecol2)
flagcol1, flagcol2 = "flag_{}_ap".format(basecol1), "flag_{}_ap".format(basecol2)
flag_outliers(master_catalogue[col1], master_catalogue[col2],
master_catalogue[ecol1], master_catalogue[ecol2],
master_catalogue[flagcol1], master_catalogue[flagcol2],
labels=("{} (aperture)".format(band1), "{} (aperture)".format(band2)))
except KeyError:
print("Probably no aperture mag on {} or {}".format(col1, col2))
try:
# Tot mag
col1, col2 = "m_{}".format(basecol1), "m_{}".format(basecol2)
ecol1, ecol2 = "merr_{}".format(basecol1), "merr_{}".format(basecol2)
flagcol1, flagcol2 = "flag_{}".format(basecol1), "flag_{}".format(basecol2)
flag_outliers(master_catalogue[col1], master_catalogue[col2],
master_catalogue[ecol1], master_catalogue[ecol2],
master_catalogue[flagcol1], master_catalogue[flagcol2],
labels=("{} (total)".format(band1), "{} (total)".format(band2)))
except KeyError:
print("Probably no aperture mag on {} or {}".format(col1, col2))
```
## 3. Save table
```
# Merge any aperture flags
for col in master_catalogue.colnames:
if col.startswith("flag_ap_"):
try:
master_catalogue[col.replace("_ap_", "_")] = (master_catalogue[col.replace("_ap_", "_")] |
master_catalogue[col])
master_catalogue.remove_column(col)
except KeyError:
print("{} only has aperture flags.".format(col))
master_catalogue.rename_column(col, col.replace("_ap_", "_"))
flag_cols = ["help_id"]
for col in master_catalogue.colnames:
if col.startswith("flag_"):
flag_cols += [col]
new_catname = "./data/{}_{}_flags.fits".format(FIELD.lower(),SUFFIX)
master_catalogue[flag_cols].write(new_catname, overwrite = True)
```
```
from robust_gcn_structure.certification import certify
from robust_gcn_structure.utils import load_npz
from matplotlib import pyplot as plt
import torch
dataset = "citeseer"
robust_gcn = False # Whether to load weights for a GCN trained with the approach by [Zügner and Günnemann 2019]
local_budget = 3
global_budget = 5
target_node = 3311
eval_class = None #0
solver = "ECOS"
max_iters = 250
tolerance = 1e-2
kwargs = {
'tolerance': tolerance,
'max_iter': max_iters
}
A, X, z = load_npz(f'../datasets/{dataset}.npz')
A = A + A.T
A[A > 1] = 1
A.setdiag(0)
X = (X>0).astype("float32")
z = z.astype("int64")
N, D = X.shape
weight_path = f"../pretrained_weights/{dataset}"
if robust_gcn:
weight_path = f"{weight_path}_robust_gcn.pkl"
else:
weight_path = f"{weight_path}_gcn.pkl"
state_dict = torch.load(weight_path, map_location="cpu")
weights = [v for k,v in state_dict.items() if "weight" in k and "conv" in k]
biases = [v for k,v in state_dict.items() if "bias" in k and "conv" in k]
W1, W2 = [w.cpu().detach().numpy() for w in weights]
b1, b2 = [b.cpu().detach().numpy() for b in biases]
shapes = [x.shape[0] for x in biases]
num_hidden = len(shapes) - 1
if num_hidden > 1:
raise NotImplementedError("Only one hidden layer is supported.")
weight_list = [W1, b1, W2, b2]
# info_dict = {}
results = certify(target_node, A, X, weight_list, z,
local_changes=local_budget,
global_changes=global_budget,
solver=solver, eval_class=eval_class,
use_predicted_class=True,
# info_dict=info_dict,
**kwargs)
import torch as th
import numpy as np
from torch.nn import Linear  # Linear layers hold the GCN weight matrices used below
def gcn_forward(A_hat, X, weights, i=None):
W1, b1, W2, b2 = weights
l1 = Linear(W1.shape[0], W1.shape[1], bias=True)
l2 = Linear(W2.shape[0], W2.shape[1], bias=True)
abs_ahat = Linear(A_hat.shape[0], A_hat.shape[1], bias=False)
W1 = th.from_numpy(W1)
b1 = th.from_numpy(b1)
W2 = th.from_numpy(W2)
b2 = th.from_numpy(b2)
A_hat = A_hat.tocoo()
A_hat = th.sparse.DoubleTensor(th.LongTensor([A_hat.row.tolist(), A_hat.col.tolist()]),
th.DoubleTensor(A_hat.data.astype(np.int32)))
l1.weight.data = W1
l1.bias.data = b1
l2.weight.data = W2
l2.bias.data = b2
abs_ahat.weight.data = A_hat
l1_out = th.relu(abs_ahat(l1(X)))
logits = abs_ahat(l2(l1_out))
if i is not None:
logits = logits[i]
return logits
gcn_forward(A, X, weight_list)
results
if results['robust'] == True:
print(f"Robustness for node {target_node} and class {eval_class} successfully certified.")
else:
print(f"Robustness for node {target_node} and class {eval_class} could not be certified.")
plt.plot(results['best_lowers'], label="lower bound")
plt.plot(results['best_uppers'], label="upper bound")
plt.plot((0,len(results['best_uppers'])-1), (0,0), color="black", linestyle="--")
plt.legend()
plt.show()
```
```
import os
import time
import json
%reload_ext e2emlstorlets.tools.ipython
os.environ['OS_AUTH_VERSION'] = '3'
os.environ['OS_AUTH_URL'] = 'http://127.0.0.1:5000/v3'
os.environ['OS_USERNAME'] = 'tester'
os.environ['OS_PASSWORD'] = 'testing'
os.environ['OS_USER_DOMAIN_NAME'] = 'default'
os.environ['OS_PROJECT_DOMAIN_NAME'] = 'default'
os.environ['OS_PROJECT_NAME'] = 'test'
%%storletapp extract_face.ExtractFace
import cv2
import numpy as np
def detect(im):
mat=cv2.imdecode(im, cv2.IMREAD_GRAYSCALE)
cascade = cv2.CascadeClassifier("/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt2.xml")
rects = cascade.detectMultiScale(mat)
    if len(rects) == 0:
        # No face found: keep the (image, rect) return order consistent with the normal case.
        return mat, None
rects[:, 2:] += rects[:, :2]
rect = rects[0]
return mat, rect
def crop(img, rect):
h = rect[3]-rect[1]
w = rect[2]-rect[0]
x = rect[0]
y = rect[1]
hm = int(0.1 * h)
if y < hm:
h = h + (hm - y)
hm = y
return img[y-hm:y+h, x:x+w]
class ExtractFace(object):
def __init__(self, logger):
self.logger = logger
def __call__(self, in_files, out_files, params):
metadata = in_files[0].get_metadata()
out_files[0].set_metadata(metadata)
# Read the image
        img_bytes = b''
        while True:
            buf = in_files[0].read(1024)
            if not buf:
                break
            img_bytes += buf
        img_nparr = np.frombuffer(img_bytes, np.uint8)
# Detect face
mat, rect = detect(img_nparr)
# Crop the face and decrease resolution
face = crop(mat, rect)
small_face = cv2.resize(face, (50,55))
# Write result
retval, small_face_buf = cv2.imencode('.jpg', small_face)
out_files[0].write(small_face_buf)
in_files[0].close()
out_files[0].close()
self.logger.debug('Done\n')
# Iterate over all pictures, and extract faces
start_time = time.time()
%list_container -i train -o obj_list
for obj in obj_list:
input_path=os.path.join('/train',obj)
output_path=os.path.join('/extracted',obj)
%copy \
--storlet extract_face.py \
--input path:$input_path \
--output path:$output_path \
-o result
extract_faces_time = time.time() - start_time
print('Done extracting faces in %d seconds' % round(extract_faces_time))
# Show before and after face extraction
%show_image --input path:/train/bibi21.jpeg
%show_image --input path:/extracted/bibi21.jpeg
# Train the model with all faces
start_time = time.time()
input_path = os.path.join('path:/extracted/', obj_list[0])
extra = ','.join([ '/extracted/%s' % obj_name for obj_name in obj_list[1:]])
output_path = 'path:/trained/model'
params = {'hidden_layer_sizes': '(100, 20, 8)',
'alpha': '0.00000004',
'tol': '1e-9'}
%copy \
--storlet train_model.py \
--input $input_path \
--output $output_path \
--extra $extra \
-i params \
-o result
train_model_time = time.time() - start_time
print('Model training done in %d seconds' % round(train_model_time))
%play_video -c test -v bibi_mov.avi
# Add face recognition tag
start_time = time.time()
input_path = 'path:/test/trump_mov.avi'
output_path = 'path:/video/tagged_trump_mov.avi'
extra_resources = '/trained/model'
%copy \
--storlet video_recognize_face.py \
--input $input_path \
--output $output_path \
--extra $extra_resources \
-o result
recognize_faces_time = time.time() - start_time
print('Recognize face done in %d seconds' % round(recognize_faces_time))
%play_video -c video -v tagged_trump_mov.avi
# Now let's see how this performs when working against S3
from e2emlstorlets.s3 import extract_faces, \
train_model, recognize_faces
start_time = time.time()
extract_faces.extract_and_upload_all()
extract_faces_s3_time = time.time() - start_time
print('Done extracting faces in %d seconds' % round(extract_faces_s3_time))
start_time = time.time()
train_model.train_and_upload_model()
train_model_s3_time = time.time() - start_time
print('Model training done in %d seconds' % round(train_model_s3_time))
start_time = time.time()
recognize_faces.get_tag_and_upload("bibi_mov.avi")
tag_faces_s3_time = time.time() - start_time
print('Recognizing faces done in %d seconds' % round(tag_faces_s3_time))
from e2emlstorlets.tools import plot_times
plot_times.show_plot(extract_faces_time,
train_model_time,
recognize_faces_time,
extract_faces_s3_time,
train_model_s3_time,
tag_faces_s3_time)
```
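The start_time / elapsed-time pattern above is repeated for every stage. A small context manager (a generic Python sketch, nothing storlet-specific) keeps that bookkeeping in one place:
```
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    # Print the wall-clock duration of the enclosed block.
    start = time.time()
    yield
    print('%s done in %d seconds' % (label, round(time.time() - start)))

# Hypothetical usage with one of the stages above:
# with timed('Extracting faces'):
#     extract_faces.extract_and_upload_all()
```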
```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from tqdm import tqdm
%matplotlib inline
from torch.utils.data import Dataset, DataLoader
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
from torch.nn import functional as F
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
```
# Generate dataset
```
np.random.seed(12)
y = np.random.randint(0,10,5000)
idx= []
for i in range(10):
print(i,sum(y==i))
idx.append(y==i)
x = np.zeros((5000,2))
np.random.seed(12)
x[idx[0],:] = np.random.multivariate_normal(mean = [4,6.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[0]))
x[idx[1],:] = np.random.multivariate_normal(mean = [5.5,6],cov=[[0.01,0],[0,0.01]],size=sum(idx[1]))
x[idx[2],:] = np.random.multivariate_normal(mean = [4.5,4.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[2]))
x[idx[3],:] = np.random.multivariate_normal(mean = [3,3.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[3]))
x[idx[4],:] = np.random.multivariate_normal(mean = [2.5,5.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[4]))
x[idx[5],:] = np.random.multivariate_normal(mean = [3.5,8],cov=[[0.01,0],[0,0.01]],size=sum(idx[5]))
x[idx[6],:] = np.random.multivariate_normal(mean = [5.5,8],cov=[[0.01,0],[0,0.01]],size=sum(idx[6]))
x[idx[7],:] = np.random.multivariate_normal(mean = [7,6.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[7]))
x[idx[8],:] = np.random.multivariate_normal(mean = [6.5,4.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[8]))
x[idx[9],:] = np.random.multivariate_normal(mean = [5,3],cov=[[0.01,0],[0,0.01]],size=sum(idx[9]))
x[idx[0]][0], x[idx[5]][5]
for i in range(10):
plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
bg_idx = [ np.where(idx[3] == True)[0],
np.where(idx[4] == True)[0],
np.where(idx[5] == True)[0],
np.where(idx[6] == True)[0],
np.where(idx[7] == True)[0],
np.where(idx[8] == True)[0],
np.where(idx[9] == True)[0]]
bg_idx = np.concatenate(bg_idx, axis = 0)
bg_idx.shape
np.unique(bg_idx).shape
x = x - np.mean(x[bg_idx], axis = 0, keepdims = True)
np.mean(x[bg_idx], axis = 0, keepdims = True), np.mean(x, axis = 0, keepdims = True)
x = x/np.std(x[bg_idx], axis = 0, keepdims = True)
np.std(x[bg_idx], axis = 0, keepdims = True), np.std(x, axis = 0, keepdims = True)
for i in range(10):
plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
foreground_classes = {'class_0','class_1', 'class_2'}
background_classes = {'class_3','class_4', 'class_5', 'class_6','class_7', 'class_8', 'class_9'}
fg_class = np.random.randint(0,3)
fg_idx = np.random.randint(0,9)
a = []
for i in range(9):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
print(a.shape)
print(fg_class , fg_idx)
a.shape
np.reshape(a,(18,1))
a=np.reshape(a,(3,6))
plt.imshow(a)
desired_num = 2000
mosaic_list_of_images =[]
mosaic_label = []
fore_idx=[]
for j in range(desired_num):
np.random.seed(j)
fg_class = np.random.randint(0,3)
fg_idx = 0
a = []
for i in range(9):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
# print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
# print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
mosaic_list_of_images.append(np.reshape(a,(18,1)))
mosaic_label.append(fg_class)
fore_idx.append(fg_idx)
mosaic_list_of_images = np.concatenate(mosaic_list_of_images,axis=1).T
mosaic_list_of_images.shape, mosaic_list_of_images[0]
for j in range(9):
print(mosaic_list_of_images[0][2*j:2*j+2])
def create_avg_image_from_mosaic_dataset(mosaic_dataset,labels,foreground_index,dataset_number):
"""
    mosaic_dataset : each data point is a mosaic of 9 two-dimensional points (18 values in total)
    labels : labels of the mosaic dataset
    foreground_index : list of positions at which the foreground point sits, used to take the weighted average
    dataset_number : controls how much weight the foreground point gets; for a value j, fg_weight = j/9 and each bg_weight = (9-j)/(8*9), so the 9 weights sum to 1
"""
avg_image_dataset = []
cnt = 0
counter = np.array([0,0,0,0,0,0,0,0,0])
for i in range(len(mosaic_dataset)):
img = torch.zeros([2], dtype=torch.float64)
np.random.seed(int(dataset_number*10000 + i))
give_pref = foreground_index[i] #np.random.randint(0,9)
# print("outside", give_pref,foreground_index[i])
for j in range(9):
if j == give_pref:
img = img + mosaic_dataset[i][2*j:2*j+2]*dataset_number/9 #2 is data dim
else :
img = img + mosaic_dataset[i][2*j:2*j+2]*(9-dataset_number)/(8*9)
if give_pref == foreground_index[i] :
# print("equal are", give_pref,foreground_index[i])
cnt += 1
counter[give_pref] += 1
else :
counter[give_pref] += 1
avg_image_dataset.append(img)
print("number of correct averaging happened for dataset "+str(dataset_number)+" is "+str(cnt))
print("the averaging are done as ", counter)
return avg_image_dataset , labels , foreground_index
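# In formula form, each averaged point produced below is
#     x_avg = (alpha / 9) * x_fg  +  sum over the 8 background points of ((9 - alpha) / (8 * 9)) * x_bg,
# with alpha = dataset_number, so the nine weights always sum to 1. The ten training datasets
# sweep alpha over 0, 0.2, ..., 1.8, while the held-out test set uses alpha = 9, i.e. the pure
# foreground point.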
avg_image_dataset_1 , labels_1, fg_index_1 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000], fore_idx[0:1000] , 0)
avg_image_dataset_2 , labels_2, fg_index_2 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000], fore_idx[0:1000] , 0.2)
avg_image_dataset_3 , labels_3, fg_index_3 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000], fore_idx[0:1000] , 0.4)
avg_image_dataset_4 , labels_4, fg_index_4 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000], fore_idx[0:1000] , 0.6)
avg_image_dataset_5 , labels_5, fg_index_5 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000], fore_idx[0:1000] , 0.8)
avg_image_dataset_6 , labels_6, fg_index_6 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000], fore_idx[0:1000] , 1)
avg_image_dataset_7 , labels_7, fg_index_7 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000], fore_idx[0:1000] , 1.2)
avg_image_dataset_8 , labels_8, fg_index_8 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000], fore_idx[0:1000] , 1.4)
avg_image_dataset_9 , labels_9, fg_index_9 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000], fore_idx[0:1000] , 1.6)
avg_image_dataset_10 , labels_10, fg_index_10 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000], fore_idx[0:1000] , 1.8)
test_dataset , labels , fg_index = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[1000:2000], mosaic_label[1000:2000], fore_idx[1000:2000] , 9)
def standardize(dataset):
    # Stack the list of averaged 2-d points and z-score each feature across the dataset.
    avg = torch.stack(dataset, axis = 0)
    avg = (avg - torch.mean(avg, keepdims= True, axis = 0)) / torch.std(avg, keepdims= True, axis = 0)
    print(torch.mean(avg, keepdims= True, axis = 0))
    print(torch.std(avg, keepdims= True, axis = 0))
    print("=="*40)
    return avg

avg_image_dataset_1 = standardize(avg_image_dataset_1)
avg_image_dataset_2 = standardize(avg_image_dataset_2)
avg_image_dataset_3 = standardize(avg_image_dataset_3)
avg_image_dataset_4 = standardize(avg_image_dataset_4)
avg_image_dataset_5 = standardize(avg_image_dataset_5)
avg_image_dataset_6 = standardize(avg_image_dataset_6)
avg_image_dataset_7 = standardize(avg_image_dataset_7)
avg_image_dataset_8 = standardize(avg_image_dataset_8)
avg_image_dataset_9 = standardize(avg_image_dataset_9)
avg_image_dataset_10 = standardize(avg_image_dataset_10)
test_dataset = standardize(test_dataset)
x1 = (avg_image_dataset_3).numpy()
y1 = np.array(labels_3)
plt.scatter(x1[y1==0,0], x1[y1==0,1], label='class 0')
plt.scatter(x1[y1==1,0], x1[y1==1,1], label='class 1')
plt.scatter(x1[y1==2,0], x1[y1==2,1], label='class 2')
plt.legend()
plt.title("type 2 alpha = 0.4/9")
x1 = (avg_image_dataset_6).numpy()
y1 = np.array(labels_6)
plt.scatter(x1[y1==0,0], x1[y1==0,1], label='class 0')
plt.scatter(x1[y1==1,0], x1[y1==1,1], label='class 1')
plt.scatter(x1[y1==2,0], x1[y1==2,1], label='class 2')
plt.legend()
plt.title("type 2 alpha = 1/9")
x1 = (avg_image_dataset_9).numpy()
y1 = np.array(labels_9)
plt.scatter(x1[y1==0,0], x1[y1==0,1], label='class 0')
plt.scatter(x1[y1==1,0], x1[y1==1,1], label='class 1')
plt.scatter(x1[y1==2,0], x1[y1==2,1], label='class 2')
plt.legend()
plt.title("type 2 alpha = 1.6/9")
x1 = (avg_image_dataset_10).numpy()
y1 = np.array(labels_10)
plt.scatter(x1[y1==0,0], x1[y1==0,1], label='class 0')
plt.scatter(x1[y1==1,0], x1[y1==1,1], label='class 1')
plt.scatter(x1[y1==2,0], x1[y1==2,1], label='class 2')
plt.legend()
plt.title("type 2 alpha = 1.8/9")
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list_of_images, mosaic_label):
"""
        Args:
            mosaic_list_of_images: averaged mosaic points, one entry per data point.
            mosaic_label: foreground class label for each data point.
"""
self.mosaic = mosaic_list_of_images
self.label = mosaic_label
#self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx] #, self.fore_idx[idx]
avg_image_dataset_1[0].shape
avg_image_dataset_1[0]
l = [ labels_1, labels_2, labels_3, labels_4, labels_5, labels_6, labels_7, labels_8, labels_9, labels_10]
for i in l:
    print(np.unique(i))
batch = 200
traindata_1 = MosaicDataset(avg_image_dataset_1, labels_1 )
trainloader_1 = DataLoader( traindata_1 , batch_size= batch ,shuffle=True)
traindata_2 = MosaicDataset(avg_image_dataset_2, labels_2 )
trainloader_2 = DataLoader( traindata_2 , batch_size= batch ,shuffle=True)
traindata_3 = MosaicDataset(avg_image_dataset_3, labels_3 )
trainloader_3 = DataLoader( traindata_3 , batch_size= batch ,shuffle=True)
traindata_4 = MosaicDataset(avg_image_dataset_4, labels_4 )
trainloader_4 = DataLoader( traindata_4 , batch_size= batch ,shuffle=True)
traindata_5 = MosaicDataset(avg_image_dataset_5, labels_5 )
trainloader_5 = DataLoader( traindata_5 , batch_size= batch ,shuffle=True)
traindata_6 = MosaicDataset(avg_image_dataset_6, labels_6 )
trainloader_6 = DataLoader( traindata_6 , batch_size= batch ,shuffle=True)
traindata_7 = MosaicDataset(avg_image_dataset_7, labels_7 )
trainloader_7 = DataLoader( traindata_7 , batch_size= batch ,shuffle=True)
traindata_8 = MosaicDataset(avg_image_dataset_8, labels_8 )
trainloader_8 = DataLoader( traindata_8 , batch_size= batch ,shuffle=True)
traindata_9 = MosaicDataset(avg_image_dataset_9, labels_9 )
trainloader_9 = DataLoader( traindata_9 , batch_size= batch ,shuffle=True)
traindata_10 = MosaicDataset(avg_image_dataset_10, labels_10 )
trainloader_10 = DataLoader( traindata_10 , batch_size= batch ,shuffle=True)
testdata_1 = MosaicDataset(avg_image_dataset_1, labels_1 )
testloader_1 = DataLoader( testdata_1 , batch_size= batch ,shuffle=False)
testdata_2 = MosaicDataset(avg_image_dataset_2, labels_2 )
testloader_2 = DataLoader( testdata_2 , batch_size= batch ,shuffle=False)
testdata_3 = MosaicDataset(avg_image_dataset_3, labels_3 )
testloader_3 = DataLoader( testdata_3 , batch_size= batch ,shuffle=False)
testdata_4 = MosaicDataset(avg_image_dataset_4, labels_4 )
testloader_4 = DataLoader( testdata_4 , batch_size= batch ,shuffle=False)
testdata_5 = MosaicDataset(avg_image_dataset_5, labels_5 )
testloader_5 = DataLoader( testdata_5 , batch_size= batch ,shuffle=False)
testdata_6 = MosaicDataset(avg_image_dataset_6, labels_6 )
testloader_6 = DataLoader( testdata_6 , batch_size= batch ,shuffle=False)
testdata_7 = MosaicDataset(avg_image_dataset_7, labels_7 )
testloader_7 = DataLoader( testdata_7 , batch_size= batch ,shuffle=False)
testdata_8 = MosaicDataset(avg_image_dataset_8, labels_8 )
testloader_8 = DataLoader( testdata_8 , batch_size= batch ,shuffle=False)
testdata_9 = MosaicDataset(avg_image_dataset_9, labels_9 )
testloader_9 = DataLoader( testdata_9 , batch_size= batch ,shuffle=False)
testdata_10 = MosaicDataset(avg_image_dataset_10, labels_10 )
testloader_10 = DataLoader( testdata_10 , batch_size= batch ,shuffle=False)
testdata_11 = MosaicDataset(test_dataset, labels )
testloader_11 = DataLoader( testdata_11 , batch_size= batch ,shuffle=False)
class Whatnet(nn.Module):
def __init__(self):
super(Whatnet,self).__init__()
self.linear1 = nn.Linear(2,3)
# self.linear2 = nn.Linear(50,10)
# self.linear3 = nn.Linear(10,3)
def forward(self,x):
# x = F.relu(self.linear1(x))
# x = F.relu(self.linear2(x))
x = (self.linear1(x))
return x
def calculate_loss(dataloader,model,criter):
    model.eval()
    r_loss = 0
    with torch.no_grad():
        for i, data in enumerate(dataloader, 0):
            inputs, labels = data
            inputs, labels = inputs.to("cuda"),labels.to("cuda")
            outputs = model(inputs)
            loss = criter(outputs, labels)
            r_loss += loss.item()
    return r_loss/len(dataloader)  # average loss per batch
def test_all(number, testloader,net):
correct = 0
total = 0
out = []
pred = []
with torch.no_grad():
for data in testloader:
images, labels = data
images, labels = images.to("cuda"),labels.to("cuda")
out.append(labels.cpu().numpy())
outputs= net(images)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
total += labels.size(0)
correct += (predicted == labels).sum().item()
pred = np.concatenate(pred, axis = 0)
out = np.concatenate(out, axis = 0)
print("unique out: ", np.unique(out), "unique pred: ", np.unique(pred) )
print("correct: ", correct, "total ", total)
    print('Accuracy of the network on test dataset %d (1000 points): %.2f %%' % (number , 100 * correct / total))
def train_all(trainloader, ds_number, testloader_list):
print("--"*40)
print("training on data set ", ds_number)
torch.manual_seed(12)
net = Whatnet().double()
net = net.to("cuda")
criterion_net = nn.CrossEntropyLoss()
optimizer_net = optim.Adam(net.parameters(), lr=0.001 ) #, momentum=0.9)
acti = []
loss_curi = []
epochs = 1000
running_loss = calculate_loss(trainloader,net,criterion_net)
loss_curi.append(running_loss)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
net.train()
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer_net.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion_net(outputs, labels)
# print statistics
running_loss += loss.item()
loss.backward()
optimizer_net.step()
running_loss = calculate_loss(trainloader,net,criterion_net)
if(epoch%500 == 0):
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
loss_curi.append(running_loss) #loss per epoch
if running_loss<=0.05:
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
break
print('Finished Training')
correct = 0
total = 0
with torch.no_grad():
for data in trainloader:
images, labels = data
images, labels = images.to("cuda"), labels.to("cuda")
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 1000 train images: %.2f %%' % ( 100 * correct / total))
for i, j in enumerate(testloader_list):
test_all(i+1, j,net)
print("--"*40)
return loss_curi
train_loss_all=[]
testloader_list= [ testloader_1, testloader_2, testloader_3, testloader_4, testloader_5, testloader_6,
testloader_7, testloader_8, testloader_9, testloader_10, testloader_11]
train_loss_all.append(train_all(trainloader_1, 1, testloader_list))
train_loss_all.append(train_all(trainloader_2, 2, testloader_list))
train_loss_all.append(train_all(trainloader_3, 3, testloader_list))
train_loss_all.append(train_all(trainloader_4, 4, testloader_list))
train_loss_all.append(train_all(trainloader_5, 5, testloader_list))
train_loss_all.append(train_all(trainloader_6, 6, testloader_list))
train_loss_all.append(train_all(trainloader_7, 7, testloader_list))
train_loss_all.append(train_all(trainloader_8, 8, testloader_list))
train_loss_all.append(train_all(trainloader_9, 9, testloader_list))
train_loss_all.append(train_all(trainloader_10, 10, testloader_list))
%matplotlib inline
for i,j in enumerate(train_loss_all):
plt.plot(j,label ="dataset "+str(i+1))
plt.xlabel("Epochs")
plt.ylabel("Training_loss")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
```
# Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
## Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
```
## Explore the Data
Play around with view_sentence_range to view different parts of the data.
```
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
index = 5
print(source_text.split('\n')[index])
print(target_text.split('\n')[index])
```
## Implement Preprocessing Function
### Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of `target_text`. This will help the neural network predict when the sentence should end.
You can get the `<EOS>` word id by doing:
```python
target_vocab_to_int['<EOS>']
```
You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`.
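As a concrete sanity check with a toy vocabulary (invented ids, not the real ones from the dataset), the expected behaviour looks like this:
```python
# Toy illustration only; the real ids come from source_vocab_to_int / target_vocab_to_int.
toy_target_vocab = {'je': 4, 'suis': 5, 'etudiant': 6, '<EOS>': 1}
sentence = 'je suis etudiant'
ids = [toy_target_vocab[w] for w in sentence.split()] + [toy_target_vocab['<EOS>']]
print(ids)  # [4, 5, 6, 1]
```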
```
EOS = '<EOS>'
GO = '<GO>'
UNK = '<UNK>'
def convert(text, vocab_to_int, append=True):
# converting the text to word ids
ids = [[vocab_to_int[word] for word in sentence.split()] for sentence in text.split('\n')]
# appending EOS when necessary
if append:
for sentence in ids:
sentence.append(vocab_to_int[EOS])
return ids
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
"""
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
"""
    source_id_text = convert(source_text, source_vocab_to_int, False)
    target_id_text = convert(target_text, target_vocab_to_int)
    return source_id_text, target_id_text
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
```
### Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
```
# Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
```
### Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
```
## Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- `model_inputs`
- `process_decoder_input`
- `encoding_layer`
- `decoding_layer_train`
- `decoding_layer_infer`
- `decoding_layer`
- `seq2seq_model`
### Input
Implement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
- Targets placeholder with rank 2.
- Learning rate placeholder with rank 0.
- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
- Target sequence length placeholder named "target_sequence_length" with rank 1
- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
- Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
```
def model_inputs():
"""
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
"""
X = tf.placeholder(tf.int32, [None, None], name='input')
y = tf.placeholder(tf.int32, [None, None], name='labels')
eta = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
y_seq_len = tf.placeholder(tf.int32, [None], name='target_sequence_length')
y_max_seq_len = tf.reduce_max(y_seq_len, name='max_target_len')
src_sequence_len = tf.placeholder(tf.int32, [None], name='source_sequence_length')
return X, y, eta, keep_prob, y_seq_len, y_max_seq_len, src_sequence_len
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
```
### Process Decoder Input
Implement `process_decoder_input` by removing the last word id from each batch in `target_data` and concatenating the GO ID to the beginning of each batch, as illustrated in the small sketch below.
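Here is what that transformation does to a toy batch (plain Python with made-up ids; the real implementation below uses `tf.strided_slice`, `tf.fill`, and `tf.concat` on tensors):
```python
# Toy illustration with invented ids: 1 stands for <GO>, 3 for <EOS>.
go_id = 1
batch = [[12, 7, 3], [5, 9, 3]]
decoder_input = [[go_id] + seq[:-1] for seq in batch]
print(decoder_input)  # [[1, 12, 7], [1, 5, 9]]
```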
```
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
"""
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
# removing the last word id
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
# creating a GO id array
filler = tf.fill([batch_size, 1], target_vocab_to_int[GO])
# adding GO id at beginning of each batch
decoder_input = tf.concat([filler, ending], axis=1)
return decoder_input
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_encoding_input(process_decoder_input)
```
### Encoding
Implement `encoding_layer()` to create a Encoder RNN layer:
* Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence)
* Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.md#stacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper)
* Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn)
```
from imp import reload
reload(tests)
def make_cell(rnn_size, keep_prob):
initializer = tf.random_uniform_initializer(-.1, .1)
cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=initializer)
cell = tf.contrib.rnn.DropoutWrapper(cell, keep_prob)
return cell
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
"""
# Encoder embedding
enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
# Building RNN cell
enc_cell = [make_cell(rnn_size, keep_prob) for _ in range(num_layers)]
enc_cell = tf.contrib.rnn.MultiRNNCell(enc_cell)
# Running dynamic RNN
output, state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, source_sequence_length, dtype=tf.float32)
return output, state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
```
### Decoding - Training
Create a training decoding layer:
* Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper)
* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)
* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode)
```
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
"""
# https://github.com/tensorflow/nmt#decoder
# helper
helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
# decoder
decoder = tf.contrib.seq2seq.BasicDecoder(
dec_cell, helper, encoder_state,
output_layer=output_layer)
# dynamic decoding
decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(decoder, impute_finished=True, maximum_iterations=max_summary_length)
return decoder_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
```
### Decoding - Inference
Create inference decoder:
* Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper)
* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)
* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode)
```
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
    :param decoding_scope: TensorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
"""
# https://github.com/tensorflow/nmt#inference--how-to-generate-translations
# helper
start_tokens = tf.fill([batch_size], start_of_sequence_id)
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
dec_embeddings, start_tokens, end_of_sequence_id)
# decoder
decoder = tf.contrib.seq2seq.BasicDecoder(
dec_cell, inference_helper, encoder_state, output_layer=output_layer)
# dynamic decoding
outputs, _ = tf.contrib.seq2seq.dynamic_decode(decoder, impute_finished=True, maximum_iterations=max_target_sequence_length)
return outputs
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
```
### Build the Decoding Layer
Implement `decoding_layer()` to create a Decoder RNN layer.
* Embed the target sequences
* Construct the decoder LSTM cell (just like you constructed the encoder cell above)
* Create an output layer to map the outputs of the decoder to the elements of our vocabulary
* Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits.
* Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits.
Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference.
```
def get_embed(input_data, vocab_size, embed_dim, name=None):
"""
Create embedding for input_data.
:param input_data: TF placeholder.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Tuple of (Embedded input, embedding variable)
"""
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embded = tf.nn.embedding_lookup(embedding, ids=input_data, name=name)
return embded, embedding
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
"""
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
# decoder embedding
dec_embed_input, dec_embeddings = get_embed(dec_input, target_vocab_size, decoding_embedding_size)
# decoder cell
dec_cell = [make_cell(rnn_size, keep_prob) for _ in range(num_layers)]
dec_cell = tf.contrib.rnn.MultiRNNCell(dec_cell)
# output layer to map the outputs of the decoder
output_layer = Dense(target_vocab_size,
kernel_initializer=tf.truncated_normal_initializer(mean=.0, stddev=.1))
# training decoder
with tf.variable_scope('decode'):
training_decoder_output = decoding_layer_train(
encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_target_sequence_length,
output_layer, keep_prob)
# inference decoder
start_of_sequence_id = target_vocab_to_int[GO]
end_of_sequence_id = target_vocab_to_int[EOS]
with tf.variable_scope('decode', reuse=True):
inference_decoder_output = decoding_layer_infer(
encoder_state, dec_cell, dec_embeddings,
start_of_sequence_id, end_of_sequence_id, max_target_sequence_length,
target_vocab_size, output_layer, batch_size, keep_prob)
return training_decoder_output, inference_decoder_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
```
### Build the Neural Network
Apply the functions you implemented above to:
- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`.
- Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function.
- Decode the encoded input using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function.
```
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
    # building the encoding layer
_, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size, enc_embedding_size)
# processing the target sequence
dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
# building the decoding layer
training_decoder_output, inference_decoder_output = \
decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length,
rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size,
keep_prob, dec_embedding_size)
return training_decoder_output, inference_decoder_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
```
## Neural Network Training
### Hyperparameters
Tune the following parameters:
- Set `epochs` to the number of epochs.
- Set `batch_size` to the batch size.
- Set `rnn_size` to the size of the RNNs.
- Set `num_layers` to the number of layers.
- Set `encoding_embedding_size` to the size of the embedding for the encoder.
- Set `decoding_embedding_size` to the size of the embedding for the decoder.
- Set `learning_rate` to the learning rate.
- Set `keep_probability` to the Dropout keep probability
- Set `display_step` to state how many steps between each debug output statement
```
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 1024
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 200
decoding_embedding_size = 200
# Learning Rate
learning_rate = 1e-3
# Dropout Keep Probability
keep_probability = .75
display_step = 10
```
### Build the Graph
Build the graph using the neural network you implemented.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
```
Batch and pad the source and target sequences
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def pad_sentence_batch(sentence_batch, pad_int):
"""Pad sentences with <PAD> so that each sentence of a batch has the same length"""
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
"""Batch targets, sources, and the lengths of their sentences together"""
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
```
### Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def get_accuracy(target, logits):
"""
Calculate accuracy
"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
```
### Save Parameters
Save the `batch_size` and `save_path` parameters for inference.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
```
# Checkpoint
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
```
## Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.
- Convert the sentence to lowercase
- Convert words into ids using `vocab_to_int`
- Convert words not in the vocabulary to the `<UNK>` word id.
```
def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
sentence = sentence.lower()
new_sentence = []
for word in sentence.split():
word_id = vocab_to_int[word] if word in vocab_to_int else vocab_to_int[UNK]
new_sentence.append(word_id)
return new_sentence
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
```
## Translate
This will translate `translate_sentence` from English to French.
```
translate_sentence = 'he saw a old yellow truck .'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
```
## Imperfect Translation
You might notice that some sentences translate better than others. Since the dataset you're using has a vocabulary of only 227 English words, out of the thousands used in everyday English, you're only going to see good results with those words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.
You can train on the [WMT10 French-English corpus](http://www.statmt.org/wmt10/training-giga-fren.tar). This dataset has a larger vocabulary and is richer in the topics discussed. However, it will take days to train, so make sure you have a GPU and that the neural network is performing well on the dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.
## Submitting This Project
When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_language_translation.ipynb" and save it as a HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.
|
github_jupyter
|
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
index = 5
print(source_text.split('\n')[index])
print(target_text.split('\n')[index])
target_vocab_to_int['<EOS>']
EOS = '<EOS>'
GO = '<GO>'
UNK = '<UNK>'
def convert(text, vocab_to_int, append=True):
# converting the text to word ids
ids = [[vocab_to_int[word] for word in sentence.split()] for sentence in text.split('\n')]
# appending EOS when necessary
if append:
for sentence in ids:
sentence.append(vocab_to_int[EOS])
return ids
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
"""
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
"""
source_vocab_to_int = convert(source_text, source_vocab_to_int, False)
target_vocab_to_int = convert(target_text, target_vocab_to_int)
return source_vocab_to_int, target_vocab_to_int
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
"""
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
"""
X = tf.placeholder(tf.int32, [None, None], name='input')
y = tf.placeholder(tf.int32, [None, None], name='labels')
eta = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
y_seq_len = tf.placeholder(tf.int32, [None], name='target_sequence_length')
y_max_seq_len = tf.reduce_max(y_seq_len, name='max_target_len')
src_sequence_len = tf.placeholder(tf.int32, [None], name='source_sequence_length')
return X, y, eta, keep_prob, y_seq_len, y_max_seq_len, src_sequence_len
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
"""
Preprocess target data for encoding
    :param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
# removing the last word id
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
# creating a GO id array
filler = tf.fill([batch_size, 1], target_vocab_to_int[GO])
# adding GO id at beginning of each batch
decoder_input = tf.concat([filler, ending], axis=1)
return decoder_input
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_encoding_input(process_decoder_input)
from imp import reload
reload(tests)
def make_cell(rnn_size, keep_prob):
initializer = tf.random_uniform_initializer(-.1, .1)
cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=initializer)
cell = tf.contrib.rnn.DropoutWrapper(cell, keep_prob)
return cell
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
"""
# Encoder embedding
enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
# Building RNN cell
enc_cell = [make_cell(rnn_size, keep_prob) for _ in range(num_layers)]
enc_cell = tf.contrib.rnn.MultiRNNCell(enc_cell)
# Running dynamic RNN
output, state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, source_sequence_length, dtype=tf.float32)
return output, state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
"""
# https://github.com/tensorflow/nmt#decoder
# helper
helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
# decoder
decoder = tf.contrib.seq2seq.BasicDecoder(
dec_cell, helper, encoder_state,
output_layer=output_layer)
# dynamic decoding
decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(decoder, impute_finished=True, maximum_iterations=max_summary_length)
return decoder_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
    :param decoding_scope: TensorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
"""
# https://github.com/tensorflow/nmt#inference--how-to-generate-translations
# helper
start_tokens = tf.fill([batch_size], start_of_sequence_id)
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
dec_embeddings, start_tokens, end_of_sequence_id)
# decoder
decoder = tf.contrib.seq2seq.BasicDecoder(
dec_cell, inference_helper, encoder_state, output_layer=output_layer)
# dynamic decoding
outputs, _ = tf.contrib.seq2seq.dynamic_decode(decoder, impute_finished=True, maximum_iterations=max_target_sequence_length)
return outputs
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
def get_embed(input_data, vocab_size, embed_dim, name=None):
"""
Create embedding for input_data.
:param input_data: TF placeholder.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Tuple of (Embedded input, embedding variable)
"""
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embded = tf.nn.embedding_lookup(embedding, ids=input_data, name=name)
return embded, embedding
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
"""
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
# decoder embedding
dec_embed_input, dec_embeddings = get_embed(dec_input, target_vocab_size, decoding_embedding_size)
# decoder cell
dec_cell = [make_cell(rnn_size, keep_prob) for _ in range(num_layers)]
dec_cell = tf.contrib.rnn.MultiRNNCell(dec_cell)
# output layer to map the outputs of the decoder
output_layer = Dense(target_vocab_size,
kernel_initializer=tf.truncated_normal_initializer(mean=.0, stddev=.1))
# training decoder
with tf.variable_scope('decode'):
training_decoder_output = decoding_layer_train(
encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_target_sequence_length,
output_layer, keep_prob)
# inference decoder
start_of_sequence_id = target_vocab_to_int[GO]
end_of_sequence_id = target_vocab_to_int[EOS]
with tf.variable_scope('decode', reuse=True):
inference_decoder_output = decoding_layer_infer(
encoder_state, dec_cell, dec_embeddings,
start_of_sequence_id, end_of_sequence_id, max_target_sequence_length,
target_vocab_size, output_layer, batch_size, keep_prob)
return training_decoder_output, inference_decoder_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
    # building the encoding layer
_, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size, enc_embedding_size)
# processing the target sequence
dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
# building the decoding layer
training_decoder_output, inference_decoder_output = \
decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length,
rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size,
keep_prob, dec_embedding_size)
return training_decoder_output, inference_decoder_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 1024
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 200
decoding_embedding_size = 200
# Learning Rate
learning_rate = 1e-3
# Dropout Keep Probability
keep_probability = .75
display_step = 10
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def pad_sentence_batch(sentence_batch, pad_int):
"""Pad sentences with <PAD> so that each sentence of a batch has the same length"""
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
"""Batch targets, sources, and the lengths of their sentences together"""
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def get_accuracy(target, logits):
"""
Calculate accuracy
"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
sentence = sentence.lower()
new_sentence = []
for word in sentence.split():
word_id = vocab_to_int[word] if word in vocab_to_int else vocab_to_int[UNK]
new_sentence.append(word_id)
return new_sentence
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
translate_sentence = 'he saw a old yellow truck .'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
| 0.668123 | 0.921322 |
```
# Biblio
# http://www.aprendemachinelearning.com/k-means-en-python-paso-a-paso/
# https://datatofish.com/k-means-clustering-python/
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sb
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min, silhouette_score
import os
%matplotlib inline
from mpl_toolkits.mplot3d import Axes3D
# Constants
FICHERO = 'Medias_Y_Coeficientes_Hogares.csv'
RUTA_ACTUAL = os.path.dirname(os.path.realpath('__file__'))
DIRECTORIO_GUARDADO_IMG = 'IMG_CLUSTER'
plt.rcParams['figure.figsize'] = (6, 4)
plt.style.use('ggplot')
def silhouette_valor(clusters, X):
kmeans_sil = KMeans(n_clusters=clusters).fit(X)
labels_sil = kmeans_sil.predict(X)
silhouette_media = silhouette_score(X, labels_sil)
print("Para ", clusters, " clusteres. La puntuación de silhouette es: ", silhouette_media.round(2))
def obtener_coef_proporcion(fichero):
    # Get the coefficient and the proportion
    dataframeData = pd.read_csv(fichero, delimiter = ',')
dataframeData.set_index('Hogar')
dataframeData = dataframeData.drop(columns=['stdRango 00-06', 'meanRango 00-06','stdRango 06-12','stdRango 06-12','meanRango 06-12','coefRango 06-12','stdRango 12-18','meanRango 12-18','coefRango 12-18','stdRango 18-00','meanRango 18-00','coefRango 18-00'])
df = pd.DataFrame()
df['Hogar']=dataframeData['Hogar']
df['coef_r0']=dataframeData['coefRango 00-06']
df['proporc_r0']=dataframeData['prop. 00-06 resto']
X = np.array(df[['proporc_r0','coef_r0']])
#X.shape
return X, df
def calcula_kmeans(X, num_cluster):
kmeans = KMeans(n_clusters=num_cluster).fit(X)
centroides = kmeans.cluster_centers_
etiquetas = kmeans.predict(X)
return kmeans, centroides, etiquetas
def calculo_silouette(X):
clusters_range = [2, 3, 4, 5, 6]
for clusters in clusters_range:
silhouette_valor(clusters, X)
def obtenerCategorias(df, num_centroides):
resultados = pd.DataFrame()
resultados['Hogar'] = df['Hogar']
resultados['Kmeans_Cat'] = etiquetas
resultados.to_csv('CategoriasKmeans_' + str(num_centroides) + '_Centroides.csv', sep=',', encoding='utf-8')
return resultados
def guardarGraficaKmeans(df, centroides, num_centroides):
plt.scatter(df['proporc_r0'],df['coef_r0'],c= kmeans.labels_.astype(float), s=50, alpha=0.5)
plt.scatter(centroides[:, 0], centroides[:, 1], c='red', s=50)
plt.title('(coeficiente variabilidad - Proporción consumo energía) \n Para rango horario 00-06h')
plt.xlabel('Proporción consumo', fontsize = 16)
plt.ylabel('Coeficiente Variabilidad', fontsize = 16)
fichero = os.path.join(RUTA_ACTUAL, DIRECTORIO_GUARDADO_IMG, 'Kmeans_rango_horario_' + str(num_centroides) +'_centroides.png')
plt.savefig(fichero)
X, df = obtener_coef_proporcion(FICHERO)
calculo_silouette(X)
kmeans, centroides, etiquetas = calcula_kmeans(X, 2)
guardarGraficaKmeans(df, centroides, 2)
obtenerCategorias(df, 2)
kmeans, centroides, etiquetas = calcula_kmeans(X, 4)
guardarGraficaKmeans(df, centroides, 4)
obtenerCategorias(df, 4)
```
|
github_jupyter
|
# Biblio
# http://www.aprendemachinelearning.com/k-means-en-python-paso-a-paso/
# https://datatofish.com/k-means-clustering-python/
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sb
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min, silhouette_score
import os
%matplotlib inline
from mpl_toolkits.mplot3d import Axes3D
# Constants
FICHERO = 'Medias_Y_Coeficientes_Hogares.csv'
RUTA_ACTUAL = os.path.dirname(os.path.realpath('__file__'))
DIRECTORIO_GUARDADO_IMG = 'IMG_CLUSTER'
plt.rcParams['figure.figsize'] = (6, 4)
plt.style.use('ggplot')
def silhouette_valor(clusters, X):
kmeans_sil = KMeans(n_clusters=clusters).fit(X)
labels_sil = kmeans_sil.predict(X)
silhouette_media = silhouette_score(X, labels_sil)
print("Para ", clusters, " clusteres. La puntuación de silhouette es: ", silhouette_media.round(2))
def obtener_coef_proporcion(fichero):
    # Get the coefficient and the proportion
    dataframeData = pd.read_csv(fichero, delimiter = ',')
dataframeData.set_index('Hogar')
dataframeData = dataframeData.drop(columns=['stdRango 00-06', 'meanRango 00-06','stdRango 06-12','stdRango 06-12','meanRango 06-12','coefRango 06-12','stdRango 12-18','meanRango 12-18','coefRango 12-18','stdRango 18-00','meanRango 18-00','coefRango 18-00'])
df = pd.DataFrame()
df['Hogar']=dataframeData['Hogar']
df['coef_r0']=dataframeData['coefRango 00-06']
df['proporc_r0']=dataframeData['prop. 00-06 resto']
X = np.array(df[['proporc_r0','coef_r0']])
#X.shape
return X, df
def calcula_kmeans(X, num_cluster):
kmeans = KMeans(n_clusters=num_cluster).fit(X)
centroides = kmeans.cluster_centers_
etiquetas = kmeans.predict(X)
return kmeans, centroides, etiquetas
def calculo_silouette(X):
clusters_range = [2, 3, 4, 5, 6]
for clusters in clusters_range:
silhouette_valor(clusters, X)
def obtenerCategorias(df, num_centroides):
resultados = pd.DataFrame()
resultados['Hogar'] = df['Hogar']
resultados['Kmeans_Cat'] = etiquetas
resultados.to_csv('CategoriasKmeans_' + str(num_centroides) + '_Centroides.csv', sep=',', encoding='utf-8')
return resultados
def guardarGraficaKmeans(df, centroides, num_centroides):
plt.scatter(df['proporc_r0'],df['coef_r0'],c= kmeans.labels_.astype(float), s=50, alpha=0.5)
plt.scatter(centroides[:, 0], centroides[:, 1], c='red', s=50)
plt.title('(coeficiente variabilidad - Proporción consumo energía) \n Para rango horario 00-06h')
plt.xlabel('Proporción consumo', fontsize = 16)
plt.ylabel('Coeficiente Variabilidad', fontsize = 16)
fichero = os.path.join(RUTA_ACTUAL, DIRECTORIO_GUARDADO_IMG, 'Kmeans_rango_horario_' + str(num_centroides) +'_centroides.png')
plt.savefig(fichero)
X, df = obtener_coef_proporcion(FICHERO)
calculo_silouette(X)
kmeans, centroides, etiquetas = calcula_kmeans(X, 2)
guardarGraficaKmeans(df, centroides, 2)
obtenerCategorias(df, 2)
kmeans, centroides, etiquetas = calcula_kmeans(X, 4)
guardarGraficaKmeans(df, centroides, 4)
obtenerCategorias(df, 4)
| 0.500977 | 0.492615 |
# Dropout
- nn.Dropout(p=0.5, inplace=False)

```
# We recommend changing the runtime type to GPU.
#!pip install torch torchvision
```
## 1. Settings
### 1) Import required libraries
```
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.init as init
import torchvision.datasets as dset
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
```
### 2) Set hyperparameters
```
batch_size = 256
learning_rate = 0.0002
num_epoch = 10
```
## 2. Data
### 1) Download Data
```
mnist_train = dset.MNIST("../data", train=True, transform=transforms.ToTensor(), target_transform=None, download=True)
mnist_test = dset.MNIST("../data", train=False, transform=transforms.ToTensor(), target_transform=None, download=True)
```
### 2) Check Dataset
```
print(mnist_train.__getitem__(0)[0].size(), mnist_train.__len__())
mnist_test.__getitem__(0)[0].size(), mnist_test.__len__()
```
### 3) Set DataLoader
```
train_loader = torch.utils.data.DataLoader(mnist_train,batch_size=batch_size, shuffle=True,num_workers=2,drop_last=True)
test_loader = torch.utils.data.DataLoader(mnist_test,batch_size=batch_size, shuffle=False,num_workers=2,drop_last=True)
```
## 3. Model & Optimizer
### 1) CNN Model
```
# Inserting dropout between the layers helps the model recover to some extent when it is overfitting.
# As you may have noticed with regularization, adding regularization or dropout when the model is not overfitting actually hurts training.
class CNN(nn.Module):
def __init__(self):
super(CNN,self).__init__()
self.layer = nn.Sequential(
nn.Conv2d(1,16,3,padding=1), # 28
nn.ReLU(),
#nn.Dropout2d(0.2),
nn.Conv2d(16,32,3,padding=1), # 28
nn.ReLU(),
#nn.Dropout2d(0.2),
nn.MaxPool2d(2,2), # 14
nn.Conv2d(32,64,3,padding=1), # 14
nn.ReLU(),
#nn.Dropout2d(0.2),
nn.MaxPool2d(2,2) # 7
)
self.fc_layer = nn.Sequential(
nn.Linear(64*7*7,100),
nn.ReLU(),
nn.Dropout(0.2),
nn.Linear(100,10)
)
def forward(self,x):
out = self.layer(x)
out = out.view(batch_size,-1)
out = self.fc_layer(out)
return out
```
### 2) Loss func & Optimizer
```
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
model = CNN().to(device)
loss_func = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
```
## 4. Train
```
for i in range(num_epoch):
for j,[image,label] in enumerate(train_loader):
x = image.to(device)
y_= label.to(device)
optimizer.zero_grad()
output = model.forward(x)
loss = loss_func(output,y_)
loss.backward()
optimizer.step()
if i % 10 == 0:
print(loss)
#param_list = list(model.parameters())
#print(param_list)
```
## 5. Test
```
correct = 0
total = 0
# Batch normalization and dropout behave differently during training and testing, so the model must be switched to evaluation mode before testing.
model.eval()
with torch.no_grad():
for image,label in test_loader:
x = image.to(device)
y_= label.to(device)
output = model.forward(x)
_,output_index = torch.max(output,1)
total += label.size(0)
correct += (output_index == y_).sum().float()
print("Accuracy of Test Data: {}".format(100*correct/total))
```
|
github_jupyter
|
# We recommend changing the runtime type to GPU.
#!pip install torch torchvision
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.init as init
import torchvision.datasets as dset
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
batch_size = 256
learning_rate = 0.0002
num_epoch = 10
mnist_train = dset.MNIST("../data", train=True, transform=transforms.ToTensor(), target_transform=None, download=True)
mnist_test = dset.MNIST("../data", train=False, transform=transforms.ToTensor(), target_transform=None, download=True)
print(mnist_train.__getitem__(0)[0].size(), mnist_train.__len__())
mnist_test.__getitem__(0)[0].size(), mnist_test.__len__()
train_loader = torch.utils.data.DataLoader(mnist_train,batch_size=batch_size, shuffle=True,num_workers=2,drop_last=True)
test_loader = torch.utils.data.DataLoader(mnist_test,batch_size=batch_size, shuffle=False,num_workers=2,drop_last=True)
# Inserting dropout between the layers helps the model recover to some extent when it is overfitting.
# As you may have noticed with regularization, adding regularization or dropout when the model is not overfitting actually hurts training.
class CNN(nn.Module):
def __init__(self):
super(CNN,self).__init__()
self.layer = nn.Sequential(
nn.Conv2d(1,16,3,padding=1), # 28
nn.ReLU(),
#nn.Dropout2d(0.2),
nn.Conv2d(16,32,3,padding=1), # 28
nn.ReLU(),
#nn.Dropout2d(0.2),
nn.MaxPool2d(2,2), # 14
nn.Conv2d(32,64,3,padding=1), # 14
nn.ReLU(),
#nn.Dropout2d(0.2),
nn.MaxPool2d(2,2) # 7
)
self.fc_layer = nn.Sequential(
nn.Linear(64*7*7,100),
nn.ReLU(),
nn.Dropout(0.2),
nn.Linear(100,10)
)
def forward(self,x):
out = self.layer(x)
out = out.view(batch_size,-1)
out = self.fc_layer(out)
return out
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
model = CNN().to(device)
loss_func = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
for i in range(num_epoch):
for j,[image,label] in enumerate(train_loader):
x = image.to(device)
y_= label.to(device)
optimizer.zero_grad()
output = model.forward(x)
loss = loss_func(output,y_)
loss.backward()
optimizer.step()
if i % 10 == 0:
print(loss)
#param_list = list(model.parameters())
#print(param_list)
correct = 0
total = 0
# Batch normalization and dropout behave differently during training and testing, so the model must be switched to evaluation mode before testing.
model.eval()
with torch.no_grad():
for image,label in test_loader:
x = image.to(device)
y_= label.to(device)
output = model.forward(x)
_,output_index = torch.max(output,1)
total += label.size(0)
correct += (output_index == y_).sum().float()
print("Accuracy of Test Data: {}".format(100*correct/total))
| 0.770335 | 0.92976 |

# Programming Language With Numerical Methods
Joanna Kozuchowska, Msc
## Class 07. Plotting in Python
1. [Matplotlib](#matplotlib)
1. [Creating simple plots](#plots)
2. [Adding elements to a plot](#plot_description)
3. [Saving the figure](#save)
3. [Multiple plots on one figure](#multiple)
2. [Exercises](#exercises)
**Whenever you learn a new feature, you should try it out in interactive mode and make errors on purpose to see what goes wrong and what types of errors you run into.**
## Matplotlib<a name="matplotlib"></a>

Source: https://matplotlib.org/tutorials/introductory/usage.html#sphx-glr-tutorials-introductory-usage-py
In Jupyter Notebook we can modify the output of plots: it is possible to embed the plot in the notebook instead of creating and additional window. Use a magic function before plotting to embed the plots.
```
%matplotlib inline
```
Jupyter Notebook does not require using the `plt.show()` method to show a plot. Remember to include this call in a Python script run from the console (or an IDE, e.g. Spyder or PyCharm).
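For reference, a minimal stand-alone script sketch (hypothetical file name, run outside Jupyter) where `plt.show()` is required to open the plot window:
```
# minimal_plot.py -- sketch of plotting from a plain Python script (hypothetical file name)
import matplotlib.pyplot as plt

plt.plot([1, 2, 3], [1, 4, 9])
plt.show()  # without this call no window appears when the script is run from a console
```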
Other options for showing the plot:
- `%matplotlib` - default view in an additional interactive window
- `%matplotlib qt` - interactive window (qt, wx, gtk, tk)
- `%matplotlib inline` - plots embedded in the notebook
- `%matplotlib notebook` - plots embedded in the notebook + interactive functions
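A small added sketch (not from the original notes) for checking which backend is currently active; in a plain script a backend can be selected with `matplotlib.use()` instead of the magics above:
```
import matplotlib

# Print the name of the backend matplotlib is currently using.
print(matplotlib.get_backend())

# In a script (where IPython magics are unavailable) a backend could be chosen explicitly, e.g.:
# matplotlib.use("TkAgg")
```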
```
import matplotlib.pyplot as plt
```
Tutorial:
- https://matplotlib.org/tutorials/introductory/pyplot.html
- https://matplotlib.org/tutorials/introductory/usage.html#sphx-glr-tutorials-introductory-usage-py
Differences between the Pyplot API and Object-Oriented API
- https://matplotlib.org/tutorials/introductory/lifecycle.html#the-lifecycle-of-a-plot
### Creating simple plots<a name="plots"></a>
#### Line plot
`fig` represents the whole figure (a collection of plots), while `ax` is a single plot (Axes).
```
plt.plot([2, 3, 5, 6])
plt.show()
x_values = [0, 1, 2, 3, 4, 5]
squares = [0, 1, 4, 9, 16, 25]
fig, ax = plt.subplots()
ax.plot(x_values, squares)
```
#### Scatter plot
`scatter` takes the x and y values of the datapoints and uses them as coordinates to place the points in the figure. The `s` keyword argument defines the size of each point.
```
x_values = list(range(100))
squares = [x**2 for x in x_values]
fig, ax = plt.subplots()
ax.scatter(x_values, squares, s=30, c="green", marker="^")
plt.show()
```
#### Plotting two datasets
The example below presents how to plot two datasets on the same Axes and how to fill the space between the two plots with color.
```
x_values = list(range(10))
squares = [x**2 for x in x_values]
cubes = [x**3 for x in x_values]
fig, ax = plt.subplots()
ax.scatter(x_values, squares, c='green', s=10)
ax.scatter(x_values, cubes, c='red', s=10)
ax.fill_between(x_values, cubes, squares, color="green", alpha=0.25)
plt.show()
#stateful
x_values = list(range(10))
squares = [x**2 for x in x_values]
cubes = [x**3 for x in x_values]
plt.plot(x_values, squares)
plt.plot(x_values, cubes)
plt.show()
#stateful
x_values = list(range(10))
squares = [x**2 for x in x_values]
cubes = [x**3 for x in x_values]
plt.plot(x_values, squares, 'r--', x_values, cubes, 'g-.')
plt.show()
```
#### Bar graph
```
# stateful approach
labels = ["A", "B", "C", "D", "E"]
values = [23, 42, 13, 55, 38]
plt.bar(labels, values, color="red")
plt.title("Bar graph")
plt.xlabel("Description")
plt.ylabel("Number of values")
```
Use `barh` for horizontal bars. We can also pass an `xerr` or `yerr` argument to depict variance in the data.
```
labels = ["A", "B", "C", "D", "E"]
values = [23, 42, 13, 55, 38]
variance = [2, 3, 0.5, 3, 2.5]
plt.barh(labels, values, xerr=variance, color="red")
plt.title("Bar graph")
plt.xlabel("Description")
plt.ylabel("Number of values")
```
#### Pie chart
```
description = ["A", "B", "C", "D", "E"]
values = [23, 25, 5, 32, 13]
plt.pie(values, labels=description)
plt.axis('equal')
plt.legend(title="Example legend")
plt.show()
```
#### Histogram
```
import numpy as np
x = np.random.randn(1000)
plt.hist(x, 100)
plt.title("Histogram")
plt.ylabel("Frequency")
plt.xlabel("Random data")
plt.show()
```
#### 3D scatter plot
```
from mpl_toolkits import mplot3d
x = np.random.randn(100)
y = np.random.randn(100)
ax = plt.axes(projection="3d")
ax.scatter3D(x,y)
ax.set_xlabel("x")
ax.set_ylabel("y")
plt.show()
```
### Adding elements to a plot<a name="plot_description"></a>
```
import numpy as np
x = np.linspace(-2*np.pi, 2*np.pi, 300)
cosx = np.cos(x)
sinx = np.sin(x)
fig, ax = plt.subplots()
ax.plot(x, cosx, linewidth=2.5, linestyle="-", color="red", label="Cosine")
ax.plot(x, sinx, linewidth=1, linestyle="-.", color="green", label="Sine")
ax.set_xlim(-2*np.pi, 2*np.pi)
ax.set_ylim(-1.1, 1.1)
#ax.set_xticks([-2*np.pi, -np.pi, 0, np.pi, 2*np.pi],[r'$-2\pi$', r'$-\pi$', '0', r'$\pi$', r'$2\pi$'])
ax.set_xticks([-2*np.pi, -np.pi, 0, np.pi, 2*np.pi])
ax.set_yticks(np.linspace(-1, 1, 5))
ax.legend(loc="upper right")
plt.show()
x_values = list(range(1000))
squares = [x**2 for x in x_values]
fig, ax = plt.subplots()
ax.scatter(x_values, squares, s=10)
# chart title and label axes.
ax.set_title('Square Numbers', fontsize=24)
ax.set_xlabel('Value', fontsize=14)
ax.set_ylabel('Square of Value', fontsize=14)
# Set limits of axes, and size of tick labels.
ax.axis([0, 1100, 0, 1_100_000])
ax.tick_params(axis='both', labelsize=14)
```
#### Colormaps
A colormap can vary the point colors (or line colors) used in the plot, from one shade to another, based on a value of each point.
Many colormaps are available to choose from. To use the reversed version of a colormap, append `_r` to its name, for example, `Blues_r` (a comparison sketch follows the code below).
```
fig, ax = plt.subplots()
ax.scatter(x_values, squares, c=squares, cmap=plt.cm.Accent, s=10)
plt.show()
```
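To illustrate the reversed-colormap naming mentioned above, the sketch below colors the same points (the `x_values` and `squares` lists defined earlier) with `Blues` and its reversed variant `Blues_r`:
```
# Compare a colormap with its reversed variant
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.scatter(x_values, squares, c=squares, cmap=plt.cm.Blues, s=10)
ax1.set_title('Blues')
ax2.scatter(x_values, squares, c=squares, cmap=plt.cm.Blues_r, s=10)
ax2.set_title('Blues_r')
plt.show()
```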

Source: http://scipy-lectures.org/intro/matplotlib/index.html
### Saving the figure<a name="save"></a>
To save a created plot, use `plt.savefig` (or `fig.savefig` on a Figure object). A filename is required; the file format can be given explicitly or inferred from the extension.
```
fig.savefig('file_name.png', format='png', bbox_inches='tight', pad_inches=0.01, dpi=72)
# pyplot approach
plt.savefig('file_name.png', format='png', bbox_inches='tight', pad_inches=0.01, dpi=72)
```
### Multiple plots in one figure<a name="multiple"></a>
The `plt.subplots()` function returns a figure object and an array of Axes. Each Axes corresponds to a separate plot and can be modified independently. The first two arguments of `plt.subplots` define the number of rows and the number of columns in the figure.
<div>
<img src="attachment:subplots.png" width="200"/> <img src="attachment:sphx_glr_plot_subplot-vertical_001.png" width="200"/> <img src="attachment:sphx_glr_plot_subplot-grid_001.png" width="200"/> <img src="attachment:sphx_glr_plot_gridspec_001.png" width="200"/>
Source: http://scipy-lectures.org/intro/matplotlib/index.html#figures-subplots-axes-and-ticks
</div>
```
#stateful approach
x = list(range(11))
squares = [x**2 for x in x]
cubes = [x**3 for x in x]
plt.subplot(211)
plt.plot(x, squares, 'r--o')
plt.title('squares')
plt.subplot(2,1,2)
plt.plot(x, cubes, 'bo')
plt.title('cubes')
plt.suptitle('two subplots')
# object oriented
x_values = list(range(11))
squares = [x**2 for x in x_values]
cubes = [x**3 for x in x_values]
fig, axs = plt.subplots(2, 1)
axs[0].scatter(x_values, squares)
axs[0].set_title('Squares')
axs[1].scatter(x_values, cubes, c='red')
axs[1].set_title('Cubes')
fig.suptitle("Two plots in one figure")
plt.show()
```
#### Sharing an axis
Two plots can share one of the axes.
```
x_values = list(range(11))
squares = [x**2 for x in x_values]
cubes = [x**3 for x in x_values]
fig, axs = plt.subplots(2, 1, sharex=True)
axs[0].scatter(x_values, squares)
axs[0].set_title('Squares')
axs[1].scatter(x_values, cubes, c='red')
axs[1].set_title('Cubes')
plt.show()
x_values = list(range(11))
squares = [x**2 for x in x_values]
cubes = [x**3 for x in x_values]
fig, axs = plt.subplots(1, 2, sharey=True)
axs[0].scatter(x_values, squares)
axs[0].set_title('Squares')
axs[1].scatter(x_values, cubes, c='red')
axs[1].set_title('Cubes')
plt.show()
#adding subplots to an existing figure
x_values = list(range(11))
squares = [x**2 for x in x_values]
cubes = [x**3 for x in x_values]
fig = plt.figure()
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
ax1.plot(x_values, squares)
ax2.plot(x_values, cubes)
```
## Exercises<a name="exercises"></a>
**Exercise 1**
Define two lists, `y` and `x` of length 10, containing numeric data.
1. Use the defined lists to plot `y(x)`. Adjacent points should be connected with a line. Add a title to the plot and label the axes.
2. Change the figure size to width=13 and height=8.
3. Change the markers for data points to green triangles.
4. Use the defined lists to plot only datapoints, not connected with a line, use `scatter`. Datapoints should be marked with black circles.
**Exercise 2**
1. Plot two datasets on one plot, use only one `plot` function call. The first dataset is `y(x)` from exercise 1, the second is a function `f(x)`:
$$f(x) = x^2 - 5$$
$f(x)$ should be computed for 9 equally spaced points in the range $<2, 8>$.
2. Plot the same data as in point 1, add legend. This time, use two `plot` function calls. You can modify the color and width of the lines if you want.
**Exercise 3**
Create a figure with two subfigures. In the first subfigure, plot the information from the previous task; in the second subfigure, plot a quadratic function ($x^2$) in the range $x \in <-5, 5>$. Add a title to both subfigures and a title to the whole set (`suptitle`).
1. Put subfigures next to each other (two columns); use `add_subplot`;
2. Put subfigures one below the other (two rows); use `add_subplot`;
3. Do the task from points 1 and 2, using `plt.subplots()` or `plt.subplot()`
**Exercise 4**
1. Create a bar plot showing nutrition information for a chosen product. For example, 1 lemon covers 90% of the vitamin C daily value and 12% of the dietary fiber daily value. Change the color of the bars. Include a data description.
2. Create the same plot, but put bars horizontally.
Nutrition information can be found, for example in WolframAlpha (1 apple): https://www.wolframalpha.com/input/?i=1+apple
**Exercise 5**
Create a histogram from randomly generated data (at least 1000 data points). Save a figure to a file. Try using different file extensions. Add a keyword argument `bins`, divide data into 10, 20 and 50 bins.
**Exercise 6**
The sine function can be approximated by a Taylor series according to the formula:
$$
\sin x \approx S(x,n)= \sum\limits_{j=0}^{n} (-1)^j \frac{x^{2j+1}}{(2j+1)!}.
$$
The error in the approximation $S(x, n)$ decreases as $n$ increases. Visualize the quality of $S(x, n)$ as $n$ increases, together with the sine function, on $\left[0,\ 4\pi \right]$. Fill the space between the two plots with a colour.
STA 663 Final Project
<br>
Author: Yiling Liu, Lanqiu Yao
This master's project has been exhausting; writing even one more word after this feels like torture TUT
<p style="text-align: center;">
<span style="color: ; font-family: Babas; font-size: 2em;">Hierarchical Topic models and </span>
</p>
<p style="text-align: center;">
<span style="color: ; font-family: Babas; font-size: 2em;">the Nested Chinese Restaurant Process</span>
</p>
## 1. Introduction
## 2. Chinese Restaurant Process
The Chinese Restaurant Process (CRP) is a distribution on partitions of integers. Imagine there are M customers in a Chinese restaurant with infinitely many tables. The first customer sits at the first table. Each following customer has two kinds of choices:
+ Sit at a table where someone else is already seated
+ Sit at a new table
These two choices have probabilities that depend on the previous customers at the tables.
<br>
Specifically, for the $m$th customer, the probability of sitting at a table is:
+ p(occupied table $i$ | previous customers) = $\frac{m_i}{\gamma+m-1}$
+ p(next unoccupied table | previous customers) = $\frac{\gamma}{\gamma+m-1}$,
where $m_i$ represents the number of previous customers at table $i$ and $\gamma$ is a parameter. For example, with $\gamma=1$ and three customers already seated as $m_1=2,\ m_2=1$, the fourth customer joins table 1 with probability $2/4$, table 2 with probability $1/4$, and a new table with probability $1/4$.
If we have M customers, the CRP gives us a partition of the M customers, which has the same structure as a Dirichlet process.

### 2.1 Nested Chinese Restaurant Process
The CRP establishes a one-to-one relationship between tables and mixture components. A hierarchical version of the CRP was also developed for models in which each data point is associated with multiple mixture components lying along a path in a hierarchy.
The nested CRP (nCRP) is very similar to the CRP except for its hierarchical structure.
We can see an example in the following plot.

### 2.2 A very simple version of CRP in Python
The function for the Chinese Restaurant Process (CRP):
**Input**
+ N is the number of customers
+ alpha is the $\alpha$ parameter
**Output**
+ The number of customers at each table
+ The probability of sitting at each table
```
def CRP(alpha,N):
"""
Description
---------
Funcion: Chinese Restaurant Process
Parameter
---------
alpha: concentration parameter
N: the number of customers
Return
------
tables: number of customers at each table
p: the probability for customers to sit in each table
"""
import numpy as np
# initial
# N: total number of people
# alpha: the alpha parameter
tables = np.zeros(N) # table's number of customer
    tables[0] = 1 # the first customer sits at the first table
if N==1:
tables=np.array(1)
p=[1]
if N>1:
for i in range(2,N+1):
p_old=tables/(alpha+i-1) # the probability of sitting in a table with other people
p_old=p_old[p_old>0]
p_new=alpha/(alpha+i-1) # the probability of sitting in a new table
n_temp=len(p_old)+1
p=list(p_old)+[p_new]
num=np.random.choice(n_temp,p=p) # generate the table number based on the probabilities
tables[num]=tables[num]+1
tables=tables[tables>0]
return(tables,p)
def CRP_next(alpha,topic):
"""
Description
---------
Funcion: Chinese Restaurant Process
Parameter
---------
alpha: concentration parameter
topic: the exist tables
Return
------
p: the probability for a new customers to sit in each of the tables
"""
import numpy as np
N=len(topic) # number of tables
word_list=[] # total customers
for t in topic:
word_list=word_list+t
m=len(word_list) # customers' number
tables = np.array([len(x) for x in topic]) # tables with their customers
p_old=tables/(alpha+m) # the probability of sitting in a table with other people
p_new=alpha/(alpha+m) # the probability of sitting in a new table
p=list(p_old)+[p_new] # the last probability is the probability to sit in a new table
return(p)
```
**Example **
```
CRP(1,100)
topic=[['a', 'ggtfdg', 'dsgfgfd', 'ds', 'ds', 'yhhr'], ['123', '66']]
CRP_next(1,topic)
```
## 3. A hierarchical topic model
<span style="color:red"> need more descriptions.</span>
### 3.1 A topic model
Generation of a document:
1. Choose a $K$-vector $\theta$ of topic proportions from a distribution $p(\theta|\alpha)$
2. Repeatedly sample words from the mixture distribution $p(\omega|\theta)$ for the chosen value of $\theta$
When $p(\theta|\alpha)$ is chosen to be a Dirichlet distribution, this process is known as the latent Dirichlet allocation (LDA) model. A minimal sampling sketch is given below.
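The sketch below samples one toy document from this generative process; the vocabulary size, number of topics, and parameter values are arbitrary choices made only for illustration.
```
import numpy as np

# toy settings, assumed only for this illustration
K = 3                                            # number of topics
V = 8                                            # vocabulary size
alpha = np.ones(K)                               # Dirichlet prior on topic proportions
beta = np.random.dirichlet(np.ones(V), size=K)   # one word distribution per topic
n_words = 10

theta = np.random.dirichlet(alpha)               # step 1: draw topic proportions
doc = []
for _ in range(n_words):                         # step 2: draw each word from the mixture
    z = np.random.choice(K, p=theta)             # choose a topic
    w = np.random.choice(V, p=beta[z])           # choose a word from that topic
    doc.append(w)
print(doc)                                       # word indices of the generated toy document
```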
### 3.2 A hierarchical topic model
Back to the hierarchical topic model, which is very similar to the previous one but adds a hierarchical structure. For a hierarchical topic model with L levels, we can imagine an L-level tree in which each node represents a topic.
Generation of a document:
1. Choose a path from the root to a leaf
2. Choose the topic proportions $\theta$ from an L-dimensional Dirichlet
3. Generate the words in the document from a mixture of the topics along the path from the root to the leaf, with mixing proportions $\theta$
This generation of a document is very similar to the previous one, except that the topics mixed with proportions $\theta$ come from a hierarchical structure.

The graph represents the hierarchical LDA (hLDA) model. The hLDA uses the nCRP as its prior.
The notation in the graph is as follows (listed to show our understanding of the model):
+ $\omega$: a word (the observed variable)
+ $z$: a multinomial variable (the topic assignment of a word)
+ $\beta$: a parameter of the topic-specific word distribution
+ $\theta$: a $K$-dimensional vector of topic proportions
+ document-specific mixture distribution: $p(\omega|\theta)=\sum_{i=1}^{K} \theta_i \, p(\omega \mid z=i, \beta_i)$
+ $p(\theta|\alpha)$: a Dirichlet distribution
+ $\alpha$: a corpus-level parameter
## 4. Approximate inference by Gibbs sampling
### 4.1 Introduction to Gibbs sampling
<span style="color:red"> add some introduction about gibbs sampling?.</span>
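Briefly, Gibbs sampling is a Markov chain Monte Carlo method that draws samples from a joint distribution by repeatedly sampling each variable from its conditional distribution given the current values of all the other variables. For hLDA this means alternating between the word-level topic assignments and the per-document paths, as described in the next subsection. The block below is only a minimal, generic sketch on a standard bivariate normal with correlation $\rho$; it is not part of the hLDA model.
```
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=5000, seed=0):
    """Generic Gibbs sampler for a standard bivariate normal with correlation rho."""
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0
    samples = np.zeros((n_iter, 2))
    sd = np.sqrt(1 - rho**2)           # conditional standard deviation
    for i in range(n_iter):
        x = rng.normal(rho * y, sd)    # x | y ~ N(rho*y, 1 - rho^2)
        y = rng.normal(rho * x, sd)    # y | x ~ N(rho*x, 1 - rho^2)
        samples[i] = (x, y)
    return samples

samples = gibbs_bivariate_normal(0.8)
print(np.corrcoef(samples[1000:].T))   # empirical correlation approaches 0.8
```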
### 4.2 Gibbs sampling for the hLDA model
**The variables that need to be sampled are:**
1. $w_{m,n}$: the $n$th word in the $m$th document (Important note: these are the only observed variables in the model)
2. $c_{m,l}$: the restaurant (node), the $l$th topic in the $m$th document
3. $z_{m,n}$: the assignment of the $n$th word in the $m$th document to one of the $L$ topics
4. There are also other variables in the model, but they do not need to be sampled
After listing the variables in the model, we also need to specify the order and the methods of sampling. The sampling proceeds in two steps:
1. sample the $z_{m,n}$ variables using LDA + CRP
2. sample the $c_{m,l}$ based on the first step (given the LDA hidden variables).
* To be more specific:
### 4.2.1 Sample $z_{m,n}$
The $z_{m,n}$ is sampled under the LDA model based on the method in the paper:
<p style="text-align: center;">
*A probabilistic approach to semantic representation*
</p>
The distribution over topics for each word is computed in the function below:
```
def Z(corpus, T, alpha, beta):
"""
Description
---------
Funcion: sample zmn under LDA model
Parameter
---------
corpus: the total corpus, a list of documents, that is, a list of lists
T: the number of topics
alpha, beta: parameters
Return
------
topic: the word list in each topic
topic_num: the length of each topic
"""
W=np.sum([len(word) for word in corpus]) # the number of the total words
N=len(corpus) # the number of documents
topic=[[] for t in range(T)]
topic_num=np.zeros(T)
for i,di in enumerate(corpus):
for wi in di:
p=np.zeros(T)
for j in range(T):
nij_wi=topic[j].count(wi) # number of wi tht assigned to topic j
nij=len(topic[j]) # total number of words assigned to topic j
nij_di=np.sum(np.isin(topic[j],di)) # number of words from di in topic j
ni_di=len(di) # total number of words in di
part1=(nij_wi+beta)/(nij+W*beta)
part2=(nij_di+alpha)/(ni_di+T*alpha)
p[j]=part1 * part2
pp=p/np.sum(p)
w_assign=np.random.multinomial(1, pp, size=1)
i_topic=int(np.where(w_assign[0]==1)[0])
topic[i_topic].append(wi)
topic_num=topic_num+w_assign
return(topic,topic_num)
```
**Example**
```
corpus=[['a'], ['123', 'ggtfdg'], ['dsgfgfd', 'ds'], ['ds', '66', 'yhhr']]
T=2
alpha=1
beta=1
Z(corpus, T, alpha, beta)
```
### 4.2.2 sample $c_m$ from the nCRP
$$p(c_m | w, c_{-m}, z) \propto p(w_m | c, w_{-m}, z) p(c_m | c_{-m})$$
The calculation of the $p(w_m | c, w_{-m},z)$ value is based on the likelihood function:
$$p(w_m | c, w_{-m},z) = \prod_{l=1}^{L} \left(\frac{\Gamma (n_{c_{m,l},-m}^{(\cdot)}+W\eta)}{\prod_{\omega} \Gamma (n_{c_{m,l},-m}^{(\omega)}+\eta)}\frac{\prod_{\omega} \Gamma(n_{c_{m,l},-m}^{(\omega)}+n_{c_{m,l},m}^{(\omega)}+\eta)}{\Gamma(n_{c_{m,l},-m}^{(\cdot)}+ n_{c_{m,l},m}^{(\cdot)} + W\eta)}\right)$$
where $n_{c_{m,l},-m}^{(\omega)}$ is the number of instances of word $\omega$ assigned to the topic at level $l$ of path $c_m$, excluding those in document $m$; $W$ is the vocabulary size; and the superscript $(\cdot)$ denotes summation over all words.
```
def word_likelihood(corpus,topic,eta):
"""
Description
---------
Funcion: calculation of p(w|c,w,z), based on the likelihood function
Parameter
---------
corpus: the total corpus, a list of documents, that is, a list of lists
topic: the topics of the corpus
eta: parameter
Return
------
a matrix of probabilities:
the number of rows = the number of documents,
the number of columns = the number of topics,
the cell: the probability of each document to be assigned in each topic
"""
import math
res=np.zeros((len(corpus),len(topic))) # generate the results matrix
word_list=[] # generate the word list that contains all the words
for i in range(len(corpus)):
word_list=word_list+corpus[i]
W=len(word_list) # the length of word list
for i,di in enumerate(corpus):
p_w=1
for j in range(len(topic)): #calculate the tow parts of the equation
nc_dot=len(topic[j])
part1_denominator=1
part2_nominator=1
part1_nominator = math.gamma(nc_dot-np.sum(np.isin(topic[j],di))+W*eta)
part2_denominator = math.gamma(nc_dot+W*eta)
for word in word_list:
ncm_w=topic[j].count(word)-di.count(word)
if ncm_w <0:
ncm_w=0
nc_w=topic[j].count(word)
part1_denominator=part1_denominator*(ncm_w+eta)
part2_nominator=part2_nominator*(nc_w+eta)
p_w=p_w*part1_nominator*part2_nominator/(part1_denominator*part2_denominator)
res[i,j]=p_w
res=res/np.sum(res,axis=1).reshape(-1,1)
return(res)
```
**Example**
```
corpus=[['a'], ['123', 'ggtfdg'], ['dsgfgfd', 'ds'], ['ds', '66', 'yhhr']]
T=2
alpha=1
beta=1
eta=1
topic=Z(corpus, T, alpha, beta)[0]
word_likelihood(corpus,topic,eta)
```
### 4.2.3 sample the $p(c_m|c_{-m})$
```
def CRP_prior(corpus,topic,alpha):
res=np.zeros((len(corpus),len(topic)))
for i,corpus_i in enumerate(corpus):
topic_new=[]
for t in topic:
topic_new.append([k for k in t if k not in corpus_i])
p=CRP_next(alpha,topic_new)
res[i,:]=p[1:]
return(res)
```
**Example**
```
corpus=[['a'], ['123', 'ggtfdg'], ['dsgfgfd', 'ds'], ['ds', '66', 'yhhr']]
T=2
alpha=1
beta=1
eta=1
topic=Z(corpus, T, alpha, beta)[0]
CRP_prior(corpus,topic,alpha)
```
### 4.2.4 A function to combine the previous functions
```
def gibbs_position(corpus,T,alpha,beta,eta,iters=100):
word_list=[]
for i in corpus:
word_list=word_list+i
W=len(word_list)
gibbs=np.zeros((W,iters))
for j in range(iters):
topic=Z(corpus, T, alpha, beta)[0]
w_m=word_likelihood(corpus,topic,eta)
c_=CRP_prior(corpus,topic,alpha)
c_m = (w_m * c_) / (w_m * c_).sum(axis = 1)[:, np.newaxis]
g=[]
for i,corpus_i in enumerate(corpus):
for word in corpus_i:
g.append(int(np.where(np.random.multinomial(1, c_m[i])!=0)[0]))
gibbs[:,j]=g
word_topic=[]
for i in range(W):
counts=[]
for t in range(T):
counts.append(list(gibbs[i]).count(t))
word_topic.append(np.where(counts==np.max(counts))[0][0])
return(word_topic)
def gibbs_list(corpus,T,alpha,beta,eta,iters):
word_list=[]
for i in corpus:
word_list=word_list+i
    position=gibbs_position(corpus,T,alpha,beta,eta,iters)
n_topic=len(np.unique(position))
word_list_topic=[[] for x in range(n_topic)]
for n_t in range(n_topic):
word_list_topic[n_t].append(list(np.array(word_list)[np.array(position)==np.array(n_t)]))
return(position,word_list_topic)
```
**Example **
```
corpus=[['a'], ['123', 'ggtfdg'], ['dsgfgfd', 'ds'], ['ds', '66', 'yhhr']]
T=2
alpha=1
beta=1
eta=0.1
iters=100
gibbs_list(corpus,T,alpha,beta,eta,iters)[1]
```
### Wrap up to our *hLDA* function
```
def node_sampling(corpus, alpha):
topic = []
for corpus_i in corpus:
for word in corpus_i:
c_m = CRP_next(alpha,topic)
theta = np.random.multinomial(1, (np.array(c_m)/sum(c_m))).argmax()
if theta == len(c_m)-1:
topic.append([word])
else:
topic[theta].append(word)
return topic
```
**Example**
```
phi=2
topic = node_sampling(corpus, phi)
topic
def hLDA(corpus, alpha, beta, eta, iters, level):
topic = node_sampling(corpus, phi)
hLDA_tree = [[] for _ in range(level)]
tmp_tree = []
node = [[] for _ in range(level+1)]
node[0].append(1)
for i in range(level):
if i == 0:
wn_topic = gibbs_list(corpus, len(topic), alpha, beta, eta, iters)[1]
node_topic = [x for word in wn_topic for x in word]
hLDA_tree[0].append(node_topic)
tmp_tree.append(wn_topic[1:])
tmp_tree = tmp_tree[0]
node[1].append(len(wn_topic[1:]))
else:
for j in range(sum(node[i])):
if tmp_tree == []:
break
wn_topic = gibbs_list(corpus, len(topic), alpha, beta, eta, iters)[1]
node_topic = [x for word in wn_topic for x in word]
hLDA_tree[i].append(node_topic)
tmp_tree.remove(tmp_tree[0])
if wn_topic[1:] != []:
tmp_tree.extend(wn_topic[1:])
node[i+1].append(len(wn_topic[1:]))
return hLDA_tree, node[:level]
wn_topic=gibbs_list(corpus, len(topic), alpha, beta, eta, iters)[1]
alpha=0.1
beta=0.1
eta=0.1
hLDA(corpus, alpha, beta, eta, 100, 2)
```
<span style="color:red"> The output is too ugly; it needs to be improved.</span>
```
trees=hLDA(corpus, alpha, beta, eta, 100, 2)
trees
import numpy as np
from scipy.special import gammaln
import random
from collections import Counter
import string
import graphviz
import pygraphviz
! pip install pydot
import pydot
most_common = lambda x: Counter(x).most_common(1)[0][0]
hLDA(corpus, alpha, beta, eta, 100, 2)
HLDA_plot(trees, Len = 8, save = False)
HLDA_plot(hLDA(corpus, alpha, beta, eta, 100, 2), Len = 8, save = False)
def HLDA_plot(hLDA_object, Len = 8, save = False):
from IPython.display import Image, display
def viewPydot(pdot):
plt = Image(pdot.create_png())
display(plt)
words = hLDA_object[0]
struc = hLDA_object[1]
graph = pydot.Dot(graph_type='graph')
end_index = [np.insert(np.cumsum(i),0,0) for i in struc]
for level in range(len(struc)-1):
leaf_level = level + 1
leaf_word = words[leaf_level]
leaf_struc = struc[leaf_level]
word = words[level]
end_leaf_index = end_index[leaf_level]
for len_root in range(len(word)):
root_word = '\n'.join([x[0] for x in Counter(word[len_root]).most_common(Len)])
leaf_index = leaf_struc[len_root]
start = end_leaf_index[len_root]
end = end_leaf_index[len_root+1]
lf = leaf_word[start:end]
for l in lf:
leaf_w = '\n'.join([x[0] for x in Counter(list(l)).most_common(Len)])
edge = pydot.Edge(root_word, leaf_w)
graph.add_edge(edge)
if save == True:
graph.write_png('graph.png')
viewPydot(graph)
```
## 5. Example
## 6. Optimization
To make things faster, I think the easiest ways to optimize are:
1. Use of vectorization
2. JIT or AOT compilation of critical functions
A minimal example of the second option is sketched below.
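As an illustration of option 2 only (numba is one possible choice and is not used elsewhere in this notebook), a plain Python counting loop can be JIT-compiled like this:
```
import numpy as np
from numba import njit

@njit
def count_assignments(assignments, n_topics):
    # count how many words are assigned to each topic;
    # the loop is compiled to machine code on the first call
    counts = np.zeros(n_topics)
    for a in assignments:
        counts[a] += 1
    return counts

assignments = np.random.randint(0, 3, size=10_000)
print(count_assignments(assignments, 3))
```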
## 7. Install our package
The CRP is amenable to mixture modeling because we can establish a one-to-one rela- tionship between tables and mixture components and a one-to-many relationship between mixture components and data. In the models that we will consider, however, each data point is associated with multiple mixture components which lie along a path in a hierarchy. We develop a hierarchical version of the CRP to use in specifying a prior for such models.
A nested Chinese restaurant process can be defined by imagining the following scenario. Suppose that there are an infinite number of infinite-table Chinese restaurants in a city. One restaurant is determined to be the root restaurant and on each of its infinite tables is a card with the name of another restaurant. On each of the tables in those restaurants are cards that refer to other restaurants, and this structure repeats infinitely. Each restaurant is referred to exactly once; thus, the restaurants in the city are organized into an infinitely-branched tree. Note that each restaurant is associated with a level in this tree (e.g., the root restaurant is at level 1 and the restaurants it refers to are at level 2).
A tourist arrives in the city for a culinary vacation. On the first evening, he enters the root Chinese restaurant and selects a table using Eq. (1). On the second evening, he goes to the restaurant identified on the first night’s table and chooses another table, again from Eq. (1). He repeats this process for L days. At the end of the trip, the tourist has sat at L restaurants which constitute a path from the root to a restaurant at the Lth level in the infinite tree described above. After M tourists take L-day vacations, the collection of paths describe a particular L-level subtree of the infinite tree (see Figure 1a for an example of such a tree).
This prior can be used to model topic hierarchies. Just as a standard CRP can be used to express uncertainty about a possible number of components, the nested CRP can be used to express uncertainty about possible L-level trees.
# 5. Comparing histograms in the same figure
This exercise uses the data file *Dimuon_DoubleMu.csv* [1], in which protons accelerated by the LHC collide inside the CMS detector and each collision produces a muon and an antimuon. Your task is to split the data based on the muon energy and to compare the histograms of muons with different energies.
> 1. Import the modules pandas, numpy and matplotlib.pyplot
<br>
> 1. Read the data file and save the invariant mass. File path: *'[https://raw.githubusercontent.com/cms-opendata-education/cms-jupyter-materials-finnish/master/Data/Dimuon_DoubleMu.csv](https://raw.githubusercontent.com/cms-opendata-education/cms-jupyter-materials-finnish/master/Data/Dimuon_DoubleMu.csv)'*
<br>
$\color{purple}{\text{Write your code below.}}$
```
# Import the required modules
# Read the data file
# Save the invariant mass
```
> 1. Plot a histogram of the invariant mass
<br>
> 1. Plot the histogram again, but zoom in on one of the peaks. Zooming is done by adding the range argument to the code. For example, *plt.hist(invariantti_massa, bins=100, range=(50,100))* would plot a histogram with 100 bars in which the invariant mass is between 50 GeV and 100 GeV.
<br>
$\color{purple}{\text{Write your code below.}}$
```
# Plot the histogram and zoom in on one of the peaks
plt.hist(invariantti_massa, bins=500)
plt.title('Invariant mass of two muons')
plt.xlabel('Invariant mass (GeV)')
plt.ylabel('Number of events')
plt.show()
```
The data can be manipulated with mathematical operators, such as addition and subtraction. The data can be split by creating a new variable that stores only the values satisfying a given condition. In the code cell below, the variable KorkeaE receives the events in the dataset in which the combined energy of the particles is above 150 GeV. Correspondingly, the variable MatalaE receives the events in which the combined energy of the particles is below 150 GeV.
```
KorkeaE = datasetti[datasetti.E1 + datasetti.E2 > 150]
MatalaE = datasetti[datasetti.E1 + datasetti.E2 < 150]
```
The new datasets can be plotted separately, each in its own figure, or they can be plotted in the same figure. Two histograms can be drawn on top of each other by adjusting their transparency with the *alpha* argument. Also try plotting the histogram again using a different threshold value for high and low energy.
```
plt.xlabel('Invariant mass [GeV]')
plt.ylabel('Number of events')
plt.title('Invariant mass of two muons, comparison based on energy \n')
# The alpha argument sets the transparency: 0 is fully transparent, 1 fully opaque.
# The label argument lets us indicate in the figure which color corresponds to which histogram.
plt.hist(MatalaE.M, bins=50, range=(80,100), alpha=0.5, label='Low E')
plt.hist(KorkeaE.M, bins=50, range=(80,100), alpha=0.5, label='High E')
# The legend call is needed so that the labels defined above appear in the figure.
plt.legend()
plt.show()
```
> 1. Plot two histograms in the same figure: in one the combined pseudorapidity (eta) of the particles is large and in the other it is small (e.g. above 2.2 and below 1.5). Try to choose the limits so that roughly the same number of events falls into both classes. Note: take the absolute value of the pseudorapidity values with the __abs()__ function before adding them.
> 1. Restrict the invariant mass to the range 80 GeV - 100 GeV
<br>
$\color{purple}{\text{Write your code below.}}$
```
# Define high and low pseudorapidity
# Plot the histograms of high and low pseudorapidity in the same figure
```
> Consider what you see in the pseudorapidity plot.
<br>
> Pseudorapidity will be revisited in exercise 7.
### Sources
[1] The file used: Datasets derived from the Run2011A DoubleMu
Url: http://opendata.cern.ch/record/545
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.imports import *
from fastai.structured import *
from pandas_summary import DataFrameSummary
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from IPython.display import display
from sklearn import metrics
PATH = os.getcwd();
PATH = PATH+"\\AV_Lord"
df_raw = pd.read_feather(f'{PATH}\combined.raw')
df_raw.shape
df_raw.head(1)
def display_all(df):
with pd.option_context("display.max_rows", 1000):
with pd.option_context("display.max_columns", 1000):
display(df)
def disply_dtype_plot(df = None):
if df is None:
return
l = []
cols = df.columns
for i in cols:
if df[i].dtype == 'int64':
l.append('integer dtype')
elif df[i].dtype == 'object':
l.append('object dtype')
elif df[i].dtype == 'float64':
l.append('float dtype')
else:
pass
sns.countplot(l)
del l
disply_dtype_plot(df_raw)
df_raw.head(0)
df_raw.info()
df_raw = df_raw * 1
train_cats(df_raw)
os.makedirs('tmp', exist_ok=True)
df_raw.to_feather('tmp/av_lord-raw')
df_raw = pd.read_feather('tmp/av_lord-raw')
df, y, nas, mapper = proc_df(df_raw, 'is_click', do_scale=True,max_n_cat=30)
sns.countplot(y)
y[-26:] = 0
#df.drop('is_open', axis=1, inplace=True)
m = RandomForestRegressor(n_jobs=-1)
m.fit(df, y)
m.score(df,y)
def print_score(m):
res = [rmse(m.predict(X_train), y_train), rmse(m.predict(X_valid), y_valid),
m.score(X_train, y_train), m.score(X_valid, y_valid)]
if hasattr(m, 'oob_score_'): res.append(m.oob_score_)
print(res)
m = RandomForestRegressor(n_jobs=-1)
%time m.fit(X_train, y_train)
print_score(m)
display_all(test.isnull().sum().sort_index()/len(df_raw))
display_all(df.columns)
```
## Test set transforms
```
test = pd.read_csv(f'{PATH}\\test_BDIfz5B.csv')
test['y'] = y[:773858]
test.head(2)
add_datepart(test,'send_date')
test.drop('send_Elapsed',axis=1,inplace=True)
test.head(2)
np.unique(camp['campaign_id'])
test = test.merge(camp,on='campaign_id');
test['link_diff'] = test['total_links'] - test['no_of_internal_links']
test['av_links'] = (test['no_of_internal_links']/ test['total_links']) * 100
test['img_per_section'] = test['no_of_images']/ test['no_of_sections']
test['link_diff_%'] = (test['total_links'] - test['no_of_internal_links'])/test['total_links'] * 100
test.head(1)
test.to_feather('tmp/av_lord_test')
train_cats(test)
nas
test.columns
mapper
test.columns
test, _, _ = proc_df(test,max_n_cat=30,mapper=mapper,na_dict=nas)
test.columns
df.drop(list(set(df.columns) - set(test.columns)), axis=1,inplace=True)
len(test.columns)
len(df.columns)
print(df['img_per_sec'].value_counts())
sns.countplot(df['img_per_sec'],orient='h');
print(df['is_open'].value_counts())
sns.countplot(df['is_open']);
sns.countplot(df['no_of_images']);
sns.countplot(df['no_of_sections']);
sns.countplot(df['link_diff']);
train_cats(df)
apply_cats(test, df)
df.drop(['id', 'user_id'], axis=1, inplace=True);
test.drop(['id', 'user_id'], axis=1, inplace=True);
df.head(1)
test.head(1)
df.info()
categorical_features_indices = np.where(df.dtypes == 'category')[0]
categorical_features_indices
df[:].fillna(method='ffill', inplace=True)
def rmse(x,y): return math.sqrt(((x-y)**2).mean())
def print_score(m):
res = [rmse(m.predict(X_train), y_train), rmse(m.predict(X_valid), y_valid),
m.score(X_train, y_train), m.score(X_valid, y_valid)]
if hasattr(m, 'oob_score_'): res.append(m.oob_score_)
print(res)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(C=1, random_state=17, solver='lbfgs',class_weight='balanced',n_jobs=-1,max_iter=1000)
df.drop(['email_body','subject','email_url'], axis =1, inplace=True)
test.drop(['email_body','subject','email_url'], axis =1, inplace=True)
train_cats(df)
apply_cats(test,df)
categorical_features_indices = np.where(df.dtypes == 'category')[0]
categorical_features_indices
#importing library and building model
from catboost import CatBoostRegressor
#model=CatBoostClassifier(iterations=1000, depth=10,learning_rate=0.01, loss_function='CrossEntropy',\
#)
#model.fit(X_train, y_train,cat_features=categorical_features_indices,eval_set=(X_validation, y_validation))
df, _, nan, mapper = proc_df(df,do_scale=True,max_n_cat=30)
from sklearn.model_selection import train_test_split
X_train, X_validation, y_train, y_validation = train_test_split(df, y_target, train_size=0.8, random_state=1234)
lr.fit(X_train,y_train)
df.isnull().head()
m
preds = m.predict(test)
preds
sample_sub['is_click'] = preds
sample_sub
def make_submission(probs):
sample = pd.read_csv(f'{PATH}//sample_submission.csv')
submit = sample.copy()
submit['is_click'] = probs
return submit
submit = make_submission(preds)
submit.head(2)
submit.to_csv(PATH + '//av_cat_2.csv', index=False)
```
# `keras-unet-collection` user guide
* **This user guide is no longer updated, visit https://github.com/yingkaisha/keras-unet-collection/tree/main/examples for the lastest version.**
```
import tensorflow as tf
from tensorflow import keras
print('TensorFlow {}; Keras {}'.format(tf.__version__, keras.__version__))
```
### Step 1: importing `models` from `keras_unet_collection`
```
from keras_unet_collection import models
```
### Step 2: defining your hyper-parameters
Commonly used hyper-parameter options are listed as follows. Full details are available through the Python `help` function (see the example after this list):
* `input_size`: a tuple or list that defines the shape of input tensors. `models.resunet_a_2d` supports int only; the others also support NoneType.
* `filter_num`: a list that defines the number of convolutional filters per down- and up-sampling blocks.
* For `unet_2d`, `att_unet_2d`, `unet_plus_2d`, `r2_unet_2d`, depth $\ge$ 2 is expected.
* For `resunet_a_2d` and `u2net_2d`, depth $\ge$ 3 is expected.
* `n_labels`: number of output targets, e.g., `n_labels=2` for binary classification.
* `activation`: the activation function of hidden layers. Available choices are "ReLU", "LeakyReLU", "PReLU", "ELU", "GELU", "Snake".
* `output_activation`: the activation function of the output layer. Recommended choices are "Sigmoid", "Softmax", None (linear), "Snake".
* `batch_norm`: if specified as True, all convolutional layers will be configured as stacked "Conv2D-BN-Activation" blocks.
* `stack_num_down`: number of stacked convolutional layers per downsampling level.
* `stack_num_up`: number of stacked convolutional layers (after concatenation) per upsampling level.
* `pool`: if specified as False, the downsampling (encoding) blocks will be configured with strided convolutional layers (2-by-2 linear kernels with stride 2 and an activation function). Otherwise (pool=True), max-pooling is used.
* `unpool`: if specified as False, the upsampling (decoding) blocks will be configured with transpose convolutional layers (2-by-2 transpose kernels with 2 strides and activation function). Otherwise (unpool=True), reflective padding is used.
* `name`: user-specified prefix of the configured layer and model. Use `keras.models.Model.summary` to identify the exact name of each layer.
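For example, the full docstring of any model, including arguments not listed above, can be printed with:
```
help(models.unet_2d)  # prints the full argument list and descriptions for the U-net configuration function
```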
### Step 3: Configuring your model (examples are provided)
**Example 1**: U-net for binary classification with:
1. Five down- and upsampling levels (or four downsampling levels and one bottom level).
2. Two convolutional layers per downsampling level.
3. One convolutional layer (after concatenation) per upsamling level.
2. Gaussian Error Linear Unit (GELU) activation, Softmax output activation, batch normalization.
3. Downsampling through Maxpooling.
4. Upsampling through reflective padding.
```
unet = models.unet_2d((None, None, 3), [64, 128, 256, 512, 1024], n_labels=2,
stack_num_down=2, stack_num_up=1,
activation='GELU', output_activation='Softmax',
batch_norm=True, pool=True, unpool=True, name='unet')
```
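The returned object is a standard `keras.models.Model`, so it can be compiled and inspected as usual. The loss and optimizer below are illustrative choices only, not recommendations from the package:
```
unet.compile(loss='categorical_crossentropy', optimizer=keras.optimizers.Adam(learning_rate=1e-4))
unet.summary()  # layer names start with the user-specified prefix, here 'unet'
```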
**Example 2**: attention-Unet for single target regression with:
1. Four down- and upsampling levels.
2. Two convolutional layers per downsampling level.
3. Two convolutional layers (after concatenation) per upsampling level.
2. ReLU activation, Linear output activation (None), batch normalization.
3. Additive attention, ReLU attention activation.
4. Downsampling through stride convolutional layers.
5. Upsampling through transpose convolutional layers.
```
att_unet = models.att_unet_2d((None, None, 3), [64, 128, 256, 512], n_labels=1,
stack_num_down=2, stack_num_up=2,
activation='ReLU', atten_activation='ReLU', attention='add', output_activation=None,
batch_norm=True, pool=False, unpool=False, name='att-unet')
```
**Example 3**: U-net++ for three-label classification with:
1. Four down- and upsampling levels.
2. Two convolutional layers per downsampling level.
3. Two convolutional layers (after concatenation) per upsampling level.
2. LeakyReLU activation, Softmax output activation, no batch normalization.
3. Downsampling through Maxpooling.
4. Upsampling through transpose convolutional layers.
5. Deep supervision.
```
xnet = models.unet_plus_2d((None, None, 3), [64, 128, 256, 512], n_labels=3,
stack_num_down=2, stack_num_up=2,
activation='LeakyReLU', output_activation='Softmax',
batch_norm=False, pool=True, unpool=False, deep_supervision=True, name='xnet')
```
**Example 4**: UNet 3+ for binary classification with:
1. Four down- and upsampling levels.
2. Two convolutional layers per downsampling level.
3. One convolutional layer (after concatenation) per upsampling level.
2. ReLU activation, Sigmoid output activation, batch normalization.
3. Downsampling through Maxpooling.
4. Upsampling through transpose convolutional layers.
5. Deep supervision.
```
unet3plus = models.unet_3plus_2d((None, None, 3), n_labels=2, filter_num_down=[64, 128, 256, 512],
filter_num_skip='auto', filter_num_aggregate='auto',
stack_num_down=2, stack_num_up=1, activation='ReLU', output_activation='Sigmoid',
batch_norm=False, pool=True, unpool=False, deep_supervision=True, name='unet3plus')
```
* `filter_num_skip` and `filter_num_aggregate` can be specified explicitly:
```
unet3plus = models.unet_3plus_2d((512, 512, 3), n_labels=2, filter_num_down=[64, 128, 256, 512],
filter_num_skip=[64, 64, 64], filter_num_aggregate=256,
stack_num_down=2, stack_num_up=1, activation='ReLU', output_activation='Sigmoid',
batch_norm=False, pool=True, unpool=False, deep_supervision=True, name='unet3plus')
```
**Example 5**: R2U-net for binary classification with:
1. Four down- and upsampling levels.
2. Two recurrent convolutional layers with two iterations per down- and upsampling level.
2. ReLU activation, Softmax output activation, no batch normalization.
3. Downsampling through Maxpooling.
4. Upsampling through reflective padding.
```
r2_unet = models.r2_unet_2d((None, None, 3), [64, 128, 256, 512], n_labels=2,
stack_num_down=2, stack_num_up=1, recur_num=2,
activation='ReLU', output_activation='Softmax',
batch_norm=True, pool=True, unpool=True, name='r2-unet')
```
**Example 6**: ResUnet-a for 16-label classification with:
1. Input size of (128, 128, 3).
2. Six downsampling levels followed by an Atrous Spatial Pyramid Pooling (ASPP) layer with 256 filters.
3. Six upsampling levels followed by an ASPP layer with 128 filters.
4. Dilation rates of {1, 3, 15, 31} for shallow layers, {1, 3, 15} for intermediate layers, and {1} for deep layers.
5. ReLU activation, Sigmoid output activation, batch normalization.
6. Upsampling through reflective padding.
* (Downsampling is fixed to strided convolutional layers)
```
resunet_a = models.resunet_a_2d((128, 128, 3), [32, 64, 128, 256, 512, 1024],
dilation_num=[1, 3, 15, 31],
n_labels=16, aspp_num_down=256, aspp_num_up=128,
activation='ReLU', output_activation='Sigmoid',
batch_norm=True, unpool=True, name='resunet')
```
* `dilation_num` can be specified per down- and upsampling level:
```
resunet_a = models.resunet_a_2d((128, 128, 3), [32, 64, 128, 256, 512, 1024],
dilation_num=[[1, 3, 15, 31], [1, 3, 15, 31], [1, 3, 15], [1, 3, 15], [1,], [1,],],
n_labels=16, aspp_num_down=256, aspp_num_up=128,
activation='ReLU', output_activation='Sigmoid',
batch_norm=True, unpool=True, name='resunet')
```
**Example 7**: U^2-Net for binary classification with:
1. Six downsampling levels with the first four layers built with RSU, and the last two (one downsampling layer, one bottom layer) built with RSU-F4.
* `filter_num_down=[64, 128, 256, 512]`
* `filter_mid_num_down=[32, 32, 64, 128]`
* `filter_4f_num=[512, 512]`
* `filter_4f_mid_num=[256, 256]`
2. Six upsampling levels with the deepest layer built with RSU-F4, and the other four layers built with RSU.
* `filter_num_up=[64, 64, 128, 256]`
* `filter_mid_num_up=[16, 32, 64, 128]`
3. ReLU activation, Sigmoid output activation, batch normalization.
4. Deep supervision
5. Downsampling through stride convolutional layers.
6. Upsampling through transpose convolutional layers.
* In the original work of U^2-Net, down- and upsampling were achieved through maxpooling (`pool=True`) and bilinear interpolation (`unpool=True`).
```
u2net = models.u2net_2d((None, None, 3), n_labels=2,
filter_num_down=[64, 128, 256, 512], filter_num_up=[64, 64, 128, 256],
filter_mid_num_down=[32, 32, 64, 128], filter_mid_num_up=[16, 32, 64, 128],
filter_4f_num=[512, 512], filter_4f_mid_num=[256, 256],
activation='ReLU', output_activation='Sigmoid',
batch_norm=True, pool=False, unpool=False, deep_supervision=True, name='u2net')
```
* `u2net_2d` supports automated determination of filter numbers per down- and upsampling level. Auto-mode may produce a slightly larger network.
```
u2net = models.u2net_2d((None, None, 3), n_labels=2,
filter_num_down=[64, 128, 256, 512],
activation='ReLU', output_activation='Sigmoid',
batch_norm=True, deep_supervision=True, name='u2net')
```
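All of these constructors return standard `tf.keras` models, so the usual Keras workflow applies afterwards. A minimal sketch is shown below; the optimizer, loss, and the commented-out training call are illustrative assumptions rather than settings prescribed by `keras_unet_collection`, and it assumes `keras` was imported from `tensorflow` in the setup cell:
```
# compile and inspect the plain U-net built in Example 1 above
unet.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
             loss='categorical_crossentropy', metrics=['accuracy'])
unet.summary()
# training would then follow the usual Keras pattern, e.g.
# unet.fit(train_images, train_onehot_masks, batch_size=4, epochs=10)
```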
```
import numpy as np
import tensorflow as tf
from sklearn.utils import shuffle
import re
import time
import collections
import os
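# Vocabulary builder: reserve the GO/PAD/EOS/UNK special tokens, then add words that occur at least `atleast` times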
def build_dataset(words, n_words, atleast=1):
count = [['GO', 0], ['PAD', 1], ['EOS', 2], ['UNK', 3]]
counter = collections.Counter(words).most_common(n_words)
counter = [i for i in counter if i[1] >= atleast]
count.extend(counter)
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
index = dictionary.get(word, 0)
if index == 0:
unk_count += 1
data.append(index)
count[0][1] = unk_count
reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reversed_dictionary
lines = open('movie_lines.txt', encoding='utf-8', errors='ignore').read().split('\n')
conv_lines = open('movie_conversations.txt', encoding='utf-8', errors='ignore').read().split('\n')
id2line = {}
for line in lines:
_line = line.split(' +++$+++ ')
if len(_line) == 5:
id2line[_line[0]] = _line[4]
convs = [ ]
for line in conv_lines[:-1]:
_line = line.split(' +++$+++ ')[-1][1:-1].replace("'","").replace(" ","")
convs.append(_line.split(','))
questions = []
answers = []
for conv in convs:
for i in range(len(conv)-1):
questions.append(id2line[conv[i]])
answers.append(id2line[conv[i+1]])
def clean_text(text):
text = text.lower()
text = re.sub(r"i'm", "i am", text)
text = re.sub(r"he's", "he is", text)
text = re.sub(r"she's", "she is", text)
text = re.sub(r"it's", "it is", text)
text = re.sub(r"that's", "that is", text)
    text = re.sub(r"what's", "what is", text)
text = re.sub(r"where's", "where is", text)
text = re.sub(r"how's", "how is", text)
text = re.sub(r"\'ll", " will", text)
text = re.sub(r"\'ve", " have", text)
text = re.sub(r"\'re", " are", text)
text = re.sub(r"\'d", " would", text)
text = re.sub(r"\'re", " are", text)
text = re.sub(r"won't", "will not", text)
text = re.sub(r"can't", "cannot", text)
text = re.sub(r"n't", " not", text)
text = re.sub(r"n'", "ng", text)
text = re.sub(r"'bout", "about", text)
text = re.sub(r"'til", "until", text)
text = re.sub(r"[-()\"#/@;:<>{}`+=~|.!?,]", "", text)
return ' '.join([i.strip() for i in filter(None, text.split())])
clean_questions = []
for question in questions:
clean_questions.append(clean_text(question))
clean_answers = []
for answer in answers:
clean_answers.append(clean_text(answer))
min_line_length = 2
max_line_length = 5
short_questions_temp = []
short_answers_temp = []
i = 0
for question in clean_questions:
if len(question.split()) >= min_line_length and len(question.split()) <= max_line_length:
short_questions_temp.append(question)
short_answers_temp.append(clean_answers[i])
i += 1
short_questions = []
short_answers = []
i = 0
for answer in short_answers_temp:
if len(answer.split()) >= min_line_length and len(answer.split()) <= max_line_length:
short_answers.append(answer)
short_questions.append(short_questions_temp[i])
i += 1
question_test = short_questions[500:550]
answer_test = short_answers[500:550]
short_questions = short_questions[:500]
short_answers = short_answers[:500]
concat_from = ' '.join(short_questions+question_test).split()
vocabulary_size_from = len(list(set(concat_from)))
data_from, count_from, dictionary_from, rev_dictionary_from = build_dataset(concat_from, vocabulary_size_from)
print('vocab from size: %d'%(vocabulary_size_from))
print('Most common words', count_from[4:10])
print('Sample data', data_from[:10], [rev_dictionary_from[i] for i in data_from[:10]])
print('filtered vocab size:',len(dictionary_from))
print("% of vocab used: {}%".format(round(len(dictionary_from)/vocabulary_size_from,4)*100))
concat_to = ' '.join(short_answers+answer_test).split()
vocabulary_size_to = len(list(set(concat_to)))
data_to, count_to, dictionary_to, rev_dictionary_to = build_dataset(concat_to, vocabulary_size_to)
print('vocab from size: %d'%(vocabulary_size_to))
print('Most common words', count_to[4:10])
print('Sample data', data_to[:10], [rev_dictionary_to[i] for i in data_to[:10]])
print('filtered vocab size:',len(dictionary_to))
print("% of vocab used: {}%".format(round(len(dictionary_to)/vocabulary_size_to,4)*100))
GO = dictionary_from['GO']
PAD = dictionary_from['PAD']
EOS = dictionary_from['EOS']
UNK = dictionary_from['UNK']
for i in range(len(short_answers)):
short_answers[i] += ' EOS'
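# Seq2seq chatbot: multi-layer LSTM encoder with a Bahdanau-attention LSTM decoder (TF 1.x contrib.seq2seq APIs)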
class Chatbot:
def __init__(self, size_layer, num_layers, embedded_size,
from_dict_size, to_dict_size, learning_rate,
batch_size, dropout = 0.5):
def lstm_cell(reuse=False):
return tf.nn.rnn_cell.LSTMCell(size_layer, initializer=tf.orthogonal_initializer(),
reuse=reuse)
def attention(encoder_out, seq_len, reuse=False):
attention_mechanism = tf.contrib.seq2seq.BahdanauAttention(num_units = size_layer,
memory = encoder_out,
memory_sequence_length = seq_len)
return tf.contrib.seq2seq.AttentionWrapper(
cell = tf.nn.rnn_cell.MultiRNNCell([lstm_cell(reuse) for _ in range(num_layers)]),
attention_mechanism = attention_mechanism,
attention_layer_size = size_layer)
self.X = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.int32, [None, None])
self.X_seq_len = tf.placeholder(tf.int32, [None])
self.Y_seq_len = tf.placeholder(tf.int32, [None])
# encoder
encoder_embeddings = tf.Variable(tf.random_uniform([from_dict_size, embedded_size], -1, 1))
encoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, self.X)
encoder_cells = tf.nn.rnn_cell.MultiRNNCell([lstm_cell() for _ in range(num_layers)])
self.encoder_out, self.encoder_state = tf.nn.dynamic_rnn(cell = encoder_cells,
inputs = encoder_embedded,
sequence_length = self.X_seq_len,
dtype = tf.float32)
self.encoder_state = tuple(self.encoder_state[-1] for _ in range(num_layers))
main = tf.strided_slice(self.Y, [0, 0], [batch_size, -1], [1, 1])
decoder_input = tf.concat([tf.fill([batch_size, 1], GO), main], 1)
# decoder
decoder_embeddings = tf.Variable(tf.random_uniform([to_dict_size, embedded_size], -1, 1))
decoder_cell = attention(self.encoder_out, self.X_seq_len)
dense_layer = tf.layers.Dense(to_dict_size)
training_helper = tf.contrib.seq2seq.TrainingHelper(
inputs = tf.nn.embedding_lookup(decoder_embeddings, decoder_input),
sequence_length = self.Y_seq_len,
time_major = False)
training_decoder = tf.contrib.seq2seq.BasicDecoder(
cell = decoder_cell,
helper = training_helper,
initial_state = decoder_cell.zero_state(batch_size, tf.float32).clone(cell_state=self.encoder_state),
output_layer = dense_layer)
training_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
decoder = training_decoder,
impute_finished = True,
maximum_iterations = tf.reduce_max(self.Y_seq_len))
predicting_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
embedding = decoder_embeddings,
start_tokens = tf.tile(tf.constant([GO], dtype=tf.int32), [batch_size]),
end_token = EOS)
predicting_decoder = tf.contrib.seq2seq.BasicDecoder(
cell = decoder_cell,
helper = predicting_helper,
initial_state = decoder_cell.zero_state(batch_size, tf.float32).clone(cell_state=self.encoder_state),
output_layer = dense_layer)
predicting_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
decoder = predicting_decoder,
impute_finished = True,
maximum_iterations = 2 * tf.reduce_max(self.X_seq_len))
self.training_logits = training_decoder_output.rnn_output
self.predicting_ids = predicting_decoder_output.sample_id
masks = tf.sequence_mask(self.Y_seq_len, tf.reduce_max(self.Y_seq_len), dtype=tf.float32)
self.cost = tf.contrib.seq2seq.sequence_loss(logits = self.training_logits,
targets = self.Y,
weights = masks)
self.optimizer = tf.train.AdamOptimizer(learning_rate).minimize(self.cost)
size_layer = 256
num_layers = 2
embedded_size = 128
learning_rate = 0.001
batch_size = 16
epoch = 20
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Chatbot(size_layer, num_layers, embedded_size, len(dictionary_from),
len(dictionary_to), learning_rate,batch_size)
sess.run(tf.global_variables_initializer())
def str_idx(corpus, dic):
X = []
for i in corpus:
ints = []
for k in i.split():
ints.append(dic.get(k,UNK))
X.append(ints)
return X
X = str_idx(short_questions, dictionary_from)
Y = str_idx(short_answers, dictionary_to)
X_test = str_idx(question_test, dictionary_from)
Y_test = str_idx(answer_test, dictionary_to)
def pad_sentence_batch(sentence_batch, pad_int):
padded_seqs = []
seq_lens = []
max_sentence_len = max([len(sentence) for sentence in sentence_batch])
for sentence in sentence_batch:
padded_seqs.append(sentence + [pad_int] * (max_sentence_len - len(sentence)))
seq_lens.append(len(sentence))
return padded_seqs, seq_lens
def check_accuracy(logits, Y):
acc = 0
for i in range(logits.shape[0]):
internal_acc = 0
count = 0
for k in range(len(Y[i])):
try:
                if Y[i][k] == logits[i][k]:
                    internal_acc += 1
                count += 1  # count every compared token, not only matches (otherwise accuracy is always 1 and can divide by zero)
                if Y[i][k] == EOS:
                    break
except:
break
acc += (internal_acc / count)
return acc / logits.shape[0]
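# Training loop: teacher-forced cross-entropy on padded minibatches; greedy decoding is only used for the accuracy metric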
for i in range(epoch):
total_loss, total_accuracy = 0, 0
for k in range(0, (len(short_questions) // batch_size) * batch_size, batch_size):
batch_x, seq_x = pad_sentence_batch(X[k: k+batch_size], PAD)
batch_y, seq_y = pad_sentence_batch(Y[k: k+batch_size], PAD)
predicted, loss, _ = sess.run([model.predicting_ids, model.cost, model.optimizer],
feed_dict={model.X:batch_x,
model.Y:batch_y,
model.X_seq_len:seq_x,
model.Y_seq_len:seq_y})
total_loss += loss
total_accuracy += check_accuracy(predicted,batch_y)
total_loss /= (len(short_questions) // batch_size)
total_accuracy /= (len(short_questions) // batch_size)
print('epoch: %d, avg loss: %f, avg accuracy: %f'%(i+1, total_loss, total_accuracy))
for i in range(len(batch_x)):
print('row %d'%(i+1))
print('QUESTION:',' '.join([rev_dictionary_from[n] for n in batch_x[i] if n not in [0,1,2,3]]))
print('REAL ANSWER:',' '.join([rev_dictionary_to[n] for n in batch_y[i] if n not in[0,1,2,3]]))
print('PREDICTED ANSWER:',' '.join([rev_dictionary_to[n] for n in predicted[i] if n not in[0,1,2,3]]),'\n')
batch_x, seq_x = pad_sentence_batch(X_test[:batch_size], PAD)
batch_y, seq_y = pad_sentence_batch(Y_test[:batch_size], PAD)
predicted = sess.run(model.predicting_ids, feed_dict={model.X:batch_x,model.X_seq_len:seq_x})
for i in range(len(batch_x)):
print('row %d'%(i+1))
print('QUESTION:',' '.join([rev_dictionary_from[n] for n in batch_x[i] if n not in [0,1,2,3]]))
print('REAL ANSWER:',' '.join([rev_dictionary_to[n] for n in batch_y[i] if n not in[0,1,2,3]]))
print('PREDICTED ANSWER:',' '.join([rev_dictionary_to[n] for n in predicted[i] if n not in[0,1,2,3]]),'\n')
```
# 00 - Exploration
```
%load_ext watermark
%watermark -v -n -m -p numpy,scipy,sklearn,pandas
# Magic commands must be in separate cells
# to properly display light background for
# plots with JupyterLab dark theme
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import pandas as pd
import sys
import matplotlib.pyplot as plt
import matplotlib as mpl
import matplotlib.ticker as mtick
import seaborn as sns
plt.style.use('ggplot')
sns.set()
# plt.style.use('seaborn')
mpl.style.use('seaborn')
pd.options.mode.chained_assignment = None # default='warn'
pd.options.display.max_columns = 999
pd.options.display.max_rows = 500
np.set_printoptions(precision=6)
DATA_PATH = '../data/'
print(pd.__version__)
```
# Read CSV
```
import os
RAW_PATH = os.path.join(DATA_PATH, 'raw', 'diabetic_data.csv')
raw_df = pd.read_csv(RAW_PATH, na_values='?')
# replace 'NO' with '>30'
raw_df['readmitted'] = raw_df['readmitted'].replace({'NO': '>30'})
print(raw_df.shape)
display(raw_df.head())
```
## Feature Info
- Null values
- data types (dtypes)
- unique values
```
print(f'Total num samples: {raw_df.shape[0]} \n')
null_df = raw_df.isnull().sum()
dtypes_df = raw_df.dtypes
nunique_df = raw_df.nunique(dropna=False)
summary_df = pd.concat([null_df, dtypes_df, nunique_df], axis=1)
summary_df.columns = ['num_null', 'dtype', 'nunique']
display(summary_df)
```
## Define Feature Types
```
ignore_features = [
'encounter_id',
'patient_nbr',
]
continuous_features = [
'time_in_hospital',
'num_lab_procedures',
'num_procedures',
'num_medications',
'number_outpatient',
'number_emergency',
'number_inpatient',
'number_diagnoses',
]
output_features = ['readmitted']
categorical_features = [var for var in raw_df.columns
if var not in ignore_features
and var not in continuous_features
and var not in output_features]
print(categorical_features)
```
## Inspect Continuous Features - Before Downsampling
```
axes = raw_df.hist(column=continuous_features, figsize=(16,12))
```
## Inspect Categorical Features - Before Downsampling
```
def display_categorical_features(df, features, ignore=None):
    ignore = ignore or []  # guard against ignore=None, which would make `var in ignore` raise a TypeError
    for var in features:
        if var in ignore:
            continue
        print(f'\n{var}: {df[var].nunique(dropna=False)} unique values \n')
        display(df[var].value_counts(dropna=False).sort_index())
        print('-'*50)
display_categorical_features(raw_df,
features=categorical_features+output_features,
ignore=['diag_1', 'diag_2', 'diag_3'])
raw_df['readmitted'].value_counts(normalize=True)
```
### This notebook queries station log files to check station locations and to track trends in thresholds and trigger rates
```
%pylab qt
import os
import pandas as pd
import read_logs
import datetime as dt
import numpy as np
```
### For reading in all logs from a single folder within a time range
Note: Within the parsing functions, the log files are examined for any bad lines (wrong number of entries or extremely bad GPS locations). If there are fewer than 26 such lines, the file is automatically backed up and the bad lines are removed. This is designed to be run on a Linux OS, so it may error on other OSs; in that case the bad lines are printed and can be removed manually before re-running.
```
useddir = '/home/dustdevil/Desktop/VSElogs/'
#useddir = '/home/dustdevil/Dropbox/logs_4_June_2012/'
start_time = dt.datetime(2016,4,9)
end_time = dt.datetime(2016,4,17,23,59)
days = np.array([start_time+dt.timedelta(days=i) for i in range((end_time-start_time).days+1)])
days_string = np.array([i.strftime("%y%m%d") for i in days])
logs = pd.DataFrame()
dir = os.listdir(useddir)
for file in dir:
if np.any(file[2:] == days_string):
        print(file)
logs = pd.concat([logs, read_logs.parsing(useddir+file, T_set='False', old7='False')])
# Set old7='True' if it is an old v10 file format with 7 primary columns instead of the current 8
logs.sort_index(inplace=True)
station_byid = logs.groupby('ID') # Easier
```
### A few different methods of interrogating based on groups and whether they will be reused or just plotted
```
fig = plt.figure()
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212, sharex=ax1)
for key, grp in station_byid:
ax1.plot(pd.rolling_mean(grp['Triggers'],300), label = key)
ax2.plot(grp['Threshold'], label=key)
plt.legend()
Just_one = station_byid.get_group('H') # Saves all of the H logs in a df
# pd.rolling_mean(Just_one['Threshold'], 120).plot()
Just_one['Triggers'][dt.datetime(2016,3,1):dt.datetime(2016,3,2)].plot()
```
### Quick voltage comparison
```
names = ['day', 'mon', 'date', 'hms', 'zone', 'year',
'V_load', 'V_batt', 'V_PV', 'I_load', 'I_batt', 'I_PV'
]
usecols=[0 ,1 ,2 ,3 ,4 ,5 ,9 ,13,17,21,25,29,]
def parser(*args):
ds = ' '.join(args)
    date = dt.datetime.strptime(ds,
'%a %b %d %H:%M:%S %Z %Y:',
)
return date
lma_pwr = pd.read_table('/home/dustdevil/Desktop/VSElogs/voltage_log_X', sep='\s*',#delim_whitespace=True,
usecols=usecols, date_parser=parser, #skiprows=10, skipfooter=15,
header=None, names=names, index_col=False, parse_dates={'UTC':[0,1,2,3,4,5]})
lma_pwr.set_index('UTC',inplace=True)
lma_pwr2 = pd.read_table('/home/dustdevil/Desktop/VSElogs/voltage_log_H', sep='\s*',#delim_whitespace=True,
usecols=usecols, date_parser=parser, #skiprows=10, skipfooter=15,
header=None, names=names, index_col=False, parse_dates={'UTC':[0,1,2,3,4,5]})
lma_pwr2.set_index('UTC',inplace=True)
fig = plt.figure()
ax1 = fig.add_subplot(411)
ax2 = fig.add_subplot(412, sharex=ax1)
ax3 = fig.add_subplot(413, sharex=ax1)
ax4 = fig.add_subplot(414, sharex=ax1)
for key, grp in station_byid:
ax1.plot(pd.rolling_mean(grp['Triggers'],300), label = key)
ax2.plot(grp['Threshold'], label=key)
ax3.plot(lma_pwr2['V_PV'][dt.datetime(2016,3,1):],label='H')
ax3.plot(lma_pwr['V_PV'][dt.datetime(2016,3,1):],label='X')
ax4.plot(lma_pwr2['V_batt'][dt.datetime(2016,3,1):],label='H')
ax4.plot(lma_pwr['V_batt'][dt.datetime(2016,3,1):],label='X')
ax1.legend()
ax2.legend()
ax3.legend()
```
# Hybridizing Models in VESIcal
One of the advantages of implementing the solubility models in a generic python module is the flexibility this affords the user in changing the way solubility models are defined and used. In particular, the structure allows any combination of pure-fluid models to be used together in modelling mixed-fluids, and fugacity or activity models can be quickly changed without modifying code. This allows advanced users to see how changing a fugacity or activity model implemented in any particular solubility model would affect model results.
To run this notebook, first VESIcal must be imported and a sample defined:
```
import VESIcal as v
mysample = v.Sample({'SiO2': 56.3,
'TiO2': 0.08,
'Al2O3': 15.6,
'Fe2O3': 0.207,
'Cr2O3': 0.0,
'FeO': 1.473,
'MnO': 0.0,
'MgO': 8.03,
'NiO': 0.0,
'CoO': 0.0,
'CaO': 9.43,
'Na2O': 3.98,
'K2O': 4.88,
'P2O5': 0.0,
'H2O': 6.5,
'CO2': 0.05})
```
## Using Model objects directly
The calculations shown in this manuscript utilise the python-class Calculation interfaces. When the class is called, the required model is usually selected from the default models using the model name as a string, e.g.:
```
calculation = v.calculate_dissolved_volatiles(sample=mysample, pressure=1000.0, X_fluid=0.1, model='ShishkinaIdealMixing')
```
When the `calculate_dissolved_volatiles` class is initiated, it retrieves a pre-defined model object instance. However, creating model objects directly affords greater control over how the calculation is performed. A model object for a pure fluid can be created by:
```
model_object = v.models.shishkina.carbon()
```
Any method that is used during solubility calculations can now be accessed directly. For example, the compositional dependence of CO$_2$ solubility is captured by the $\pi^*$ parameter in the --SHISHKINA-- parameterisation. The value of this parameter is calculated every time a solubility calculation is performed using the `ShishkinaCarbon` model, but is not accessible through the `Calculation` class interfaces. However, the method that calculates $\pi^*$ can be called directly from the model object:
```
model_object.PiStar(sample=mysample)
```
The available methods can be found when using JupyterLab or IPython by pressing the tab key after typing `model_object.`. Calculation methods can also be called directly from the model object, without using the `Calculation` class interface:
```
model_object.calculate_dissolved_volatiles(sample=mysample, pressure=1000.0)
```
This is computationally faster than using the `Calculation` interface, but does not automatically pre-process the sample composition, or run calibration checks. Alternatively, the `model_object` can be used with the `Calculation` class interface by passing the object in place of a string for the `model` variable:
```
calculation = v.calculate_dissolved_volatiles(sample=mysample, pressure=1000.0, model=model_object)
calculation.result
```
## Changing Model fugacity and activity models
This functionality is more powerful when the user makes changes to components of the model. For example, when every model object is initialized in VESIcal, it has a fugacity and activity model associated with it. Where models parameterise solubility as a function of pressure (or partial pressure) directly, as done by --SHISHKINA--, this is equivalent to assuming the fugacity is that of an ideal gas. By retrieving the fugacity model from the `model_object` we created above, we can see that this is the case:
```
model_object.fugacity_model
```
Other models, e.g., --DIXON--, parameterise solubility as a function of fugacity, calculated using an equation of state for the vapour phase. The default fugacity model for `DixonCarbon` is the --KERRICK AND JACOBS--, and is set when the model is initialized:
```
model_object = v.models.dixon.carbon()
model_object.fugacity_model
```
However, if we wanted to see how the calculation results would change were the --REDLICH-KWONG-- model used instead, we can change this component of the model:
```
model_object.set_fugacity_model(v.fugacity_models.fugacity_RK_co2())
model_object.fugacity_model
```
Any calculations now performed using `model_object` will use fugacities calculated with --REDLICH-KWONG-- in place of --KERRICK AND JACOBS--. Each model object also has an activity model associated with it. This allows for non-ideal solution of vapour species in the melt. Whilst none of the models presently within VESIcal use non-ideal activities, this would permit models such as --DUAN-- to be implemented within the VESIcal framework in the future.
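With the new fugacity model set, the earlier calculation calls can simply be repeated to see how the computed solubility shifts. The minimal sketch below reuses only calls already demonstrated above (the pressure of 1000 bars is an arbitrary example):
```
# dissolved CO2 from DixonCarbon, now using the swapped-in fugacity model
model_object.calculate_dissolved_volatiles(sample=mysample, pressure=1000.0)
# the default fugacity model can be restored afterwards if desired:
# model_object.set_fugacity_model(v.fugacity_models.fugacity_KJ81_co2())
```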
## Defining and using MixedFluid model objects
The model objects for mixed fluids have a similar structure, with one major difference. A `MixedFluid` model object is a generic model which may be implemented with any of the pure-fluid models within VESIcal. The default `MixedFluid` model object for --SHISHKINA-- is defined by:
```
mixed_model = v.model_classes.MixedFluid({'CO2':v.models.shishkina.carbon(),
'H2O':v.models.shishkina.water()})
```
As with the pure-fluid model objects, calculations can be performed directly using the model object, e.g.:
```
mixed_model.calculate_equilibrium_fluid_comp(sample=mysample, pressure=1000.0)
```
or by supplying the `Calculate` class interface with `mixed_model` as the value of `model`:
```
calculation = v.calculate_equilibrium_fluid_comp(sample=mysample, pressure=1000.0, model=mixed_model)
calculation.result
```
If we wanted to change the fugacity (or activity) models used in the calculation, we must access the pure-fluid model objects stored within the mixed-fluid model object:
```
mixed_model.models[0].fugacity_model
mixed_model.models[0].set_fugacity_model(v.fugacity_models.fugacity_KJ81_co2())
mixed_model.models[0].fugacity_model
```
The `MixedFluid` model object also allows different solubility models to be combined. For example, if we wanted to use the --ALLISON-- CO$_2$ solubility model in conjunction with a different water solubility model, we could define our own `MixedFluid` model object:
```
mixed_model = v.model_classes.MixedFluid({'CO2':v.models.allison.carbon(),
'H2O':v.models.iaconomarziano.water()})
```
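This custom combination behaves like any other mixed-fluid model, so it can be used directly or passed to the `Calculate` class interfaces. A minimal sketch reusing the calls shown above:
```
calculation = v.calculate_equilibrium_fluid_comp(sample=mysample, pressure=1000.0, model=mixed_model)
calculation.result
```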
```
# default_exp models.afn
```
# AFN
> A PyTorch implementation of AFN (Adaptive Factorization Network).
```
#hide
from nbdev.showdoc import *
from fastcore.nb_imports import *
from fastcore.test import *
```
## v1
```
#export
import torch
from torch import nn
from recohut.models.layers.embedding import EmbeddingLayer
from recohut.models.layers.common import MLP_Layer, LR_Layer
from recohut.models.bases.ctr import CTRModel
#export
class AFN(CTRModel):
def __init__(self,
feature_map,
model_id="AFN",
task="binary_classification",
learning_rate=1e-3,
embedding_initializer="torch.nn.init.normal_(std=1e-4)",
embedding_dim=10,
ensemble_dnn=True,
dnn_hidden_units=[64, 64, 64],
dnn_activations="ReLU",
dnn_dropout=0,
afn_hidden_units=[64, 64, 64],
afn_activations="ReLU",
afn_dropout=0,
logarithmic_neurons=5,
batch_norm=True,
**kwargs):
super(AFN, self).__init__(feature_map,
model_id=model_id,
**kwargs)
self.num_fields = feature_map.num_fields
self.embedding_layer = EmbeddingLayer(feature_map, embedding_dim)
self.coefficient_W = nn.Linear(self.num_fields, logarithmic_neurons, bias=False)
self.dense_layer = MLP_Layer(input_dim=embedding_dim * logarithmic_neurons,
output_dim=1,
hidden_units=afn_hidden_units,
hidden_activations=afn_activations,
output_activation=None,
dropout_rates=afn_dropout,
batch_norm=batch_norm,
use_bias=True)
self.log_batch_norm = nn.BatchNorm1d(self.num_fields)
self.exp_batch_norm = nn.BatchNorm1d(logarithmic_neurons)
self.ensemble_dnn = ensemble_dnn
if ensemble_dnn:
            # second embedding table for the ensembled DNN branch
            self.embedding_layer2 = EmbeddingLayer(feature_map, embedding_dim)
self.dnn = MLP_Layer(input_dim=embedding_dim * self.num_fields,
output_dim=1,
hidden_units=dnn_hidden_units,
hidden_activations=dnn_activations,
output_activation=None,
dropout_rates=dnn_dropout,
batch_norm=batch_norm,
use_bias=True)
self.fc = nn.Linear(2, 1)
self.output_activation = self.get_final_activation(task)
self.init_weights(embedding_initializer=embedding_initializer)
def forward(self, inputs):
feature_emb = self.embedding_layer(inputs)
dnn_input = self.logarithmic_net(feature_emb)
afn_out = self.dense_layer(dnn_input)
if self.ensemble_dnn:
            feature_emb2 = self.embedding_layer2(inputs)  # use `inputs` (X is undefined in this scope)
            concate_feature_emb = torch.flatten(feature_emb2, start_dim=1)  # (batch, num_fields * embedding_dim) for the DNN branch
            dnn_out = self.dnn(concate_feature_emb)
y_pred = self.fc(torch.cat([afn_out, dnn_out], dim=-1))
else:
y_pred = afn_out
if self.output_activation is not None:
y_pred = self.output_activation(y_pred)
return y_pred
def logarithmic_net(self, feature_emb):
feature_emb = torch.abs(feature_emb)
feature_emb = torch.clamp(feature_emb, min=1e-5) # ReLU with min 1e-5 (better than 1e-7 suggested in paper)
log_feature_emb = torch.log(feature_emb) # element-wise log
log_feature_emb = self.log_batch_norm(log_feature_emb) # batch_size * num_fields * embedding_dim
logarithmic_out = self.coefficient_W(log_feature_emb.transpose(2, 1)).transpose(1, 2)
cross_out = torch.exp(logarithmic_out) # element-wise exp
cross_out = self.exp_batch_norm(cross_out) # batch_size * logarithmic_neurons * embedding_dim
concat_out = torch.flatten(cross_out, start_dim=1)
return concat_out
```
Example
```
params = {'model_id': 'AFN',
'data_dir': '/content/data',
'model_root': './checkpoints/',
'learning_rate': 1e-3,
'batch_norm': False,
'optimizer': 'adamw',
'task': 'binary_classification',
'loss': 'binary_crossentropy',
'metrics': ['logloss', 'AUC'],
'embedding_dim': 10,
'logarithmic_neurons': 1200,
'afn_hidden_units': [400, 400, 400],
'afn_activations': 'relu',
'afn_dropout': 0,
'ensemble_dnn': False,
'dnn_hidden_units': [400, 400, 400],
'dnn_activations': 'relu',
'dnn_dropout': 0,
'batch_size': 64,
'epochs': 3,
'shuffle': True,
'seed': 2019,
'use_hdf5': True,
'workers': 1,
'verbose': 0}
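# note: `ds` (a prepared recohut data module) and `pl_trainer` (a PyTorch Lightning training helper) are assumed to be defined elsewhere and are not created in this notebook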
model = AFN(ds.dataset.feature_map, **params)
pl_trainer(model, ds, max_epochs=5)
```
## v2
```
#export
import math
import torch
import torch.nn.functional as F
from recohut.models.layers.common import FeaturesEmbedding, FeaturesLinear, MultiLayerPerceptron
#exporti
class LNN(torch.nn.Module):
"""
A pytorch implementation of LNN layer
Input shape
- A 3D tensor with shape: ``(batch_size,field_size,embedding_size)``.
Output shape
- 2D tensor with shape:``(batch_size,LNN_dim*embedding_size)``.
Arguments
- **in_features** : Embedding of feature.
- **num_fields**: int.The field size of feature.
- **LNN_dim**: int.The number of Logarithmic neuron.
- **bias**: bool.Whether or not use bias in LNN.
"""
def __init__(self, num_fields, embed_dim, LNN_dim, bias=False):
super(LNN, self).__init__()
self.num_fields = num_fields
self.embed_dim = embed_dim
self.LNN_dim = LNN_dim
self.lnn_output_dim = LNN_dim * embed_dim
self.weight = torch.nn.Parameter(torch.Tensor(LNN_dim, num_fields))
if bias:
self.bias = torch.nn.Parameter(torch.Tensor(LNN_dim, embed_dim))
else:
self.register_parameter('bias', None)
self.reset_parameters()
def reset_parameters(self):
stdv = 1. / math.sqrt(self.weight.size(1))
self.weight.data.uniform_(-stdv, stdv)
if self.bias is not None:
self.bias.data.uniform_(-stdv, stdv)
def forward(self, x):
"""
:param x: Long tensor of size ``(batch_size, num_fields, embedding_size)``
"""
embed_x_abs = torch.abs(x) # Computes the element-wise absolute value of the given input tensor.
embed_x_afn = torch.add(embed_x_abs, 1e-7)
# Logarithmic Transformation
embed_x_log = torch.log1p(embed_x_afn) # torch.log1p and torch.expm1
lnn_out = torch.matmul(self.weight, embed_x_log)
if self.bias is not None:
lnn_out += self.bias
lnn_exp = torch.expm1(lnn_out)
output = F.relu(lnn_exp).contiguous().view(-1, self.lnn_output_dim)
return output
#export
class AFN_v2(torch.nn.Module):
"""
A pytorch implementation of AFN.
Reference:
Cheng W, et al. Adaptive Factorization Network: Learning Adaptive-Order Feature Interactions, 2019.
"""
def __init__(self, field_dims, embed_dim, LNN_dim, mlp_dims, dropouts):
super().__init__()
self.num_fields = len(field_dims)
self.linear = FeaturesLinear(field_dims) # Linear
self.embedding = FeaturesEmbedding(field_dims, embed_dim) # Embedding
self.LNN_dim = LNN_dim
self.LNN_output_dim = self.LNN_dim * embed_dim
self.LNN = LNN(self.num_fields, embed_dim, LNN_dim)
self.mlp = MultiLayerPerceptron(self.LNN_output_dim, mlp_dims, dropouts[0])
def forward(self, x):
"""
:param x: Long tensor of size ``(batch_size, num_fields)``
"""
embed_x = self.embedding(x)
lnn_out = self.LNN(embed_x)
x = self.linear(x) + self.mlp(lnn_out)
return torch.sigmoid(x.squeeze(1))
```
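A quick smoke test of `AFN_v2` on random categorical inputs might look like the sketch below. The field dimensions and layer sizes are arbitrary assumptions, and it presumes the `recohut` layer helpers follow the usual `pytorch-fm` signatures:
```
import torch

field_dims = [10, 20, 30]   # three hypothetical categorical fields
model = AFN_v2(field_dims, embed_dim=16, LNN_dim=8, mlp_dims=(64, 32), dropouts=(0.2, 0.2))
x = torch.randint(low=0, high=10, size=(4, len(field_dims)))  # batch of 4 samples, valid indices for every field
y_pred = model(x)           # sigmoid outputs, shape (4,)
print(y_pred.shape)
```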
> **References**
> - Cheng W, et al. Adaptive Factorization Network: Learning Adaptive-Order Feature Interactions, 2019.
> - https://github.com/rixwew/pytorch-fm/blob/master/torchfm/model/afn.py
```
#hide
%reload_ext watermark
%watermark -a "Sparsh A." -m -iv -u -t -d -p recohut
```
|
github_jupyter
|
# default_exp models.afn
#hide
from nbdev.showdoc import *
from fastcore.nb_imports import *
from fastcore.test import *
#export
import torch
from torch import nn
from recohut.models.layers.embedding import EmbeddingLayer
from recohut.models.layers.common import MLP_Layer, LR_Layer
from recohut.models.bases.ctr import CTRModel
#export
class AFN(CTRModel):
def __init__(self,
feature_map,
model_id="AFN",
task="binary_classification",
learning_rate=1e-3,
embedding_initializer="torch.nn.init.normal_(std=1e-4)",
embedding_dim=10,
ensemble_dnn=True,
dnn_hidden_units=[64, 64, 64],
dnn_activations="ReLU",
dnn_dropout=0,
afn_hidden_units=[64, 64, 64],
afn_activations="ReLU",
afn_dropout=0,
logarithmic_neurons=5,
batch_norm=True,
**kwargs):
super(AFN, self).__init__(feature_map,
model_id=model_id,
**kwargs)
self.num_fields = feature_map.num_fields
self.embedding_layer = EmbeddingLayer(feature_map, embedding_dim)
self.coefficient_W = nn.Linear(self.num_fields, logarithmic_neurons, bias=False)
self.dense_layer = MLP_Layer(input_dim=embedding_dim * logarithmic_neurons,
output_dim=1,
hidden_units=afn_hidden_units,
hidden_activations=afn_activations,
output_activation=None,
dropout_rates=afn_dropout,
batch_norm=batch_norm,
use_bias=True)
self.log_batch_norm = nn.BatchNorm1d(self.num_fields)
self.exp_batch_norm = nn.BatchNorm1d(logarithmic_neurons)
self.ensemble_dnn = ensemble_dnn
if ensemble_dnn:
self.embedding_layer2 = EmbeddingLayer(feature_map,
embedding_dim,
embedding_dropout)
self.dnn = MLP_Layer(input_dim=embedding_dim * self.num_fields,
output_dim=1,
hidden_units=dnn_hidden_units,
hidden_activations=dnn_activations,
output_activation=None,
dropout_rates=dnn_dropout,
batch_norm=batch_norm,
use_bias=True)
self.fc = nn.Linear(2, 1)
self.output_activation = self.get_final_activation(task)
self.init_weights(embedding_initializer=embedding_initializer)
def forward(self, inputs):
feature_emb = self.embedding_layer(inputs)
dnn_input = self.logarithmic_net(feature_emb)
afn_out = self.dense_layer(dnn_input)
if self.ensemble_dnn:
feature_emb_list2 = self.embedding_layer2(X)
concate_feature_emb = torch.cat(feature_emb_list2, dim=1)
dnn_out = self.dnn(concate_feature_emb)
y_pred = self.fc(torch.cat([afn_out, dnn_out], dim=-1))
else:
y_pred = afn_out
if self.output_activation is not None:
y_pred = self.output_activation(y_pred)
return y_pred
def logarithmic_net(self, feature_emb):
feature_emb = torch.abs(feature_emb)
feature_emb = torch.clamp(feature_emb, min=1e-5) # ReLU with min 1e-5 (better than 1e-7 suggested in paper)
log_feature_emb = torch.log(feature_emb) # element-wise log
log_feature_emb = self.log_batch_norm(log_feature_emb) # batch_size * num_fields * embedding_dim
logarithmic_out = self.coefficient_W(log_feature_emb.transpose(2, 1)).transpose(1, 2)
cross_out = torch.exp(logarithmic_out) # element-wise exp
cross_out = self.exp_batch_norm(cross_out) # batch_size * logarithmic_neurons * embedding_dim
concat_out = torch.flatten(cross_out, start_dim=1)
return concat_out
params = {'model_id': 'AFN',
'data_dir': '/content/data',
'model_root': './checkpoints/',
'learning_rate': 1e-3,
'batch_norm': False,
'optimizer': 'adamw',
'task': 'binary_classification',
'loss': 'binary_crossentropy',
'metrics': ['logloss', 'AUC'],
'embedding_dim': 10,
'logarithmic_neurons': 1200,
'afn_hidden_units': [400, 400, 400],
'afn_activations': 'relu',
'afn_dropout': 0,
'ensemble_dnn': False,
'dnn_hidden_units': [400, 400, 400],
'dnn_activations': 'relu',
'dnn_dropout': 0,
'batch_size': 64,
'epochs': 3,
'shuffle': True,
'seed': 2019,
'use_hdf5': True,
'workers': 1,
'verbose': 0}
model = AFN(ds.dataset.feature_map, **params)
pl_trainer(model, ds, max_epochs=5)
#export
import math
import torch
import torch.nn.functional as F
from recohut.models.layers.common import FeaturesEmbedding, FeaturesLinear, MultiLayerPerceptron
#exporti
class LNN(torch.nn.Module):
"""
A pytorch implementation of LNN layer
Input shape
- A 3D tensor with shape: ``(batch_size,field_size,embedding_size)``.
Output shape
- 2D tensor with shape:``(batch_size,LNN_dim*embedding_size)``.
Arguments
- **in_features** : Embedding of feature.
- **num_fields**: int.The field size of feature.
- **LNN_dim**: int.The number of Logarithmic neuron.
- **bias**: bool.Whether or not use bias in LNN.
"""
def __init__(self, num_fields, embed_dim, LNN_dim, bias=False):
super(LNN, self).__init__()
self.num_fields = num_fields
self.embed_dim = embed_dim
self.LNN_dim = LNN_dim
self.lnn_output_dim = LNN_dim * embed_dim
self.weight = torch.nn.Parameter(torch.Tensor(LNN_dim, num_fields))
if bias:
self.bias = torch.nn.Parameter(torch.Tensor(LNN_dim, embed_dim))
else:
self.register_parameter('bias', None)
self.reset_parameters()
def reset_parameters(self):
stdv = 1. / math.sqrt(self.weight.size(1))
self.weight.data.uniform_(-stdv, stdv)
if self.bias is not None:
self.bias.data.uniform_(-stdv, stdv)
def forward(self, x):
"""
:param x: Long tensor of size ``(batch_size, num_fields, embedding_size)``
"""
embed_x_abs = torch.abs(x) # Computes the element-wise absolute value of the given input tensor.
embed_x_afn = torch.add(embed_x_abs, 1e-7)
# Logarithmic Transformation
embed_x_log = torch.log1p(embed_x_afn) # torch.log1p and torch.expm1
lnn_out = torch.matmul(self.weight, embed_x_log)
if self.bias is not None:
lnn_out += self.bias
lnn_exp = torch.expm1(lnn_out)
output = F.relu(lnn_exp).contiguous().view(-1, self.lnn_output_dim)
return output
#export
class AFN_v2(torch.nn.Module):
"""
A pytorch implementation of AFN.
Reference:
Cheng W, et al. Adaptive Factorization Network: Learning Adaptive-Order Feature Interactions, 2019.
"""
def __init__(self, field_dims, embed_dim, LNN_dim, mlp_dims, dropouts):
super().__init__()
self.num_fields = len(field_dims)
self.linear = FeaturesLinear(field_dims) # Linear
self.embedding = FeaturesEmbedding(field_dims, embed_dim) # Embedding
self.LNN_dim = LNN_dim
self.LNN_output_dim = self.LNN_dim * embed_dim
self.LNN = LNN(self.num_fields, embed_dim, LNN_dim)
self.mlp = MultiLayerPerceptron(self.LNN_output_dim, mlp_dims, dropouts[0])
def forward(self, x):
"""
:param x: Long tensor of size ``(batch_size, num_fields)``
"""
embed_x = self.embedding(x)
lnn_out = self.LNN(embed_x)
x = self.linear(x) + self.mlp(lnn_out)
return torch.sigmoid(x.squeeze(1))
#hide
%reload_ext watermark
%watermark -a "Sparsh A." -m -iv -u -t -d -p recohut
```
import numpy as np
import torch
```
# Model $f(\cdot)=\mathrm{ReLU}(\tanh(\cdot + \mathrm{bias}))$
A model implementing BP-TRW and BP for the experiments of figures 6, 7, and 8.
```
dtype = torch.float
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# bias = False : initialize the bias vectors as zero vectors
# bias = True  : initialize the bias vectors from N(initial_mean, initial_var), where initial_var is used as the standard deviation
# N_layer_list : a list containing the number of neurons in each layer. For example, N_layer_list = [100,20,30] creates a network with an input layer of size 100, a single hidden layer of size 20, and an output layer of size 30
# Forward and backward weight matrices are initialized from N(initial_mean, initial_var), where initial_var is used as the standard deviation
class my_network1():
def __init__(self , N_layer_list , bias = False , initial_mean = 0 , initial_var = 0.1):
self.norm_W_list = []
self.norm_B_list = []
self.bias_vec = []
self.N_layer_list = N_layer_list
self.W =[]
self.L =[]
self.Z =[]
self.delta_BP =[]
self.delta_FA =[]
self.E_BP =[]
self.E_FA =[]
self.B =[]
self.DeltaW_bp = []
self.DeltaW_fa = []
self.Delta_bias_bp = []
self.Delta_bias_fa = []
self.bias_vec.append(None)
self.Delta_bias_bp.append(None)
self.Delta_bias_fa.append(None)
self.num_layers = len(self.N_layer_list)-1
for i in range( self.num_layers ):
self.W.append( ((torch.randn( N_layer_list[i] , N_layer_list[i+1] )) ) .to(device).to(dtype) * initial_var + initial_mean )
self.DeltaW_bp.append( torch.zeros_like(self.W[-1]).to(device) )
self.DeltaW_fa.append( torch.zeros_like(self.W[-1]).to(device) )
self.B.append( ((torch.randn(self.N_layer_list[i] , self.N_layer_list[i+1])) ).t().to(device).to(dtype) * initial_var + initial_mean )
if bias==True :
self.bias_vec.append ( torch.randn( 1 , self.N_layer_list[i+1] ) .to(device).to(dtype) * initial_var + initial_mean )
elif bias==False :
self.bias_vec.append ( torch.zeros( 1 , self.N_layer_list[i+1] ) .to(device).to(dtype) )
self.Delta_bias_bp.append( torch.zeros( 1 , self.N_layer_list[i+1] ).to(device).to(dtype) )
self.Delta_bias_fa.append( torch.zeros( 1 , self.N_layer_list[i+1] ) .to(device).to(dtype) )
for i in range( self.num_layers + 1 ):
self.L.append( None )
self.Z.append( None )
self.delta_FA.append( None )
self.delta_BP.append( None )
self.E_FA.append( None )
self.E_BP.append( None )
def forward( self , X ):
self.L[0] = X
for i in range( self.num_layers ):
self.Z[i+1] = torch.matmul( self.L[i] , self.W[i] ) + self.bias_vec [i+1]
self.L[i+1] = self.activation1( self.Z[i+1] )
return self.L[-1]
def backprop(self, E_l ):
self.E_BP[-1] = E_l
self.delta_BP[-1] = torch.mul( self.derivative_activation1( self.Z[-1] ) , self.E_BP[-1] )
for i in range(len(self.N_layer_list)-2):
self.E_BP[ -2 - i] = torch.matmul( self.delta_BP[-1-i] , torch.transpose( self.W[-1-i] , 0,1) )
self.delta_BP[-2-i] = torch.mul ( self.derivative_activation1(self.Z[-2-i] ) , self.E_BP[ -2 - i] )
for i in range(len(self.N_layer_list)-1):
self.DeltaW_bp[i] = torch.matmul( torch.transpose( self.L[i] , 0,1) , self.delta_BP[i+1] )
for i in range(1,len(self.N_layer_list)):
self.Delta_bias_bp[i] = self.delta_BP[i] . sum(dim=0) .view(1,-1)
return self.DeltaW_bp , self.Delta_bias_bp
def BP_TRW(self, E ):
self.E_FA[-1 ] = E
self.delta_FA[-1] = torch.mul( self.derivative_activation1( self.Z[-1] ) , self.E_FA[-1 ] )
for i in range(len(self.N_layer_list)-2):
self.E_FA[ -2 - i] = torch.matmul( self.delta_FA[-1-i] , self.B[-1-i] )
self.delta_FA[-2-i] = torch.mul (self.derivative_activation1(self.Z[-2-i] ) , self.E_FA[ -2 - i ] )
for i in range(len(self.N_layer_list)-1):
self.DeltaW_fa[i] = torch.matmul( torch.transpose( self.L[i] , 0,1) , self.delta_FA[i+1] )
for i in range(1,len(self.N_layer_list)):
self.Delta_bias_fa[i] = self.delta_FA[i] . sum(dim=0) .view(1,-1)
return self.DeltaW_fa , self.Delta_bias_fa
def activation1(self,x):
return torch.tanh(torch.relu(x))
def derivative_activation1(self,x):
m = (x > 0) * 1
x = torch.cosh(x)
x = torch.mul ( x , x )
x = 1/x
return torch.mul ( x , m )
def set_learning_rate(self , lr):
self.lr = lr
def step_W(self,deltaW):
for i in range(len(self.N_layer_list)-1):
self.W[i] = torch.add( self.W[i] , deltaW[i] ,alpha= self.lr )
def step_bias(self,delta_bias):
for i in range(1,len(self.N_layer_list)):
self.bias_vec[i] = torch.add( self.bias_vec[i] , delta_bias[i] ,alpha= self.lr )
def normalize_W(self , norm = None ):
if norm==None:
for i in range(len(self.N_layer_list)-1):
self.W[i] = self.W[i] / self.W[i].norm() * self.norm_W_list[i]
elif type(norm)==list :
for i in range(len(self.N_layer_list)-1):
self.W[i] = self.W[i] / self.W[i].norm() * norm [i]
else :
for i in range(len(self.N_layer_list)-1):
self.W[i] = self.W[i] / self.W[i].norm() * norm
def normalize_B(self , norm = None ):
if norm==None:
for i in range(len(self.N_layer_list)-1):
self.B[i] = self.B[i] / self.B[i].norm() * self.norm_B_list[i]
elif type(norm)==list :
for i in range(len(self.N_layer_list)-1):
self.B[i] = self.B[i] / self.B[i].norm() * norm [i]
else :
for i in range(len(self.N_layer_list)-1):
self.B[i] = self.B[i] / self.B[i].norm() * norm
def match_B_norm_to_W_norm(self):
for i in range(len(self.N_layer_list)-1):
self.B[i] = self.B[i] / self.B[i].norm() *self.W[i].norm()
def save_norm( self ):
self.norm_W_list = []
self.norm_B_list = []
for i in range(len(self.N_layer_list)-1):
self.norm_W_list . append( self.W[i] .norm() )
self.norm_B_list . append( self.B[i] .norm() )
def column_normalize_W(self , norm = None ):
if norm==None:
for l in range(len(self.N_layer_list)-1):
self.W[l] = torch.mul( torch.div( self.W[l] , self.W[l].norm(dim=0).view( [1,-1] ) ) , self.norm_W_list[l].view( [1,-1] ) )
elif type(norm)==list :
for l in range(len(self.N_layer_list)-1):
self.W[l] = torch.div( self.W[l] , self.W[l].norm(dim=0).view( [1,-1] ) ) * norm[l]
else :
for l in range(len(self.N_layer_list)-1):
self.W[l] = torch.div( self.W[l] , self.W[l].norm(dim=0).view( [1,-1] ) ) * norm
def column_normalize_B(self , norm = None ):
if norm==None:
for l in range(len(self.N_layer_list)-1):
self.B[l] = torch.mul( torch.div( self.B[l] , self.B[l].norm(dim=0).view( [1,-1] ) ) , self.norm_B_list[l].view( [1,-1] ) )
elif type(norm)==list :
for l in range(len(self.N_layer_list)-1):
self.B[l] = torch.div( self.B[l] , self.B[l].norm(dim=0).view( [1,-1] ) ) * norm[l]
else :
for l in range(len(self.N_layer_list)-1):
self.B[l] = torch.div( self.B[l] , self.B[l].norm(dim=0).view( [1,-1] ) ) * norm
def seed_norms( self , from1 , till1 ):
for l in range(len(self.N_layer_list)-1):
self.norm_W_list .append( torch.rand( self.N_layer_list[l+1] ) .to(device) .to(dtype) *( till1 - from1 ) + from1 )
self.norm_B_list .append( torch.rand( self.N_layer_list[l] ) .to(device).to(dtype) *( till1 - from1 ) + from1 )
def set_norm( self , norm ):
self.norm_W_list = []
self.norm_B_list = []
for i in range(len(self.N_layer_list)-1):
self.norm_W_list . append(norm )
self.norm_B_list . append(norm )
def set_norm_LIST( self , norm_LIST ):
self.norm_W_list = []
for i in range(len(self.N_layer_list)-1):
self.norm_W_list . append(norm_LIST[i] )
```
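As a quick sanity check, here is a minimal usage sketch (not part of the original experiments): it builds a small network, runs a forward pass on random data, and computes both the BP and BP-TRW updates for the same error signal. The layer sizes, batch size, targets, and learning rate below are arbitrary illustrative choices.
```
# Illustrative only: arbitrary layer sizes, random data, and random targets
net = my_network1([20, 15, 10, 5], bias=True)
net.set_learning_rate(1e-3)

X = torch.randn(32, 20).to(device).to(dtype)   # batch of 32 inputs
T = torch.rand(32, 5).to(device).to(dtype)     # arbitrary targets

out = net.forward(X)
E = T - out                                    # error signal fed to both update rules

dW_bp, db_bp = net.backprop(E)                 # exact backprop updates
dW_fa, db_fa = net.BP_TRW(E)                   # updates through the fixed random B matrices

net.step_W(dW_bp)                              # apply one step (use dW_fa / db_fa for BP-TRW training)
net.step_bias(db_bp)
```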
# Riskfolio-Lib Tutorial:
<br>__[Financionerioncios](https://financioneroncios.wordpress.com)__
<br>__[Orenji](https://www.orenj-i.net)__
<br>__[Riskfolio-Lib](https://riskfolio-lib.readthedocs.io/en/latest/)__
<br>__[Dany Cajas](https://www.linkedin.com/in/dany-cajas/)__
<a href='https://ko-fi.com/B0B833SXD' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://cdn.ko-fi.com/cdn/kofi1.png?v=2' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
## Tutorial 34: Comparing Covariance Estimation Methods
## 1. Downloading the data:
```
import numpy as np
import pandas as pd
import yfinance as yf
import warnings
warnings.filterwarnings("ignore")
pd.options.display.float_format = '{:.4%}'.format
# Date range
start = '2016-01-01'
end = '2019-12-30'
# Tickers of assets
assets = ['JCI', 'TGT', 'CMCSA', 'CPB', 'MO', 'APA', 'MMC', 'JPM',
'ZION', 'PSA', 'BAX', 'BMY', 'LUV', 'PCAR', 'TXT', 'TMO',
'DE', 'MSFT', 'HPQ', 'SEE', 'VZ', 'CNP', 'NI', 'T', 'BA']
assets.sort()
# Downloading data
data = yf.download(assets, start = start, end = end)
data = data.loc[:,('Adj Close', slice(None))]
data.columns = assets
# Calculating returns
Y = data[assets].pct_change().dropna()
display(Y.head())
```
## 2. Estimating Mean Variance Portfolios
### 2.1 Calculating the portfolio that minimizes Variance.
```
import riskfolio as rp
# Building the portfolio object
port = rp.Portfolio(returns=Y)
# Calculating optimal portfolio
# Select method and estimate input parameters:
method_mu='hist' # Method to estimate expected returns based on historical data.
method_covs = ['hist', 'ledoit', 'oas', 'shrunk', 'gl', 'ewma1',
'ewma2','jlogo', 'fixed', 'spectral', 'shrink']
# Estimate optimal portfolio:
model='Classic' # Could be Classic (historical), BL (Black Litterman) or FM (Factor Model)
rm = 'MV' # Risk measure used, this time will be variance
obj = 'MinRisk' # Objective function, could be MinRisk, MaxRet, Utility or Sharpe
hist = True # Use historical scenarios for risk measures that depend on scenarios
rf = 0 # Risk free rate
l = 0 # Risk aversion factor, only useful when obj is 'Utility'
w_s = pd.DataFrame([])
for i in method_covs:
port.assets_stats(method_mu=method_mu, method_cov=i, d=0.94)
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
w_s = pd.concat([w_s, w], axis=1)
w_s.columns = method_covs
display(w_s.style.format("{:.2%}").background_gradient(cmap='YlGn'))
import matplotlib.pyplot as plt
# Plotting a comparison of assets weights for each portfolio
fig, ax = plt.subplots(figsize=(14,6))
w_s.plot.bar(ax=ax, width=0.8)
```
### 2.2 Calculating the portfolio that maximizes Sharpe ratio.
```
obj = 'Sharpe' # Objective function, could be MinRisk, MaxRet, Utility or Sharpe
w_s = pd.DataFrame([])
for i in method_covs:
port.assets_stats(method_mu=method_mu, method_cov=i, d=0.94)
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
w_s = pd.concat([w_s, w], axis=1)
w_s.columns = method_covs
display(w_s.style.format("{:.2%}").background_gradient(cmap='YlGn'))
# Plotting a comparison of assets weights for each portfolio
fig, ax = plt.subplots(figsize=(14,6))
w_s.plot.bar(ax=ax, width=0.8)
```
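As a small follow-up that is not part of the original tutorial, the optimized portfolios can be compared on a common yardstick, for example the annualized volatility implied by the plain sample covariance of the returns. The sketch below assumes the `w_s` and `Y` objects defined above and uses 252 trading days per year for annualization.
```
# Follow-up sketch: sample-covariance volatility of each optimized portfolio
cov_hist = Y.cov() * 252  # annualized sample covariance of daily returns

vols = {}
for col in w_s.columns:
    w = w_s[col].reindex(cov_hist.index).fillna(0).values.reshape(-1, 1)
    vols[col] = np.sqrt(w.T @ cov_hist.values @ w).item()

display(pd.Series(vols, name='Ann. Volatility').to_frame().style.format('{:.2%}'))
```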
```
import pandas as pd
import numpy as np
from sklearn.preprocessing import MinMaxScaler
with open('/content/drive/MyDrive/Colab Notebooks/PRCO304HK deep learning for ids/kddcup.names', 'r') as infile:
kdd_names = infile.readlines()
kdd_cols = [x.split(':')[0] for x in kdd_names[1:]]
kdd_cols += ['class', 'difficulty']
kdd = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/PRCO304HK deep learning for ids/KDDTrain+.txt', names=kdd_cols)
kdd_t = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/PRCO304HK deep learning for ids/KDDTest+.txt', names=kdd_cols)
kdd.head()
kdd_cols = [kdd.columns[0]] + sorted(list(set(kdd.protocol_type.values))) + sorted(list(set(kdd.service.values))) + sorted(list(set(kdd.flag.values))) + kdd.columns[4:].tolist()
attack_map = [x.strip().split() for x in open('/content/drive/MyDrive/Colab Notebooks/PRCO304HK deep learning for ids/training_attack_types', 'r')]
attack_map = {k:v for (k,v) in attack_map}
attack_map
kdd['class'] = kdd['class'].replace(attack_map)
kdd_t['class'] = kdd_t['class'].replace(attack_map)
def cat_encode(df, col):
return pd.concat([df.drop(col, axis=1), pd.get_dummies(df[col].values)], axis=1)
def log_trns(df, col):
return df[col].apply(np.log1p)
cat_lst = ['protocol_type', 'service', 'flag']
for col in cat_lst:
kdd = cat_encode(kdd, col)
kdd_t = cat_encode(kdd_t, col)
log_lst = ['duration', 'src_bytes', 'dst_bytes']
for col in log_lst:
kdd[col] = log_trns(kdd, col)
kdd_t[col] = log_trns(kdd_t, col)
kdd = kdd[kdd_cols]
for col in kdd_cols:
if col not in kdd_t.columns:
kdd_t[col] = 0
kdd_t = kdd_t[kdd_cols]
kdd.head()
difficulty = kdd.pop('difficulty')
target = kdd.pop('class')
y_diff = kdd_t.pop('difficulty')
y_test = kdd_t.pop('class')
target = pd.get_dummies(target)
y_test = pd.get_dummies(y_test)
target
y_test
target = target.values
train = kdd.values
test = kdd_t.values
y_test = y_test.values
min_max_scaler = MinMaxScaler()
train = min_max_scaler.fit_transform(train)
test = min_max_scaler.transform(test)
train.shape
for idx, col in enumerate(list(kdd.columns)):
print(idx, col)
from keras.callbacks import EarlyStopping
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout
def build_network():
    model = Sequential()
model.add(Dense(64, input_dim=122))
model.add(Dense(32))
model.add(Activation('relu'))
model.add(Dropout(.15))
model.add(Dense(32))
model.add(Activation('relu'))
model.add(Dropout(.15))
model.add(Dense(32))
model.add(Activation('relu'))
model.add(Dropout(.15))
model.add(Dense(5))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
NN = build_network()
early_stopping = EarlyStopping(monitor='val_loss', min_delta=0, patience=3, verbose=0, mode='auto')
NN.fit(x=train, y=target, epochs=100, validation_split=0.1, batch_size=128, callbacks=[early_stopping])
from sklearn.metrics import confusion_matrix
preds = NN.predict(test)
pred_lbls = np.argmax(preds, axis=1)
true_lbls = np.argmax(y_test, axis=1)
NN.evaluate(test, y_test)
confusion_matrix(true_lbls, pred_lbls)
from sklearn.metrics import f1_score
f1_score(true_lbls, pred_lbls, average='weighted')
```
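As a follow-up that is not in the original notebook, the weighted F1 score above hides per-class behavior, which matters for the rarer attack categories; `classification_report` breaks precision, recall, and F1 down per class using the same label arrays.
```
# Per-class precision/recall/F1 for the same predictions (follow-up, not in the original)
from sklearn.metrics import classification_report
print(classification_report(true_lbls, pred_lbls, digits=3))
```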
```
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv("survey_results_public.csv")
df.head()
df = df[["Country", "EdLevel", "YearsCodePro", "Employment", "ConvertedComp"]]
df = df.rename({"ConvertedComp": "Salary"}, axis=1)
df.head()
df = df[df["Salary"].notnull()]
df.head()
df.info()
df = df.dropna()
df.isnull().sum()
df = df[df["Employment"] == "Employed full-time"]
df = df.drop("Employment", axis=1)
df.info()
df['Country'].value_counts()
def shorten_categories(categories, cutoff):
categorical_map = {}
for i in range(len(categories)):
if categories.values[i] >= cutoff:
categorical_map[categories.index[i]] = categories.index[i]
else:
categorical_map[categories.index[i]] = 'Other'
return categorical_map
country_map = shorten_categories(df.Country.value_counts(), 400)
df['Country'] = df['Country'].map(country_map)
df.Country.value_counts()
fig, ax = plt.subplots(1,1, figsize=(12, 7))
df.boxplot('Salary', 'Country', ax=ax)
plt.suptitle('Salary (US$) v Country')
plt.title('')
plt.ylabel('Salary')
plt.xticks(rotation=90)
plt.show()
df = df[df["Salary"] <= 250000]
df = df[df["Salary"] >= 10000]
df = df[df['Country'] != 'Other']
fig, ax = plt.subplots(1,1, figsize=(12, 7))
df.boxplot('Salary', 'Country', ax=ax)
plt.suptitle('Salary (US$) v Country')
plt.title('')
plt.ylabel('Salary')
plt.xticks(rotation=90)
plt.show()
df["YearsCodePro"].unique()
def clean_experience(x):
if x == 'More than 50 years':
return 50
if x == 'Less than 1 year':
return 0.5
return float(x)
df['YearsCodePro'] = df['YearsCodePro'].apply(clean_experience)
df["EdLevel"].unique()
def clean_education(x):
if 'Bachelor’s degree' in x:
return 'Bachelor’s degree'
if 'Master’s degree' in x:
return 'Master’s degree'
if 'Professional degree' in x or 'Other doctoral' in x:
return 'Post grad'
return 'Less than a Bachelors'
df['EdLevel'] = df['EdLevel'].apply(clean_education)
df["EdLevel"].unique()
from sklearn.preprocessing import LabelEncoder
le_education = LabelEncoder()
df['EdLevel'] = le_education.fit_transform(df['EdLevel'])
df["EdLevel"].unique()
#le.classes_
le_country = LabelEncoder()
df['Country'] = le_country.fit_transform(df['Country'])
df["Country"].unique()
X = df.drop("Salary", axis=1)
y = df["Salary"]
from sklearn.linear_model import LinearRegression
linear_reg = LinearRegression()
linear_reg.fit(X, y.values)
y_pred = linear_reg.predict(X)
from sklearn.metrics import mean_squared_error, mean_absolute_error
import numpy as np
error = np.sqrt(mean_squared_error(y, y_pred))
error
from sklearn.tree import DecisionTreeRegressor
dec_tree_reg = DecisionTreeRegressor(random_state=0)
dec_tree_reg.fit(X, y.values)
y_pred = dec_tree_reg.predict(X)
error = np.sqrt(mean_squared_error(y, y_pred))
print("${:,.02f}".format(error))
from sklearn.ensemble import RandomForestRegressor
random_forest_reg = RandomForestRegressor(random_state=0)
random_forest_reg.fit(X, y.values)
y_pred = random_forest_reg.predict(X)
error = np.sqrt(mean_squared_error(y, y_pred))
print("${:,.02f}".format(error))
from sklearn.model_selection import GridSearchCV
max_depth = [None, 2,4,6,8,10,12]
parameters = {"max_depth": max_depth}
regressor = DecisionTreeRegressor(random_state=0)
gs = GridSearchCV(regressor, parameters, scoring='neg_mean_squared_error')
gs.fit(X, y.values)
regressor = gs.best_estimator_
regressor.fit(X, y.values)
y_pred = regressor.predict(X)
error = np.sqrt(mean_squared_error(y, y_pred))
print("${:,.02f}".format(error))
X
# country, edlevel, yearscode
X = np.array([["United States", 'Master’s degree', 15 ]])
X
X[:, 0] = le_country.transform(X[:,0])
X[:, 1] = le_education.transform(X[:,1])
X = X.astype(float)
X
y_pred = regressor.predict(X)
y_pred
import pickle
data = {"model": regressor, "le_country": le_country, "le_education": le_education}
with open('saved_steps.pkl', 'wb') as file:
pickle.dump(data, file)
with open('saved_steps.pkl', 'rb') as file:
data = pickle.load(file)
regressor_loaded = data["model"]
le_country = data["le_country"]
le_education = data["le_education"]
y_pred = regressor_loaded.predict(X)
y_pred
```
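One caveat worth noting: the RMSE values above are computed on the same data the models were fit on. Below is a minimal sketch, not part of the original notebook, of a held-out evaluation using the encoded `df` and the tuned `max_depth`; the 80/20 split and random seed are arbitrary choices.
```
# Sketch of a held-out evaluation (assumes df, gs, and the imports above)
from sklearn.model_selection import train_test_split

X_full = df.drop("Salary", axis=1)
y_full = df["Salary"]
X_train, X_val, y_train, y_val = train_test_split(
    X_full, y_full, test_size=0.2, random_state=0)

dt = DecisionTreeRegressor(max_depth=gs.best_params_["max_depth"], random_state=0)
dt.fit(X_train, y_train.values)
val_error = np.sqrt(mean_squared_error(y_val, dt.predict(X_val)))
print("Validation RMSE: ${:,.02f}".format(val_error))
```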
# Working notes: comparing reciprocal and standard records and reading data
```
#load all packages
import datetime
import pickle
import copy
import os
from pathlib import Path
import numpy as np
import pandas as pd
import pyvista as pv
import matplotlib.pyplot as plt
from sys import argv
from matplotlib.colors import Normalize
from pyaspect.model.gridmod3d import gridmod3d as gm
from pyaspect.model.bbox import bbox as bb
from pyaspect.model.gm3d_utils import *
from pyaspect.moment_tensor import MomentTensor
from pyaspect.specfemio.headers import *
from pyaspect.specfemio.write import *
from pyaspect.specfemio.read import *
from pyaspect.specfemio.utils import *
import pyaspect.events.gevents as gevents
import pyaspect.events.gstations as gstations
from pyaspect.events.munge.knmi import correct_station_depths as csd_f
import pyaspect.events.mtensors as mtensors
from obspy.imaging.beachball import beach
from obspy import UTCDateTime
import shapefile as sf
```
## setting up directories
```
data_in_dir = 'data/output/'
data_out_dir = data_in_dir
proj_dirs = f'{data_out_dir}tmp/TestProjects/Computed_Forward_and_Reciprocity_Test'
standard_proj_dir = f'{proj_dirs}/ForwardTestProject'
reciprocal_proj_dir = f'{proj_dirs}/ReciprocalTestProject'
print('Standard Project Contents:')
print('--------------------------')
!ls {standard_proj_dir}
print()
print('Reciprocal Project Contents:')
print('--------------------------')
!ls {reciprocal_proj_dir}
```
## read records
```
standard_records_fqp = os.path.join(standard_proj_dir,'pyheader.proj_records')
reciprocal_records_fqp = os.path.join(reciprocal_proj_dir,'pyheader.proj_records')
standard_records_h = read_records(standard_records_fqp)
reciprocal_records_h = read_records(reciprocal_records_fqp)
'''
''';
print(f'Standard Project Records:\n')
for record in standard_records_h:
print(f'{len(record.get_solutions_header_list())}\n')
print(f'{len(record.get_stations_header_list())}\n')
print(f'Reciprocal Project Records:\n')
for record in reciprocal_records_h:
#print(f'{record}\n')
print(f'{len(record.get_solutions_header_list())}\n')
print(f'{len(record.get_stations_header_list())}\n')
recip_rec = reciprocal_records_h[0]
idx = pd.IndexSlice
#print(recip_rec.get_solutions_header_list(key='depth',value=3.14))
r_df = recip_rec.solutions_df
#r_df = recip_rec.stations_df
#print(r_df.columns)
key = 'sid'
value = 1
#c_df = r_df.loc[r_df[key] == value]
#cidx = idx[slice(0),slice(0,1),slice(0,1),slice(0,7,2)]
#cidx = idx[slice(0),slice(0,1)]
tupix = (slice(0),slice(0,1))
#print('type(tupix):',type(tupix))
cidx = idx[tupix]
#cidx = idx[slice(0)]
#print(f'weird:\n{cidx}')
c_df = r_df.loc[cidx,:]
#print(f'r_df:\n{r_df}\n')
#print(f'c_df:\n{c_df}')
print()
print('-------------------- SLICED -------------------------------------------')
#s_rec = recip_rec[:,:,::2,::2]
s_rec = recip_rec.copy()
#print(s_rec)
print('get_val:',s_rec.solutions_df.index.get_level_values('sid').nunique)
print('level:',s_rec.solutions_df.index.levels[0])
print()
s_rec.solutions_df.reset_index()
print('count:',len(s_rec.stations_df))
print(dir(s_rec.stations_df.index))
print('series:',s_rec.stations_df.index.to_series().nunique())
src_df = recip_rec.solutions_df.copy()
rec_df = recip_rec.stations_df.copy()
print(f'Source DF:\n{src_df}')
print('\n\n')
print(f'Receiver DF:\n{rec_df}')
print('\n\n')
print('------------------------------- Renamed ---------------------------------------')
#src_df.rename(columns = {'lat_yc': 'src.lat_yc', 'lon_xc': 'src.lon_xc'}, inplace=True)
#src_df.columns = ['src.' + col if col != 'col1' and col != 'col2' else col for col in df.columns]
src_df.reset_index(inplace=True)
src_df.set_index(['proj_id','eid','sid'],inplace=True)
src_df.columns = 'src.' + src_df.columns
rec_df.reset_index(inplace=True)
rec_df.set_index(['proj_id','eid','sid','trid','gid'],inplace=True)
rec_df.columns = 'rec.' + rec_df.columns
#new_df = rec_df + src_df
idx = pd.IndexSlice
print(f'Source DF:\n{src_df}')
print('\n\n')
print(f'Receiver DF:\n{rec_df}')
print('\n\n')
print(f'slice rec DF:\n{rec_df.loc[idx[0,0,0,:,:],:]}')
class TRac():
def __init__(self):
myzeros = np.arange(10)
myones = 10 + myzeros.copy()
mytwos = 10 + myones.copy()
mythrees = 10 + mytwos.copy()
self.data = {'zero':myzeros,'one':myones,'two':mytwos,'three':mythrees}
def append(self, item):
self.data.append(item)
    def __getitem__(self, kslice):
        # obj[a] passes a single key; obj[a, b] passes one tuple of keys
        if not isinstance(kslice, tuple):
            kslice = (kslice,)
        rval = None
        if isinstance(kslice[0], slice):
            if len(kslice) > 2:
                raise Exception('too many slice indices')
            # the second slice (if given) indexes within each array; otherwise reuse the first
            j = 1 if len(kslice) == 2 else 0
            keys = list(self.data.keys())
            rval = []
            for i in range(*kslice[0].indices(len(keys))):
                rval.append(self.data[keys[i]][kslice[j]])
        elif isinstance(kslice[0], str):
            rval = self.data[kslice[0]]
        else:
            raise Exception('incorrect index type')
        return rval
trac = TRac()
#x = trac[0:6:2,2:4,0:3,1:3]
#x = trac['depth']
#x = trac[0:4:2,2:4]
#x = trac[1:3:2,0:9:3]
x = trac[1:3:2]
print(x)
print()
y = trac['three']
print(y)
print()
#print(x[0])
#print()
#print(dir(x[0]))
#print('start:',x[0].start)
#print('stop: ',x[0].stop)
#print('step: ',x[0].step)
def test_func(*args):
print(len(args))
test_func(slice(0))
#yt = (1,2,3,4,5,6)
yt = (1,)
print(yt[0:2])
```
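For reference, the indexing behavior probed above comes down to how Python delivers keys to `__getitem__`: `obj[a]` passes the single key, while `obj[a, b]` passes one tuple containing both keys. A short illustration (not from the original notebook):
```
# How Python delivers indices to __getitem__
class Probe:
    def __getitem__(self, key):
        return type(key).__name__

p = Probe()
print(p[1:3])        # 'slice'  -> a single slice object
print(p[1:3, 0:2])   # 'tuple'  -> ONE tuple containing both slices
print(p['three'])    # 'str'
```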
```
# default_exp glp.data
%reload_ext autoreload
%autoreload 2
from nbdev import *
```
# glp.data
A collection of classes, utility functions, etc. to handle the obnoxiousness of biological data.
```
import sys
sys.path.append('../')
#export
from fastai.text import *
import pandas as pd
import numpy as np
```
## Fasta Files
The most common data format for sequence info is a FASTA file.
```
>seq1
MRATCRA
>seq2
MRATTRA
```
This often needs to be paired with phenotype information held in the FASTA file itself (in the header), or matched, using the header as a key, to information in a separate CSV file. This pipeline is a modular set of tools for performing some of these common steps and creating easy-to-use files for downstream processing.
This is structured in a modular way for easy reusability.
So an `AbstractRecordExtractor` should:
- define a `__call__` method that takes a seq_record and can receive **kwargs from previous transforms.
```
#export
class AbstractRecordExtractor(object):
def __call__(self, seqR, **kwargs):
raise NotImplementedError
```
The simplest is likely the `SeqExtractor`, which retrieves the sequence from the record and optionally truncates it.
```
# export
class SeqExtractor(AbstractRecordExtractor):
def __init__(self, field = 'sequence', truncate = None, ungap = False):
self.field = field
self.truncate = truncate
self.ungap = ungap
def __call__(self, seqR, **kwargs):
seq = str(seqR.seq)
if self.ungap:
seq = seq.replace('-', '')
if self.truncate is not None:
seq = seq[:self.truncate]
return {self.field: seq}
from Bio.SeqRecord import SeqRecord
from Bio.Seq import Seq
seq_trial = SeqRecord(Seq('ACGTACGT'), id = 'SeqID')
seq_ext = SeqExtractor()
features = seq_ext(seq_trial)
assert 'sequence' in features
assert features['sequence'] == 'ACGTACGT'
seq_ext = SeqExtractor(truncate=4)
features = seq_ext(seq_trial)
assert 'sequence' in features
assert features['sequence'] == 'ACGT'
```
And something to read and process the id.
```
#export
class IDExtractor(AbstractRecordExtractor):
def __init__(self, field = 'id', keyfunc = None, returns_dict = False):
self.field = field
self.returns_dict = returns_dict
self.keyfunc = keyfunc if keyfunc else lambda x: x
def __call__(self, seqR, **kwargs):
_id = seqR.id
res = self.keyfunc(_id)
if self.returns_dict:
return res
else:
return {self.field: res}
seq_trial = SeqRecord(Seq('ACGTACGT'), id = 'SeqID')
seq_ext = IDExtractor()
features = seq_ext(seq_trial)
assert 'id' in features
assert features['id'] == 'SeqID'
seq_ext = IDExtractor(keyfunc = lambda _id: _id.lower())
features = seq_ext(seq_trial)
assert 'id' in features
assert features['id'] == 'seqid'
def split_func(_id):
return {'first': _id[:3], 'last': _id[-2:]}
seq_ext = IDExtractor(keyfunc = split_func, returns_dict=True)
features = seq_ext(seq_trial)
assert ('first' in features) and ('last' in features)
assert (features['first'] == 'Seq') and (features['last'] == 'ID')
```
These two processes cover >90% of use cases.
Let's see how to combine them into a `FastaPipeline` that pre-processes FASTA files into easier-to-use CSV files.
```
#export
from itertools import chain
from Bio import SeqIO  # needed by fasta2df when a file path is given
class FastaPipeline(object):
def __init__(self, extractors):
self.extractors = extractors
def process_seqrecord(self, seqR):
info = {}
for ext in self.extractors:
info.update(ext(seqR, **info))
return info
def process_seq_stream(self, stream):
seq_data = []
for seqR in stream:
seq_data.append(self.process_seqrecord(seqR))
seq_df = pd.DataFrame(seq_data)
return seq_df
def fasta2df(self, path = None, stream = None,
feature_data = None, merge_keys = None,
grouper = None):
if stream is None:
if type(path) == str:
stream = SeqIO.parse(open(path), 'fasta')
else:
stream = chain.from_iterable(SeqIO.parse(open(p), 'fasta') for p in path)
seq_df = self.process_seq_stream(stream)
if feature_data is not None:
assert merge_keys is not None, 'If feature_data is provided merge_keys must be provided'
feature_on, seq_on = merge_keys
res = pd.merge(feature_data, seq_df,
left_on = feature_on, right_on = seq_on)
else:
res = seq_df
if grouper is not None:
res = res.groupby(grouper, as_index = False).first()
return res
seq_stream = [SeqRecord(Seq('ACGTACGT'), id = 'SeqID1'),
SeqRecord(Seq('TGCATGCA'), id = 'SeqID2')]
df = pd.DataFrame([{'idl': 'seqid1', 'feature': 1},
{'idl': 'seqid2', 'feature': 2},
])
pipeline = FastaPipeline([SeqExtractor(),
IDExtractor(keyfunc=lambda _id: _id.lower())])
res = pipeline.fasta2df(stream = seq_stream,
feature_data=df,
merge_keys = ('idl', 'id'))
pd.testing.assert_series_equal(res['feature'], pd.Series([1, 2]), check_names=False)
pd.testing.assert_series_equal(res['sequence'], pd.Series(['ACGTACGT', 'TGCATGCA']), check_names=False)
res
```
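Two small usage notes, shown as a sketch rather than part of the tests above: the `grouper` argument keeps the first row per group (a no-op here because the ids are unique, but useful when records are duplicated), and the resulting DataFrame can be written straight to CSV for downstream processing. The output filename is an arbitrary example.
```
# Sketch: deduplicate on 'id' and persist the merged table for downstream use
dedup = pipeline.fasta2df(stream = seq_stream,
                          feature_data = df,
                          merge_keys = ('idl', 'id'),
                          grouper = 'id')
dedup.to_csv('processed_sequences.csv', index = False)
```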
```
import pandas as pd
```
## Anonymize editor data
Find the editors who publish most of their papers in the journals they edit, and during their editorship.
### Load data
```
editors = pd.read_csv("/scratch/fl1092/capstone/elsevier/editors.csv", sep='\t',
usecols=["NewAuthorId", "issn", "start_year", "end_year"],
dtype={"NewAuthorId":int, "issn":str, "start_year":int, "end_year":int})
assert(editors.issn.apply(lambda x: len(x) ==8).all())
assert(editors[(editors.start_year >= 2018) | (editors.start_year < 1950)].shape[0]==0)
print(f"editors: {editors.shape} unique: {editors.NewAuthorId.nunique()} unique journals: {editors.issn.nunique()}")
%%time
paper_journal = pd.read_csv("/scratch/fl1092/capstone/mag/PaperJournals.csv", sep='\t', memory_map=True,
usecols=['PaperId', 'JournalId'], dtype={'PaperId':int, 'JournalId':int})
print(paper_journal.shape) # (82468512, 2)
elsevier_journals = pd.read_csv("/scratch/fl1092/capstone/bigmem/Journals_matched.csv", sep="\t",
usecols=['JournalId','issn'],
dtype={'CitationCount':int,'DisplayName':str,'JournalId':int,
'PaperCount':int,'Rank':int,'issn':str})
%%time
papers = pd.read_csv("/scratch/fl1092/capstone/elsevier/EditorsPaperNoEditorials.csv", sep='\t',
dtype={'NewAuthorId':int, 'PaperId':int, 'Year':int})
assert(papers.duplicated(subset=['NewAuthorId','PaperId']).any()==False)
print(papers.shape) # (3295055, 3)
```
### Find the papers editors publish in total, and in the journals they edit
```
%%time
papers = papers.merge(editors, on='NewAuthorId')
assert(papers.duplicated(subset=['NewAuthorId','PaperId']).any()==True)
print(papers.shape) # (3855056, 6)
%%time
papers = papers.merge(paper_journal, on='PaperId')
print(papers.shape) # 2606040
%%time
papers = papers.merge(elsevier_journals, on='issn')
print(papers.shape) # 2611352
papers = papers.assign(edit=papers.JournalId_x == papers.JournalId_y)
papers = papers.assign(during = papers.apply(
lambda row: (row['Year'] >= row['start_year']) & (row['Year'] <= row['end_year']) ,axis=1))
papers = papers.drop(['JournalId_x','JournalId_y'], axis=1).drop_duplicates()
papers.shape # (2606404, 8)
papers = papers.sort_values(by=['edit','during'],ascending=False)
papers = papers.drop(['start_year','end_year'], axis=1).drop_duplicates()
papers.shape # (2606404, 6)
papers = papers.drop_duplicates(subset=['NewAuthorId','PaperId'], keep='first')
print(papers.shape) # (2228197, 10)
```
### Anonymize
```
papers[['PaperId']].drop_duplicates().reset_index(drop=True).reset_index().rename(
columns={'index':'AnoPaperId'}).to_csv('/scratch/fl1092/capstone/anonymize/PaperMap.csv',sep='\t',index=False)
editorMap = pd.read_csv('/scratch/fl1092/capstone/anonymize/EditorMap.csv',sep='\t',
dtype={'EditorId':int,'NewAuthorId':int})
issnMap = pd.read_csv('/scratch/fl1092/capstone/anonymize/IssnMap.csv',sep='\t',dtype={'IssnId':int,'issn':str})
paperMap = pd.read_csv('/scratch/fl1092/capstone/anonymize/PaperMap.csv',sep='\t',
dtype={'PaperId':int,'AnoPaperId':int})
def anonymize(df, anoPaper=True):
print(df.shape, end=' ')
df = df.merge(editorMap, on='NewAuthorId').drop('NewAuthorId',axis=1)
df = df.merge(issnMap, on='issn').drop('issn',axis=1)
if anoPaper:
df = df.merge(paperMap, on='PaperId').drop('PaperId',axis=1)
print(df.shape)
return df
ano_editors = anonymize(editors, False)
ano_papers = anonymize(papers)
```
### Find the editors to plot
```
total = ano_papers.groupby('EditorId').AnoPaperId.nunique().reset_index().rename(columns={'AnoPaperId':'Count'})
conflict = ano_papers.groupby(['EditorId','during','edit']).AnoPaperId.nunique().reset_index().rename(
columns={'AnoPaperId':'Conflict'})
conflict = conflict[(conflict.during==True) & (conflict.edit==True)].drop(['during','edit'], axis=1)
print(conflict.shape, total.shape)
conflict = conflict.merge(total, on='EditorId')
print(conflict.shape) # (10327, 3)
conflict = conflict.assign(percent=conflict.Conflict/conflict.Count)
conflict[conflict.Count >= 30].sort_values(by='percent', ascending=False).head(3)
## three editors to plot
to_plot = [12054, 13531, 15203]
ano_papers = ano_papers[ano_papers.EditorId.isin(to_plot)]
ano_editors = ano_editors[ano_editors.EditorId.isin(to_plot)]
ano_papers.to_csv('../data/figure_3/EditorPapers.csv',sep='\t',index=False)
ano_editors.to_csv('../data/figure_3/Editors.csv',sep='\t',index=False)
```
## Anonymize journal data
Find the percentage of papers in each journal that are authored by its editorial board.
### Load data
```
editors = pd.read_csv("/scratch/fl1092/capstone/elsevier/editors.csv", sep='\t',
usecols=["NewAuthorId", "issn", "start_year", "end_year"],
dtype={"NewAuthorId":int, "issn":str, "start_year":int, "end_year":int})
assert(editors.issn.apply(lambda x: len(x) ==8).all())
assert(editors[(editors.start_year >= 2018) | (editors.start_year < 1950)].shape[0]==0)
# start_year < 2018 since we care about the trend after becoming an editor
# we consider papers up until 2018, but only editors who started by 2017 (inclusive)
print(f"editors: {editors.shape} unique: {editors.NewAuthorId.nunique()}")
%%time
paper_journal = pd.read_csv("/scratch/fl1092/capstone/mag/PaperJournals.csv", sep='\t', memory_map=True,
usecols=['PaperId', 'JournalId'], dtype={'PaperId':int, 'JournalId':int})
print(paper_journal.shape)
%%time
editor_papers = pd.read_csv("/scratch/fl1092/capstone/elsevier/EditorsPaperNoEditorials.csv", sep='\t',
dtype={'NewAuthorId':int, 'PaperId':int, 'Year':int})
assert(editor_papers.duplicated(subset=['NewAuthorId','PaperId']).any()==False)
print(editor_papers.shape) # (3295055, 3)
%%time
paper_year = pd.read_csv("/scratch/fl1092/capstone/mag/PaperYear.csv", sep='\t', usecols=['PaperId', 'Year'],
dtype={'PaperId':int, 'Year':int}, memory_map=True)
print(paper_year.shape) # (219006118, 2)
%%time
elsevier_journals = pd.read_csv("/scratch/fl1092/capstone/bigmem/Journals_matched.csv", sep="\t",
usecols=['JournalId','issn'],
dtype={'CitationCount':int,'DisplayName':str,'JournalId':int,
'PaperCount':int,'Rank':int,'issn':str})
print(elsevier_journals.shape) # (1817, 2)
```
### All papers
```
%%time
elsevier_papers = paper_journal.merge(elsevier_journals, on='JournalId').drop('JournalId', axis=1).drop_duplicates()
print(elsevier_papers.shape)
elsevier_papers = elsevier_papers.merge(paper_year, on='PaperId')
print(elsevier_papers.shape) # (10931065, 2)
%%time
total_papers = elsevier_papers.groupby(['issn','Year']).PaperId.nunique().reset_index().rename(
columns={'PaperId':'Total'})
print(total_papers.shape) # (58382, 3)
total_papers.to_csv("/scratch/fl1092/capstone/temp/JournalOutlierTotalPapers.csv", sep='\t', index=False)
```
### Papers by editors
```
%%time
papers = editor_papers.merge(editors, on='NewAuthorId')
assert(papers.duplicated(subset=['NewAuthorId','PaperId']).any()==True)
print(papers.shape) # (3855056, 6) # (3858984, 5) # (3855971, 5)
papers = papers.merge(paper_journal, on='PaperId')
print(papers.shape)
papers = papers.merge(elsevier_journals, on='issn')
print(papers.shape)
papers = papers.assign(edit=papers.JournalId_x == papers.JournalId_y)
papers = papers.assign(during = papers.apply(
lambda row: (row['Year'] >= row['start_year']) & (row['Year'] <= row['end_year']) ,axis=1))
papers = papers.drop(['JournalId_x','JournalId_y'], axis=1).drop_duplicates()
print(papers.shape)
papers = papers.sort_values(by=['edit','during'],ascending=False)
papers = papers.drop(['start_year','end_year'], axis=1).drop_duplicates()
print(papers.shape)
papers = papers.drop_duplicates(subset=['NewAuthorId','PaperId'], keep='first')
print(papers.shape) # (2228197, 10)
papers = papers[(papers.edit==True) & (papers.during==True)]
print(papers.shape)
papers = papers.drop_duplicates(subset=['PaperId'])
print(papers.shape)
# (60387, 6)
# (58141, 6)
editor_papers = papers.groupby(['issn','Year']).PaperId.nunique().reset_index().rename(
columns={'PaperId':'Count'})
editor_papers.to_csv("/scratch/fl1092/capstone/temp/JournalOutlierEditorPapers.csv", sep='\t', index=False)
```
### Find the outliers
```
editor_papers = pd.read_csv("/scratch/fl1092/capstone/temp/JournalOutlierEditorPapers.csv", sep='\t',
dtype={"issn":str,"Year":int,"Count":int})
total_papers = pd.read_csv("/scratch/fl1092/capstone/temp/JournalOutlierTotalPapers.csv", sep='\t',
dtype={"issn":str,"Year":int,"Count":int})
editor_papers.shape, total_papers.shape
bad_journals = total_papers.groupby(['issn']).Total.sum().reset_index().merge(
editor_papers.groupby('issn').Count.sum(), on='issn', how='left').fillna(0)
bad_journals.shape
bad_journals = bad_journals.assign(percent = bad_journals.Count/bad_journals.Total)
bad_journals[bad_journals.Total >= 30].merge(
issnMap, on='issn').drop('issn',axis=1).sort_values(by='percent',ascending=False).head(3)
editor_papers = editor_papers.merge(issnMap, on='issn').drop('issn',axis=1)
total_papers = total_papers.merge(issnMap, on='issn').drop('issn',axis=1)
to_plot = [6, 326, 1366]
editor_papers = editor_papers[editor_papers.IssnId.isin(to_plot)]
total_papers = total_papers[total_papers.IssnId.isin(to_plot)]
editor_papers.to_csv('../data/figure_3/EditorPapersInJournal.csv',sep='\t',index=False)
total_papers.to_csv('../data/figure_3/TotalPapersInJournal.csv',sep='\t',index=False)
```
**Group number:** 25
**Group members:**
- David Bugoi
- Daniela Alejanda Córdova
- Erik Karlgren Domercq
Lab 11
# Assignment 1
> __Due date: April 11, 2021__
## Part 1: SPARQL queries over Wikidata
In this assignment we use the Wikidata [SPARQL](https://query.wikidata.org/) endpoint to answer the questions posed below. Each question must be answered with a single SPARQL query. For every entity retrieved, show __both its identifier and its label__ (the entity's name in natural language).
For each question you must show both the query and the answer obtained.
- The __query__ must go in a _Raw NBConvert_ cell so that Jupyter does not try to interpret it. Each triple in the query must carry a brief comment to its right explaining it (comments start with #).
- The __answer__ must go in a _Markdown_ cell. You can download the answers with the _Download >> HTML Table_ option and paste the HTML code into that cell. Running the cell renders it as a table.
- If you find it necessary, you may add extra _Markdown_ cells to explain decisions taken while building the query or any other detail you consider interesting.
__To solve these queries you will need to learn somewhat more SPARQL than what was covered in class__. The two resources we recommend are:
- [This SPARQL tutorial](https://www.wikidata.org/wiki/Wikidata:SPARQL_tutorial).
- [This collection of examples](https://www.wikidata.org/wiki/Wikidata:SPARQL_query_service/queries/examples)
### Example
Retrieve all direct instances of the class [Goat (Q2934)](https://www.wikidata.org/wiki/Q2934) that appear in the knowledge base.
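A minimal sketch of one possible query, run here from Python with `requests` against the public endpoint (the only property assumed beyond the IDs given above is `wdt:P31`, instance of; the user agent string is a placeholder):
```
import requests

query = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q2934 .                                      # direct instance of goat
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""
resp = requests.get("https://query.wikidata.org/sparql",
                    params={"query": query, "format": "json"},
                    headers={"User-Agent": "sparql-lab-sketch/0.1 (course exercise)"})
for row in resp.json()["results"]["bindings"]:
    print({k: v["value"] for k, v in row.items()})
```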
<html><head><meta charset="utf-8"></head><body><table><thead><tr><th>item</th><th>itemLabel</th></tr></thead><tbody><tr><td>http://www.wikidata.org/entity/Q151345</td><td>Billygoat Hennes</td></tr><tr><td>http://www.wikidata.org/entity/Q3569037</td><td>William Windsor</td></tr><tr><td>http://www.wikidata.org/entity/Q23003932</td><td>His Whiskers</td></tr><tr><td>http://www.wikidata.org/entity/Q24287064</td><td>Taffy</td></tr><tr><td>http://www.wikidata.org/entity/Q41239734</td><td>Lance Corporal Shenkin III</td></tr><tr><td>http://www.wikidata.org/entity/Q41240892</td><td>Lance Corporal Shenkin II</td></tr><tr><td>http://www.wikidata.org/entity/Q41241416</td><td>Lance Corporal Shenkin I</td></tr><tr><td>http://www.wikidata.org/entity/Q65326499</td><td>Konkan kanyal</td></tr></tbody></table></body></html>
### Query 1
[Steven Allan Spielberg (Q8877)](https://www.wikidata.org/wiki/Q8877) is one of the most renowned and popular directors in the worldwide film industry. We start by finding out his date and place of birth.
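A minimal sketch of one possible query (assuming `wdt:P19` = place of birth and `wdt:P569` = date of birth), run from Python with `requests`:
```
import requests

query = """
SELECT ?birthplace ?birthplaceLabel ?birthdate WHERE {
  wd:Q8877 wdt:P19  ?birthplace ;                               # place of birth
           wdt:P569 ?birthdate .                                # date of birth
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""
resp = requests.get("https://query.wikidata.org/sparql",
                    params={"query": query, "format": "json"},
                    headers={"User-Agent": "sparql-lab-sketch/0.1 (course exercise)"})
for row in resp.json()["results"]["bindings"]:
    print({k: v["value"] for k, v in row.items()})
```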
<html><head><meta charset="utf-8"></head><body><table><thead><tr><th>birthplace</th><th>birthdate</th><th>birthplaceLabel</th></tr></thead><tbody><tr><td>http://www.wikidata.org/entity/Q43196</td><td>1946-12-18T00:00:00Z</td><td>Cincinnati</td></tr></tbody></table></body></html>
### Query 2
Next we find out all the distinct professions (occupations) attributed to him in the knowledge base. We want the results ordered alphabetically by the name of the profession.
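A hedged sketch of one way to write it (assuming `wdt:P106` = occupation); the label service is asked for Spanish labels to match the table below:
```
import requests

query = """
SELECT DISTINCT ?ocup ?ocupLabel WHERE {
  wd:Q8877 wdt:P106 ?ocup .                                     # every occupation attributed to him
  SERVICE wikibase:label { bd:serviceParam wikibase:language "es". }
}
ORDER BY ?ocupLabel
"""
resp = requests.get("https://query.wikidata.org/sparql",
                    params={"query": query, "format": "json"},
                    headers={"User-Agent": "sparql-lab-sketch/0.1 (course exercise)"})
for row in resp.json()["results"]["bindings"]:
    print({k: v["value"] for k, v in row.items()})
```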
<html><head><meta charset="utf-8"></head><body><table><thead><tr><th>ocup</th><th>ocupLabel</th></tr></thead><tbody><tr><td>http://www.wikidata.org/entity/Q33999</td><td>actor</td></tr><tr><td>http://www.wikidata.org/entity/Q10800557</td><td>actor de cine</td></tr><tr><td>http://www.wikidata.org/entity/Q10732476</td><td>coleccionista de arte</td></tr><tr><td>http://www.wikidata.org/entity/Q2526255</td><td>director de cine</td></tr><tr><td>http://www.wikidata.org/entity/Q7042855</td><td>editor de cine</td></tr><tr><td>http://www.wikidata.org/entity/Q43845</td><td>empresario</td></tr><tr><td>http://www.wikidata.org/entity/Q18844224</td><td>escritor de ciencia ficción</td></tr><tr><td>http://www.wikidata.org/entity/Q28389</td><td>guionista</td></tr><tr><td>http://www.wikidata.org/entity/Q3282637</td><td>productor de cine</td></tr><tr><td>http://www.wikidata.org/entity/Q578109</td><td>productor de televisión</td></tr><tr><td>http://www.wikidata.org/entity/Q1053574</td><td>productor ejecutivo</td></tr><tr><td>http://www.wikidata.org/entity/Q3455803</td><td>realizador</td></tr></tbody></table></body></html>
### Query 3
Which of those professions correspond to some specific kind of [Artist (Q483501)](https://www.wikidata.org/wiki/Q483501)? Bear in mind that the hierarchy of artist types can be complex.
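One possible sketch (assuming `wdt:P106` = occupation and `wdt:P279` = subclass of, traversed transitively):
```
import requests

query = """
SELECT DISTINCT ?ocup ?ocupLabel WHERE {
  wd:Q8877 wdt:P106 ?ocup .                                     # his occupations
  ?ocup wdt:P279* wd:Q483501 .                                  # that are (transitively) a subclass of artist
  SERVICE wikibase:label { bd:serviceParam wikibase:language "es". }
}
"""
resp = requests.get("https://query.wikidata.org/sparql",
                    params={"query": query, "format": "json"},
                    headers={"User-Agent": "sparql-lab-sketch/0.1 (course exercise)"})
for row in resp.json()["results"]["bindings"]:
    print({k: v["value"] for k, v in row.items()})
```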
<html><head><meta charset="utf-8"></head><body><table><thead><tr><th>ocup</th><th>ocupLabel</th></tr></thead><tbody><tr><td>http://www.wikidata.org/entity/Q7042855</td><td>editor de cine</td></tr><tr><td>http://www.wikidata.org/entity/Q33999</td><td>actor</td></tr><tr><td>http://www.wikidata.org/entity/Q2526255</td><td>director de cine</td></tr><tr><td>http://www.wikidata.org/entity/Q10800557</td><td>actor de cine</td></tr></tbody></table></body></html>
### Query 4
Spielberg has received many nominations and awards over his career. We want a list of his nominations and, for each one, the work for which he was nominated and the ceremony at which the nomination took place. To solve this query you will need to access the qualifiers of statement nodes and to understand the prefixes Wikidata uses.
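A tentative sketch only: it assumes `P1411` = nominated for, the qualifier `pq:P1686` = for work, and, more speculatively, `pq:P805` as the qualifier pointing to the ceremony.
```
import requests

query = """
SELECT ?nominations ?nominationsLabel ?work ?workLabel ?cer ?cerLabel WHERE {
  wd:Q8877 p:P1411 ?stmt .                                      # a "nominated for" statement node
  ?stmt ps:P1411 ?nominations .                                 # the award of the nomination
  ?stmt pq:P1686 ?work .                                        # qualifier: the work he was nominated for
  ?stmt pq:P805  ?cer .                                         # qualifier: the ceremony (assumed property)
  SERVICE wikibase:label { bd:serviceParam wikibase:language "es". }
}
"""
resp = requests.get("https://query.wikidata.org/sparql",
                    params={"query": query, "format": "json"},
                    headers={"User-Agent": "sparql-lab-sketch/0.1 (course exercise)"})
for row in resp.json()["results"]["bindings"]:
    print({k: v["value"] for k, v in row.items()})
```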
<html><head><meta charset="utf-8"></head><body><table><thead><tr><th>nominations</th><th>nominationsLabel</th><th>work</th><th>workLabel</th><th>cer</th><th>cerLabel</th></tr></thead><tbody><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q152456</td><td>Múnich</td><td>http://www.wikidata.org/entity/Q319132</td><td>Premios Óscar de 2005</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q165817</td><td>Saving Private Ryan</td><td>http://www.wikidata.org/entity/Q2405223</td><td>11th European Film Awards</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q483941</td><td>La lista de Schindler</td><td>http://www.wikidata.org/entity/Q944352</td><td>Premios Óscar de 1993</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q11621</td><td>E.T., el extraterrestre</td><td>http://www.wikidata.org/entity/Q41918</td><td>Anexo:55.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q275187</td><td>Cartas desde Iwo Jima</td><td>http://www.wikidata.org/entity/Q213699</td><td>Premios Óscar de 2006</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q244604</td><td>Minority Report</td><td>http://www.wikidata.org/entity/Q585310</td><td>15th European Film Awards</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q152456</td><td>Múnich</td><td>http://www.wikidata.org/entity/Q319132</td><td>Premios Óscar de 2005</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q165817</td><td>Saving Private Ryan</td><td>http://www.wikidata.org/entity/Q263239</td><td>71.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q32433</td><td>Lincoln</td><td>http://www.wikidata.org/entity/Q248688</td><td>85.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q11621</td><td>E.T., el extraterrestre</td><td>http://www.wikidata.org/entity/Q41918</td><td>Anexo:55.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q218589</td><td>War Horse</td><td>http://www.wikidata.org/entity/Q180675</td><td>Premios Óscar de 2011</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q18067135</td><td>Bridge of Spies</td><td>http://www.wikidata.org/entity/Q20022969</td><td>Anexo:88.ª Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q174284</td><td>Raiders of the Lost Ark</td><td>http://www.wikidata.org/entity/Q28969</td><td>Anexo:Premios Óscar de 1981</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q483941</td><td>La lista de Schindler</td><td>http://www.wikidata.org/entity/Q944352</td><td>Premios Óscar de 
1993</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q320588</td><td>Close Encounters of the Third Kind</td><td>http://www.wikidata.org/entity/Q282159</td><td>Anexo:Premios Óscar de 1977</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q32433</td><td>Lincoln</td><td>http://www.wikidata.org/entity/Q248688</td><td>85.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q30203425</td><td>The Post</td><td>http://www.wikidata.org/entity/Q24636843</td><td>90.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q223299</td><td>El color púrpura</td><td>http://www.wikidata.org/entity/Q938235</td><td>Premios Óscar de 1985</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q165817</td><td>Saving Private Ryan</td><td>http://www.wikidata.org/entity/Q263239</td><td>71.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q152456</td><td>Múnich</td><td>http://www.wikidata.org/entity/Q319132</td><td>Premios Óscar de 2005</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q165817</td><td>Saving Private Ryan</td><td>http://www.wikidata.org/entity/Q2405223</td><td>11th European Film Awards</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q483941</td><td>La lista de Schindler</td><td>http://www.wikidata.org/entity/Q944352</td><td>Premios Óscar de 1993</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q11621</td><td>E.T., el extraterrestre</td><td>http://www.wikidata.org/entity/Q41918</td><td>Anexo:55.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q275187</td><td>Cartas desde Iwo Jima</td><td>http://www.wikidata.org/entity/Q213699</td><td>Premios Óscar de 2006</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q244604</td><td>Minority Report</td><td>http://www.wikidata.org/entity/Q585310</td><td>15th European Film Awards</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q152456</td><td>Múnich</td><td>http://www.wikidata.org/entity/Q319132</td><td>Premios Óscar de 2005</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q165817</td><td>Saving Private Ryan</td><td>http://www.wikidata.org/entity/Q263239</td><td>71.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q32433</td><td>Lincoln</td><td>http://www.wikidata.org/entity/Q248688</td><td>85.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q11621</td><td>E.T., el 
extraterrestre</td><td>http://www.wikidata.org/entity/Q41918</td><td>Anexo:55.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q218589</td><td>War Horse</td><td>http://www.wikidata.org/entity/Q180675</td><td>Premios Óscar de 2011</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q18067135</td><td>Bridge of Spies</td><td>http://www.wikidata.org/entity/Q20022969</td><td>Anexo:88.ª Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q174284</td><td>Raiders of the Lost Ark</td><td>http://www.wikidata.org/entity/Q28969</td><td>Anexo:Premios Óscar de 1981</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q483941</td><td>La lista de Schindler</td><td>http://www.wikidata.org/entity/Q944352</td><td>Premios Óscar de 1993</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q320588</td><td>Close Encounters of the Third Kind</td><td>http://www.wikidata.org/entity/Q282159</td><td>Anexo:Premios Óscar de 1977</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q32433</td><td>Lincoln</td><td>http://www.wikidata.org/entity/Q248688</td><td>85.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q30203425</td><td>The Post</td><td>http://www.wikidata.org/entity/Q24636843</td><td>90.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q223299</td><td>El color púrpura</td><td>http://www.wikidata.org/entity/Q938235</td><td>Premios Óscar de 1985</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q165817</td><td>Saving Private Ryan</td><td>http://www.wikidata.org/entity/Q263239</td><td>71.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q152456</td><td>Múnich</td><td>http://www.wikidata.org/entity/Q319132</td><td>Premios Óscar de 2005</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q165817</td><td>Saving Private Ryan</td><td>http://www.wikidata.org/entity/Q2405223</td><td>11th European Film Awards</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q483941</td><td>La lista de Schindler</td><td>http://www.wikidata.org/entity/Q944352</td><td>Premios Óscar de 1993</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q11621</td><td>E.T., el extraterrestre</td><td>http://www.wikidata.org/entity/Q41918</td><td>Anexo:55.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q275187</td><td>Cartas desde Iwo Jima</td><td>http://www.wikidata.org/entity/Q213699</td><td>Premios 
Óscar de 2006</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q244604</td><td>Minority Report</td><td>http://www.wikidata.org/entity/Q585310</td><td>15th European Film Awards</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q152456</td><td>Múnich</td><td>http://www.wikidata.org/entity/Q319132</td><td>Premios Óscar de 2005</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q165817</td><td>Saving Private Ryan</td><td>http://www.wikidata.org/entity/Q263239</td><td>71.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q32433</td><td>Lincoln</td><td>http://www.wikidata.org/entity/Q248688</td><td>85.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q11621</td><td>E.T., el extraterrestre</td><td>http://www.wikidata.org/entity/Q41918</td><td>Anexo:55.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q218589</td><td>War Horse</td><td>http://www.wikidata.org/entity/Q180675</td><td>Premios Óscar de 2011</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q18067135</td><td>Bridge of Spies</td><td>http://www.wikidata.org/entity/Q20022969</td><td>Anexo:88.ª Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q174284</td><td>Raiders of the Lost Ark</td><td>http://www.wikidata.org/entity/Q28969</td><td>Anexo:Premios Óscar de 1981</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q483941</td><td>La lista de Schindler</td><td>http://www.wikidata.org/entity/Q944352</td><td>Premios Óscar de 1993</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q320588</td><td>Close Encounters of the Third Kind</td><td>http://www.wikidata.org/entity/Q282159</td><td>Anexo:Premios Óscar de 1977</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q32433</td><td>Lincoln</td><td>http://www.wikidata.org/entity/Q248688</td><td>85.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q30203425</td><td>The Post</td><td>http://www.wikidata.org/entity/Q24636843</td><td>90.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q223299</td><td>El color púrpura</td><td>http://www.wikidata.org/entity/Q938235</td><td>Premios Óscar de 1985</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European 
Film</td><td>http://www.wikidata.org/entity/Q165817</td><td>Saving Private Ryan</td><td>http://www.wikidata.org/entity/Q263239</td><td>71.º Premios Óscar</td></tr></tbody></table></body></html>
### Query 5
We now want the title of every film Spielberg has directed. They must be shown in alphabetical order, and take care not to show duplicate results. Bear in mind that there can be several types of films.
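A minimal sketch (assuming `wdt:P57` = director and `wd:Q11424` = film, with `wdt:P31/wdt:P279*` covering subtypes of film):
```
import requests

query = """
SELECT DISTINCT ?film ?filmLabel WHERE {
  ?film wdt:P57 wd:Q8877 .                                      # directed by Spielberg
  ?film wdt:P31/wdt:P279* wd:Q11424 .                           # any (sub)type of film
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
ORDER BY ?filmLabel
"""
resp = requests.get("https://query.wikidata.org/sparql",
                    params={"query": query, "format": "json"},
                    headers={"User-Agent": "sparql-lab-sketch/0.1 (course exercise)"})
for row in resp.json()["results"]["bindings"]:
    print({k: v["value"] for k, v in row.items()})
```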
<html><head><meta charset="utf-8"></head><body><table><thead><tr><th>film</th><th>filmLabel</th></tr></thead><tbody><tr><td>http://www.wikidata.org/entity/Q207482</td><td>1941</td></tr><tr><td>http://www.wikidata.org/entity/Q221113</td><td>A.I. Artificial Intelligence</td></tr><tr><td>http://www.wikidata.org/entity/Q449743</td><td>Always</td></tr><tr><td>http://www.wikidata.org/entity/Q457886</td><td>Amblin'</td></tr><tr><td>http://www.wikidata.org/entity/Q472361</td><td>Amistad</td></tr><tr><td>http://www.wikidata.org/entity/Q18067135</td><td>Bridge of Spies</td></tr><tr><td>http://www.wikidata.org/entity/Q208108</td><td>Catch Me If You Can</td></tr><tr><td>http://www.wikidata.org/entity/Q320588</td><td>Close Encounters of the Third Kind</td></tr><tr><td>http://www.wikidata.org/entity/Q583407</td><td>Duel</td></tr><tr><td>http://www.wikidata.org/entity/Q11621</td><td>E.T. the Extra-Terrestrial</td></tr><tr><td>http://www.wikidata.org/entity/Q271281</td><td>Empire of the Sun</td></tr><tr><td>http://www.wikidata.org/entity/Q3057871</td><td>Escape to Nowhere</td></tr><tr><td>http://www.wikidata.org/entity/Q591320</td><td>Firelight</td></tr><tr><td>http://www.wikidata.org/entity/Q646389</td><td>Hook</td></tr><tr><td>http://www.wikidata.org/entity/Q182373</td><td>Indiana Jones and the Kingdom of the Crystal Skull</td></tr><tr><td>http://www.wikidata.org/entity/Q185658</td><td>Indiana Jones and the Last Crusade</td></tr><tr><td>http://www.wikidata.org/entity/Q179215</td><td>Indiana Jones and the Temple of Doom</td></tr><tr><td>http://www.wikidata.org/entity/Q189505</td><td>Jaws</td></tr><tr><td>http://www.wikidata.org/entity/Q167726</td><td>Jurassic Park</td></tr><tr><td>http://www.wikidata.org/entity/Q18276472</td><td>Jurassic Park 3D</td></tr><tr><td>http://www.wikidata.org/entity/Q32433</td><td>Lincoln</td></tr><tr><td>http://www.wikidata.org/entity/Q244604</td><td>Minority Report</td></tr><tr><td>http://www.wikidata.org/entity/Q152456</td><td>Munich</td></tr><tr><td>http://www.wikidata.org/entity/Q4468634</td><td>Murder by the Book</td></tr><tr><td>http://www.wikidata.org/entity/Q174284</td><td>Raiders of the Lost Ark</td></tr><tr><td>http://www.wikidata.org/entity/Q22000542</td><td>Ready Player One</td></tr><tr><td>http://www.wikidata.org/entity/Q165817</td><td>Saving Private Ryan</td></tr><tr><td>http://www.wikidata.org/entity/Q483941</td><td>Schindler's List</td></tr><tr><td>http://www.wikidata.org/entity/Q7540939</td><td>Slipstream</td></tr><tr><td>http://www.wikidata.org/entity/Q167022</td><td>Something Evil</td></tr><tr><td>http://www.wikidata.org/entity/Q980041</td><td>The Adventures of Tintin</td></tr><tr><td>http://www.wikidata.org/entity/Q19689203</td><td>The BFG</td></tr><tr><td>http://www.wikidata.org/entity/Q223299</td><td>The Color Purple</td></tr><tr><td>http://www.wikidata.org/entity/Q200873</td><td>The Lost World: Jurassic Park</td></tr><tr><td>http://www.wikidata.org/entity/Q30203425</td><td>The Post</td></tr><tr><td>http://www.wikidata.org/entity/Q432526</td><td>The Sugarland Express</td></tr><tr><td>http://www.wikidata.org/entity/Q318766</td><td>The Terminal</td></tr><tr><td>http://www.wikidata.org/entity/Q11791805</td><td>The Unfinished Journey</td></tr><tr><td>http://www.wikidata.org/entity/Q1330737</td><td>Twilight Zone: The Movie</td></tr><tr><td>http://www.wikidata.org/entity/Q218589</td><td>War Horse</td></tr><tr><td>http://www.wikidata.org/entity/Q202028</td><td>War of the Worlds</td></tr><tr><td>http://www.wikidata.org/entity/Q2956251</td><td>Watch 
Dog</td></tr><tr><td>http://www.wikidata.org/entity/Q63643994</td><td>West Side Story</td></tr></tbody></table></body></html>
### Query 6
Spielberg is undoubtedly a prolific director. Exactly how many science fiction films has he directed?
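One hedged way to count them (assuming `wdt:P57` = director, `wdt:P136` = genre, and `wd:Q471839` = science fiction film):
```
import requests

query = """
SELECT (COUNT(DISTINCT ?film) AS ?numFilms) WHERE {
  ?film wdt:P57 wd:Q8877 .                                      # directed by Spielberg
  ?film wdt:P136 wd:Q471839 .                                   # genre: science fiction film (assumed QID)
}
"""
resp = requests.get("https://query.wikidata.org/sparql",
                    params={"query": query, "format": "json"},
                    headers={"User-Agent": "sparql-lab-sketch/0.1 (course exercise)"})
for row in resp.json()["results"]["bindings"]:
    print({k: v["value"] for k, v in row.items()})
```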
Another way of doing it:
<html><head><meta charset="utf-8"></head><body><table><thead><tr><th>numFilms</th></tr></thead><tbody><tr><td>10</td></tr></tbody></table></body></html>
### Query 7
It is important for films to have a suitable length, neither too short nor too long. Of all the films Spielberg has directed, which ones run between 90 and 150 minutes? For each film show its title and duration. The results must be shown in alphabetical order.
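A sketch under the same assumptions as above, plus `wdt:P2047` = duration in minutes:
```
import requests

query = """
SELECT DISTINCT ?film ?filmLabel ?dur WHERE {
  ?film wdt:P57 wd:Q8877 .                                      # directed by Spielberg
  ?film wdt:P31/wdt:P279* wd:Q11424 .                           # any (sub)type of film
  ?film wdt:P2047 ?dur .                                        # duration (in minutes)
  FILTER(?dur >= 90 && ?dur <= 150)                             # between 90 and 150 minutes
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
ORDER BY ?filmLabel
"""
resp = requests.get("https://query.wikidata.org/sparql",
                    params={"query": query, "format": "json"},
                    headers={"User-Agent": "sparql-lab-sketch/0.1 (course exercise)"})
for row in resp.json()["results"]["bindings"]:
    print({k: v["value"] for k, v in row.items()})
```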
<html><head><meta charset="utf-8"></head><body><table><thead><tr><th>film</th><th>filmLabel</th><th>dur</th></tr></thead><tbody><tr><td>http://www.wikidata.org/entity/Q207482</td><td>1941</td><td>113</td></tr><tr><td>http://www.wikidata.org/entity/Q221113</td><td>A.I. Artificial Intelligence</td><td>146</td></tr><tr><td>http://www.wikidata.org/entity/Q449743</td><td>Always</td><td>117</td></tr><tr><td>http://www.wikidata.org/entity/Q18067135</td><td>Bridge of Spies</td><td>142</td></tr><tr><td>http://www.wikidata.org/entity/Q208108</td><td>Catch Me If You Can</td><td>135</td></tr><tr><td>http://www.wikidata.org/entity/Q320588</td><td>Close Encounters of the Third Kind</td><td>134</td></tr><tr><td>http://www.wikidata.org/entity/Q583407</td><td>Duel</td><td>90</td></tr><tr><td>http://www.wikidata.org/entity/Q11621</td><td>E.T. the Extra-Terrestrial</td><td>115</td></tr><tr><td>http://www.wikidata.org/entity/Q591320</td><td>Firelight</td><td>135</td></tr><tr><td>http://www.wikidata.org/entity/Q646389</td><td>Hook</td><td>136</td></tr><tr><td>http://www.wikidata.org/entity/Q182373</td><td>Indiana Jones and the Kingdom of the Crystal Skull</td><td>123</td></tr><tr><td>http://www.wikidata.org/entity/Q185658</td><td>Indiana Jones and the Last Crusade</td><td>122</td></tr><tr><td>http://www.wikidata.org/entity/Q179215</td><td>Indiana Jones and the Temple of Doom</td><td>114</td></tr><tr><td>http://www.wikidata.org/entity/Q189505</td><td>Jaws</td><td>124</td></tr><tr><td>http://www.wikidata.org/entity/Q167726</td><td>Jurassic Park</td><td>123</td></tr><tr><td>http://www.wikidata.org/entity/Q18276472</td><td>Jurassic Park 3D</td><td>126</td></tr><tr><td>http://www.wikidata.org/entity/Q32433</td><td>Lincoln</td><td>150</td></tr><tr><td>http://www.wikidata.org/entity/Q244604</td><td>Minority Report</td><td>145</td></tr><tr><td>http://www.wikidata.org/entity/Q174284</td><td>Raiders of the Lost Ark</td><td>111</td></tr><tr><td>http://www.wikidata.org/entity/Q22000542</td><td>Ready Player One</td><td>140</td></tr><tr><td>http://www.wikidata.org/entity/Q980041</td><td>The Adventures of Tintin</td><td>107</td></tr><tr><td>http://www.wikidata.org/entity/Q19689203</td><td>The BFG</td><td>115</td></tr><tr><td>http://www.wikidata.org/entity/Q200873</td><td>The Lost World: Jurassic Park</td><td>129</td></tr><tr><td>http://www.wikidata.org/entity/Q30203425</td><td>The Post</td><td>115</td></tr><tr><td>http://www.wikidata.org/entity/Q432526</td><td>The Sugarland Express</td><td>106</td></tr><tr><td>http://www.wikidata.org/entity/Q318766</td><td>The Terminal</td><td>124</td></tr><tr><td>http://www.wikidata.org/entity/Q1330737</td><td>Twilight Zone: The Movie</td><td>101</td></tr><tr><td>http://www.wikidata.org/entity/Q218589</td><td>War Horse</td><td>146</td></tr><tr><td>http://www.wikidata.org/entity/Q202028</td><td>War of the Worlds</td><td>116</td></tr></tbody></table></body></html>
### Query 8
We now retrieve the most recent films Spielberg has directed. We are specifically interested in films released from the year 2000 onwards.
In a first attempt you will probably see each film repeated several times with different dates, because Wikidata stores the release date for every country. We take the film's actual release date to be the earliest of them all.
So that each film appears only once, with the right date, you will need to group the answers by film and title and apply an aggregation function over the publication dates. The results must be shown in alphabetical order.
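A hedged sketch of the grouping (assuming `wdt:P577` = publication date; `MIN` picks the earliest release per film and `HAVING` keeps films first released in 2000 or later):
```
import requests

query = """
SELECT ?film ?filmLabel (MIN(?date) AS ?fMin) WHERE {
  ?film wdt:P57 wd:Q8877 .                                      # directed by Spielberg
  ?film wdt:P31/wdt:P279* wd:Q11424 .                           # any (sub)type of film
  ?film wdt:P577 ?date .                                        # every publication (release) date
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
GROUP BY ?film ?filmLabel
HAVING(YEAR(MIN(?date)) >= 2000)                                # earliest release from 2000 onwards
ORDER BY ?filmLabel
"""
resp = requests.get("https://query.wikidata.org/sparql",
                    params={"query": query, "format": "json"},
                    headers={"User-Agent": "sparql-lab-sketch/0.1 (course exercise)"})
for row in resp.json()["results"]["bindings"]:
    print({k: v["value"] for k, v in row.items()})
```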
<html><head><meta charset="utf-8"></head><body><table><thead><tr><th>film</th><th>filmLabel</th><th>fMin</th></tr></thead><tbody><tr><td>http://www.wikidata.org/entity/Q207482</td><td>1941</td><td>1979-12-13T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q221113</td><td>A.I. Artificial Intelligence</td><td>2001-06-29T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q449743</td><td>Always</td><td>1989-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q457886</td><td>Amblin'</td><td>1968-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q472361</td><td>Amistad</td><td>1997-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q18067135</td><td>Bridge of Spies</td><td>2015-10-16T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q208108</td><td>Catch Me If You Can</td><td>2002-12-16T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q320588</td><td>Close Encounters of the Third Kind</td><td>1977-11-16T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q583407</td><td>Duel</td><td>1971-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q11621</td><td>E.T. the Extra-Terrestrial</td><td>1982-05-26T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q271281</td><td>Empire of the Sun</td><td>1987-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q3057871</td><td>Escape to Nowhere</td><td>1961-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q591320</td><td>Firelight</td><td>1964-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q646389</td><td>Hook</td><td>1991-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q182373</td><td>Indiana Jones and the Kingdom of the Crystal Skull</td><td>2008-05-21T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q185658</td><td>Indiana Jones and the Last Crusade</td><td>1989-05-24T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q179215</td><td>Indiana Jones and the Temple of Doom</td><td>1984-05-23T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q189505</td><td>Jaws</td><td>1975-06-20T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q167726</td><td>Jurassic Park</td><td>1993-06-11T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q18276472</td><td>Jurassic Park 3D</td><td>2013-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q32433</td><td>Lincoln</td><td>2012-10-08T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q244604</td><td>Minority Report</td><td>2002-06-17T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q152456</td><td>Munich</td><td>2005-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q4468634</td><td>Murder by the Book</td><td>1971-09-15T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q174284</td><td>Raiders of the Lost Ark</td><td>1981-06-12T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q22000542</td><td>Ready Player One</td><td>2018-03-11T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q165817</td><td>Saving Private Ryan</td><td>1998-07-24T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q483941</td><td>Schindler's List</td><td>1993-12-15T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q7540939</td><td>Slipstream</td><td>1967-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q167022</td><td>Something Evil</td><td>1972-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q980041</td><td>The Adventures of 
Tintin</td><td>2011-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q19689203</td><td>The BFG</td><td>2016-07-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q223299</td><td>The Color Purple</td><td>1985-12-16T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q200873</td><td>The Lost World: Jurassic Park</td><td>1997-05-23T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q30203425</td><td>The Post</td><td>2017-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q432526</td><td>The Sugarland Express</td><td>1974-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q318766</td><td>The Terminal</td><td>2004-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q11791805</td><td>The Unfinished Journey</td><td>1999-12-31T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q1330737</td><td>Twilight Zone: The Movie</td><td>1983-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q218589</td><td>War Horse</td><td>2011-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q202028</td><td>War of the Worlds</td><td>2005-06-29T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q2956251</td><td>Watch Dog</td><td>1973-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q63643994</td><td>West Side Story</td><td>2021-01-01T00:00:00Z</td></tr></tbody></table></body></html>
### Query 9
Which actors have worked in films directed by Spielberg? For each of them show their name and, when available, their dates of birth and death. The results must appear in alphabetical order.
Since many actors work in these films, we are only interested in the first 50 results.
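One possible sketch (assuming `wdt:P161` = cast member, `wdt:P569` = date of birth, `wdt:P570` = date of death):
```
import requests

query = """
SELECT DISTINCT ?actor ?actorLabel ?fNacimiento ?fMuerte WHERE {
  ?film wdt:P57 wd:Q8877 .                                      # a film directed by Spielberg
  ?film wdt:P161 ?actor .                                       # a cast member of that film
  OPTIONAL { ?actor wdt:P569 ?fNacimiento . }                   # date of birth, if present
  OPTIONAL { ?actor wdt:P570 ?fMuerte . }                       # date of death, if present
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
ORDER BY ?actorLabel
LIMIT 50
"""
resp = requests.get("https://query.wikidata.org/sparql",
                    params={"query": query, "format": "json"},
                    headers={"User-Agent": "sparql-lab-sketch/0.1 (course exercise)"})
for row in resp.json()["results"]["bindings"]:
    print({k: v["value"] for k, v in row.items()})
```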
<html><head><meta charset="utf-8"></head><body><table><thead><tr><th>actor</th><th>actorLabel</th><th>fNacimiento</th><th>fMuerte</th></tr></thead><tbody><tr><td>http://www.wikidata.org/entity/Q3603296</td><td>Abbe Lane</td><td>1932-12-14T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q2821029</td><td>Abdelhafid Metalsi</td><td>1969-01-01T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q4678990</td><td>Adam Driver</td><td>1983-11-19T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q1060758</td><td>Adam Godley</td><td>1964-07-22T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q281964</td><td>Adam Goldberg</td><td>1970-10-25T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q365292</td><td>Adolph Caesar</td><td>1933-12-05T00:00:00Z</td><td>1986-03-06T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q240869</td><td>Adrian Grenier</td><td>1976-07-10T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q2826948</td><td>Agnieszka Wagner</td><td>1970-12-17T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q175392</td><td>Akosua Busia</td><td>1966-12-30T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q310394</td><td>Alan Alda</td><td>1936-01-28T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q350194</td><td>Alan Dale</td><td>1947-05-06T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q356303</td><td>Albert Brooks</td><td>1947-07-22T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q2641365</td><td>Alex Hyde-White</td><td>1959-01-30T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q67026</td><td>Alexander Beyer</td><td>1973-06-24T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q87676</td><td>Alexander Held</td><td>1958-10-19T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q1521335</td><td>Alexander Strobele</td><td>1953-05-06T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q970287</td><td>Alexei Sayle</td><td>1952-08-07T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q296028</td><td>Alfred Molina</td><td>1953-05-24T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q235328</td><td>Alison Brie</td><td>1982-12-29T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q261193</td><td>Alison Doody</td><td>1966-11-11T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q2438420</td><td>Alon Abutbul</td><td>1965-05-28T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q2844743</td><td>Amelia Jacob</td><td>1998-12-03T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q6848483</td><td>Ami Weinberg</td><td>1953-10-26T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q2907291</td><td>Amos Lavi</td><td>1953-01-01T00:00:00Z</td><td>2010-11-09T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q333443</td><td>Amrish Puri</td><td>1932-06-22T00:00:00Z</td><td>2005-01-12T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q152773</td><td>Amy Acker</td><td>1976-12-05T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q481832</td><td>Amy Adams</td><td>1974-08-20T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q231203</td><td>Amy Ryan</td><td>1968-05-03T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q465628</td><td>Andrew 
Divoff</td><td>1955-07-02T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q507322</td><td>Andrew Scott</td><td>1976-10-21T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q515592</td><td>Andrzej Seweryn</td><td>1946-04-25T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q527862</td><td>Andy Tennant</td><td>1955-06-15T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q16213296</td><td>Andy Thompson</td><td>1970-01-01T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q546106</td><td>Anian Zollner</td><td>1969-02-21T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q452047</td><td>Ann Robinson</td><td>1929-05-25T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q4767298</td><td>Anna Maria Horsford</td><td>1948-03-06T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q2850579</td><td>Anna Mucha</td><td>1980-04-26T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q199884</td><td>Anna Paquin</td><td>1982-07-24T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q461742</td><td>Anne Lockhart</td><td>1953-09-06T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q17198635</td><td>Annika Boras</td><td>1981-02-28T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q13426679</td><td>Ansel Elgort</td><td>1994-03-14T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q573343</td><td>Anthony Higgins</td><td>1947-05-09T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q65932</td><td>Anthony Hopkins</td><td>1937-12-31T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q24706484</td><td>Anthony Ingram</td><td>1966-11-27T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q509013</td><td>April Grace</td><td>1962-05-12T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q264559</td><td>Ariana Richards</td><td>1979-09-11T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q675776</td><td>Arliss Howard</td><td>1954-10-18T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q22212873</td><td>Armand Schultz</td><td>1959-05-17T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q2865227</td><td>Arthur Malet</td><td>1927-09-24T00:00:00Z</td><td>2013-05-18T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q717982</td><td>Arye Gross</td><td>1960-03-17T00:00:00Z</td><td></td></tr></tbody></table></body></html>
### Query 10
How many assertions about Spielberg are there in Wikidata? Bear in mind that Spielberg can appear both as the subject and as the object of each triple.
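A minimal sketch that counts the triples with Spielberg as subject and those with Spielberg as object:
```
import requests

query = """
SELECT (COUNT(*) AS ?total) WHERE {
  { wd:Q8877 ?p ?o . }                                          # Spielberg as subject
  UNION
  { ?s ?p wd:Q8877 . }                                          # Spielberg as object
}
"""
resp = requests.get("https://query.wikidata.org/sparql",
                    params={"query": query, "format": "json"},
                    headers={"User-Agent": "sparql-lab-sketch/0.1 (course exercise)"})
for row in resp.json()["results"]["bindings"]:
    print({k: v["value"] for k, v in row.items()})
```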
In theory Steven Spielberg could appear on both sides of a single assertion, but a query looking for such cases returned none, so not considering that case does not change the result.
<html><head><meta charset="utf-8"></head><body><table><thead><tr><th>total</th></tr></thead><tbody><tr><td>1492</td></tr></tbody></table></body></html>
__Date of the queries:__ April 11
|
github_jupyter
|
**Número de grupo:** 25
**Nombre de los integrantes del grupo:**
- David Bugoi
- Daniela Alejanda Córdova
- Erik Karlgren Domercq
Lab 11
# Práctica 1
> __Fecha de entrega: 11 de abril de 2021__
## Parte 1: consultas SPARQL sobre Wikidata.
En esta práctica vamos a usar el punto de acceso [SPARQL](https://query.wikidata.org/) de Wikidata para contestar las preguntas que se formulan a continuación. Cada pregunta debe ser respondida realizando una única consulta SPARQL. Para cada una de las entidades recuperadas se mostrará __tanto su identificador como su etiqueta__ (nombre de la entidad en lenguaje natural).
Para cada una de las preguntas debes mostrar tanto la consulta como la respuesta obtenida.
- La __consulta__ debe estar en una celda de tipo _Raw NBConvert_ para que jupyter no trate de interpretarla. Cada tripleta de la consulta debe tener un breve comentario a la derecha que la explique (los comentarios empiezan con #).
- La __respuesta__ debe estar en una celda de tipo _Markdown_. Puedes descargar las respuestas usando la opción _Descargar >> HTML Table_ y copiar el código HTML en esta celda. Al ejecutar la celda se mostrará en forma de tabla.
- Si lo consideras necesario, puedes añadir celdas adicionales en formato _Markdown_ para explicar decisiones que hayas tomado al crear la consulta o cualquier otro dato que consideres interesante.
__Para resolver estas consultas necesitarás aprender algo más de SPARQL de lo que hemos contado en clase__. Los dos recursos que te recomendamos consultar son:
- [Este tutorial de SPARQL](https://www.wikidata.org/wiki/Wikidata:SPARQL_tutorial).
- [Esta recopilación de ejemplos](https://www.wikidata.org/wiki/Wikidata:SPARQL_query_service/queries/examples)
### Example
Retrieve all direct instances of the class [Goat (Q2934)](https://www.wikidata.org/wiki/Q2934) that appear in the knowledge base.
<html><head><meta charset="utf-8"></head><body><table><thead><tr><th>item</th><th>itemLabel</th></tr></thead><tbody><tr><td>http://www.wikidata.org/entity/Q151345</td><td>Billygoat Hennes</td></tr><tr><td>http://www.wikidata.org/entity/Q3569037</td><td>William Windsor</td></tr><tr><td>http://www.wikidata.org/entity/Q23003932</td><td>His Whiskers</td></tr><tr><td>http://www.wikidata.org/entity/Q24287064</td><td>Taffy</td></tr><tr><td>http://www.wikidata.org/entity/Q41239734</td><td>Lance Corporal Shenkin III</td></tr><tr><td>http://www.wikidata.org/entity/Q41240892</td><td>Lance Corporal Shenkin II</td></tr><tr><td>http://www.wikidata.org/entity/Q41241416</td><td>Lance Corporal Shenkin I</td></tr><tr><td>http://www.wikidata.org/entity/Q65326499</td><td>Konkan kanyal</td></tr></tbody></table></body></html>
### Query 1
[Steven Allan Spielberg (Q8877)](https://www.wikidata.org/wiki/Q8877) is one of the most renowned and popular directors in the world film industry. We will begin by finding out his date and place of birth.
<html><head><meta charset="utf-8"></head><body><table><thead><tr><th>birthplace</th><th>birthdate</th><th>birthplaceLabel</th></tr></thead><tbody><tr><td>http://www.wikidata.org/entity/Q43196</td><td>1946-12-18T00:00:00Z</td><td>Cincinnati</td></tr></tbody></table></body></html>
### Query 2
Next we will find out all the distinct professions (occupations) attributed to him in the knowledge base. We want the results sorted alphabetically by the name of the profession.
<html><head><meta charset="utf-8"></head><body><table><thead><tr><th>ocup</th><th>ocupLabel</th></tr></thead><tbody><tr><td>http://www.wikidata.org/entity/Q33999</td><td>actor</td></tr><tr><td>http://www.wikidata.org/entity/Q10800557</td><td>actor de cine</td></tr><tr><td>http://www.wikidata.org/entity/Q10732476</td><td>coleccionista de arte</td></tr><tr><td>http://www.wikidata.org/entity/Q2526255</td><td>director de cine</td></tr><tr><td>http://www.wikidata.org/entity/Q7042855</td><td>editor de cine</td></tr><tr><td>http://www.wikidata.org/entity/Q43845</td><td>empresario</td></tr><tr><td>http://www.wikidata.org/entity/Q18844224</td><td>escritor de ciencia ficción</td></tr><tr><td>http://www.wikidata.org/entity/Q28389</td><td>guionista</td></tr><tr><td>http://www.wikidata.org/entity/Q3282637</td><td>productor de cine</td></tr><tr><td>http://www.wikidata.org/entity/Q578109</td><td>productor de televisión</td></tr><tr><td>http://www.wikidata.org/entity/Q1053574</td><td>productor ejecutivo</td></tr><tr><td>http://www.wikidata.org/entity/Q3455803</td><td>realizador</td></tr></tbody></table></body></html>
### Query 3
Which of those professions correspond to some specific kind of [Artist (Q483501)](https://www.wikidata.org/wiki/Q483501)? Bear in mind that the hierarchy of artist types can be complex.
<html><head><meta charset="utf-8"></head><body><table><thead><tr><th>ocup</th><th>ocupLabel</th></tr></thead><tbody><tr><td>http://www.wikidata.org/entity/Q7042855</td><td>editor de cine</td></tr><tr><td>http://www.wikidata.org/entity/Q33999</td><td>actor</td></tr><tr><td>http://www.wikidata.org/entity/Q2526255</td><td>director de cine</td></tr><tr><td>http://www.wikidata.org/entity/Q10800557</td><td>actor de cine</td></tr></tbody></table></body></html>
### Query 4
Spielberg has received many nominations and awards over the course of his career. We want to obtain a list of nominations and, for each of them, the work for which he was nominated and the ceremony at which the nomination took place. To solve this query you will need to access the qualifiers of statement nodes and you will need to understand the prefixes that Wikidata uses.
<html><head><meta charset="utf-8"></head><body><table><thead><tr><th>nominations</th><th>nominationsLabel</th><th>work</th><th>workLabel</th><th>cer</th><th>cerLabel</th></tr></thead><tbody><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q152456</td><td>Múnich</td><td>http://www.wikidata.org/entity/Q319132</td><td>Premios Óscar de 2005</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q165817</td><td>Saving Private Ryan</td><td>http://www.wikidata.org/entity/Q2405223</td><td>11th European Film Awards</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q483941</td><td>La lista de Schindler</td><td>http://www.wikidata.org/entity/Q944352</td><td>Premios Óscar de 1993</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q11621</td><td>E.T., el extraterrestre</td><td>http://www.wikidata.org/entity/Q41918</td><td>Anexo:55.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q275187</td><td>Cartas desde Iwo Jima</td><td>http://www.wikidata.org/entity/Q213699</td><td>Premios Óscar de 2006</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q244604</td><td>Minority Report</td><td>http://www.wikidata.org/entity/Q585310</td><td>15th European Film Awards</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q152456</td><td>Múnich</td><td>http://www.wikidata.org/entity/Q319132</td><td>Premios Óscar de 2005</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q165817</td><td>Saving Private Ryan</td><td>http://www.wikidata.org/entity/Q263239</td><td>71.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q32433</td><td>Lincoln</td><td>http://www.wikidata.org/entity/Q248688</td><td>85.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q11621</td><td>E.T., el extraterrestre</td><td>http://www.wikidata.org/entity/Q41918</td><td>Anexo:55.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q218589</td><td>War Horse</td><td>http://www.wikidata.org/entity/Q180675</td><td>Premios Óscar de 2011</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q18067135</td><td>Bridge of Spies</td><td>http://www.wikidata.org/entity/Q20022969</td><td>Anexo:88.ª Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q174284</td><td>Raiders of the Lost Ark</td><td>http://www.wikidata.org/entity/Q28969</td><td>Anexo:Premios Óscar de 1981</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q483941</td><td>La lista de Schindler</td><td>http://www.wikidata.org/entity/Q944352</td><td>Premios Óscar de 
1993</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q320588</td><td>Close Encounters of the Third Kind</td><td>http://www.wikidata.org/entity/Q282159</td><td>Anexo:Premios Óscar de 1977</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q32433</td><td>Lincoln</td><td>http://www.wikidata.org/entity/Q248688</td><td>85.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q30203425</td><td>The Post</td><td>http://www.wikidata.org/entity/Q24636843</td><td>90.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q223299</td><td>El color púrpura</td><td>http://www.wikidata.org/entity/Q938235</td><td>Premios Óscar de 1985</td></tr><tr><td>http://www.wikidata.org/entity/Q102427</td><td>Óscar a la mejor película</td><td>http://www.wikidata.org/entity/Q165817</td><td>Saving Private Ryan</td><td>http://www.wikidata.org/entity/Q263239</td><td>71.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q152456</td><td>Múnich</td><td>http://www.wikidata.org/entity/Q319132</td><td>Premios Óscar de 2005</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q165817</td><td>Saving Private Ryan</td><td>http://www.wikidata.org/entity/Q2405223</td><td>11th European Film Awards</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q483941</td><td>La lista de Schindler</td><td>http://www.wikidata.org/entity/Q944352</td><td>Premios Óscar de 1993</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q11621</td><td>E.T., el extraterrestre</td><td>http://www.wikidata.org/entity/Q41918</td><td>Anexo:55.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q275187</td><td>Cartas desde Iwo Jima</td><td>http://www.wikidata.org/entity/Q213699</td><td>Premios Óscar de 2006</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q244604</td><td>Minority Report</td><td>http://www.wikidata.org/entity/Q585310</td><td>15th European Film Awards</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q152456</td><td>Múnich</td><td>http://www.wikidata.org/entity/Q319132</td><td>Premios Óscar de 2005</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q165817</td><td>Saving Private Ryan</td><td>http://www.wikidata.org/entity/Q263239</td><td>71.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q32433</td><td>Lincoln</td><td>http://www.wikidata.org/entity/Q248688</td><td>85.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q11621</td><td>E.T., el 
extraterrestre</td><td>http://www.wikidata.org/entity/Q41918</td><td>Anexo:55.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q218589</td><td>War Horse</td><td>http://www.wikidata.org/entity/Q180675</td><td>Premios Óscar de 2011</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q18067135</td><td>Bridge of Spies</td><td>http://www.wikidata.org/entity/Q20022969</td><td>Anexo:88.ª Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q174284</td><td>Raiders of the Lost Ark</td><td>http://www.wikidata.org/entity/Q28969</td><td>Anexo:Premios Óscar de 1981</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q483941</td><td>La lista de Schindler</td><td>http://www.wikidata.org/entity/Q944352</td><td>Premios Óscar de 1993</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q320588</td><td>Close Encounters of the Third Kind</td><td>http://www.wikidata.org/entity/Q282159</td><td>Anexo:Premios Óscar de 1977</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q32433</td><td>Lincoln</td><td>http://www.wikidata.org/entity/Q248688</td><td>85.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q30203425</td><td>The Post</td><td>http://www.wikidata.org/entity/Q24636843</td><td>90.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q223299</td><td>El color púrpura</td><td>http://www.wikidata.org/entity/Q938235</td><td>Premios Óscar de 1985</td></tr><tr><td>http://www.wikidata.org/entity/Q103360</td><td>Óscar al mejor director</td><td>http://www.wikidata.org/entity/Q165817</td><td>Saving Private Ryan</td><td>http://www.wikidata.org/entity/Q263239</td><td>71.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q152456</td><td>Múnich</td><td>http://www.wikidata.org/entity/Q319132</td><td>Premios Óscar de 2005</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q165817</td><td>Saving Private Ryan</td><td>http://www.wikidata.org/entity/Q2405223</td><td>11th European Film Awards</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q483941</td><td>La lista de Schindler</td><td>http://www.wikidata.org/entity/Q944352</td><td>Premios Óscar de 1993</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q11621</td><td>E.T., el extraterrestre</td><td>http://www.wikidata.org/entity/Q41918</td><td>Anexo:55.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q275187</td><td>Cartas desde Iwo Jima</td><td>http://www.wikidata.org/entity/Q213699</td><td>Premios 
Óscar de 2006</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q244604</td><td>Minority Report</td><td>http://www.wikidata.org/entity/Q585310</td><td>15th European Film Awards</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q152456</td><td>Múnich</td><td>http://www.wikidata.org/entity/Q319132</td><td>Premios Óscar de 2005</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q165817</td><td>Saving Private Ryan</td><td>http://www.wikidata.org/entity/Q263239</td><td>71.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q32433</td><td>Lincoln</td><td>http://www.wikidata.org/entity/Q248688</td><td>85.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q11621</td><td>E.T., el extraterrestre</td><td>http://www.wikidata.org/entity/Q41918</td><td>Anexo:55.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q218589</td><td>War Horse</td><td>http://www.wikidata.org/entity/Q180675</td><td>Premios Óscar de 2011</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q18067135</td><td>Bridge of Spies</td><td>http://www.wikidata.org/entity/Q20022969</td><td>Anexo:88.ª Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q174284</td><td>Raiders of the Lost Ark</td><td>http://www.wikidata.org/entity/Q28969</td><td>Anexo:Premios Óscar de 1981</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q483941</td><td>La lista de Schindler</td><td>http://www.wikidata.org/entity/Q944352</td><td>Premios Óscar de 1993</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q320588</td><td>Close Encounters of the Third Kind</td><td>http://www.wikidata.org/entity/Q282159</td><td>Anexo:Premios Óscar de 1977</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q32433</td><td>Lincoln</td><td>http://www.wikidata.org/entity/Q248688</td><td>85.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q30203425</td><td>The Post</td><td>http://www.wikidata.org/entity/Q24636843</td><td>90.º Premios Óscar</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European Film</td><td>http://www.wikidata.org/entity/Q223299</td><td>El color púrpura</td><td>http://www.wikidata.org/entity/Q938235</td><td>Premios Óscar de 1985</td></tr><tr><td>http://www.wikidata.org/entity/Q1377772</td><td>European Film Award for Best Non-European 
Film</td><td>http://www.wikidata.org/entity/Q165817</td><td>Saving Private Ryan</td><td>http://www.wikidata.org/entity/Q263239</td><td>71.º Premios Óscar</td></tr></tbody></table></body></html>
### Query 5
Now we want to know the title of every film that Spielberg has directed. They must be shown in alphabetical order and you must be careful not to show repeated results. Bear in mind that there may be different types of films.
<html><head><meta charset="utf-8"></head><body><table><thead><tr><th>film</th><th>filmLabel</th></tr></thead><tbody><tr><td>http://www.wikidata.org/entity/Q207482</td><td>1941</td></tr><tr><td>http://www.wikidata.org/entity/Q221113</td><td>A.I. Artificial Intelligence</td></tr><tr><td>http://www.wikidata.org/entity/Q449743</td><td>Always</td></tr><tr><td>http://www.wikidata.org/entity/Q457886</td><td>Amblin'</td></tr><tr><td>http://www.wikidata.org/entity/Q472361</td><td>Amistad</td></tr><tr><td>http://www.wikidata.org/entity/Q18067135</td><td>Bridge of Spies</td></tr><tr><td>http://www.wikidata.org/entity/Q208108</td><td>Catch Me If You Can</td></tr><tr><td>http://www.wikidata.org/entity/Q320588</td><td>Close Encounters of the Third Kind</td></tr><tr><td>http://www.wikidata.org/entity/Q583407</td><td>Duel</td></tr><tr><td>http://www.wikidata.org/entity/Q11621</td><td>E.T. the Extra-Terrestrial</td></tr><tr><td>http://www.wikidata.org/entity/Q271281</td><td>Empire of the Sun</td></tr><tr><td>http://www.wikidata.org/entity/Q3057871</td><td>Escape to Nowhere</td></tr><tr><td>http://www.wikidata.org/entity/Q591320</td><td>Firelight</td></tr><tr><td>http://www.wikidata.org/entity/Q646389</td><td>Hook</td></tr><tr><td>http://www.wikidata.org/entity/Q182373</td><td>Indiana Jones and the Kingdom of the Crystal Skull</td></tr><tr><td>http://www.wikidata.org/entity/Q185658</td><td>Indiana Jones and the Last Crusade</td></tr><tr><td>http://www.wikidata.org/entity/Q179215</td><td>Indiana Jones and the Temple of Doom</td></tr><tr><td>http://www.wikidata.org/entity/Q189505</td><td>Jaws</td></tr><tr><td>http://www.wikidata.org/entity/Q167726</td><td>Jurassic Park</td></tr><tr><td>http://www.wikidata.org/entity/Q18276472</td><td>Jurassic Park 3D</td></tr><tr><td>http://www.wikidata.org/entity/Q32433</td><td>Lincoln</td></tr><tr><td>http://www.wikidata.org/entity/Q244604</td><td>Minority Report</td></tr><tr><td>http://www.wikidata.org/entity/Q152456</td><td>Munich</td></tr><tr><td>http://www.wikidata.org/entity/Q4468634</td><td>Murder by the Book</td></tr><tr><td>http://www.wikidata.org/entity/Q174284</td><td>Raiders of the Lost Ark</td></tr><tr><td>http://www.wikidata.org/entity/Q22000542</td><td>Ready Player One</td></tr><tr><td>http://www.wikidata.org/entity/Q165817</td><td>Saving Private Ryan</td></tr><tr><td>http://www.wikidata.org/entity/Q483941</td><td>Schindler's List</td></tr><tr><td>http://www.wikidata.org/entity/Q7540939</td><td>Slipstream</td></tr><tr><td>http://www.wikidata.org/entity/Q167022</td><td>Something Evil</td></tr><tr><td>http://www.wikidata.org/entity/Q980041</td><td>The Adventures of Tintin</td></tr><tr><td>http://www.wikidata.org/entity/Q19689203</td><td>The BFG</td></tr><tr><td>http://www.wikidata.org/entity/Q223299</td><td>The Color Purple</td></tr><tr><td>http://www.wikidata.org/entity/Q200873</td><td>The Lost World: Jurassic Park</td></tr><tr><td>http://www.wikidata.org/entity/Q30203425</td><td>The Post</td></tr><tr><td>http://www.wikidata.org/entity/Q432526</td><td>The Sugarland Express</td></tr><tr><td>http://www.wikidata.org/entity/Q318766</td><td>The Terminal</td></tr><tr><td>http://www.wikidata.org/entity/Q11791805</td><td>The Unfinished Journey</td></tr><tr><td>http://www.wikidata.org/entity/Q1330737</td><td>Twilight Zone: The Movie</td></tr><tr><td>http://www.wikidata.org/entity/Q218589</td><td>War Horse</td></tr><tr><td>http://www.wikidata.org/entity/Q202028</td><td>War of the Worlds</td></tr><tr><td>http://www.wikidata.org/entity/Q2956251</td><td>Watch 
Dog</td></tr><tr><td>http://www.wikidata.org/entity/Q63643994</td><td>West Side Story</td></tr></tbody></table></body></html>
### Query 6
Spielberg is undoubtedly a prolific director. Exactly how many science fiction films has he directed?
Another way of doing it:
<html><head><meta charset="utf-8"></head><body><table><thead><tr><th>numFilms</th></tr></thead><tbody><tr><td>10</td></tr></tbody></table></body></html>
### Query 7
It is important for films to have a suitable length, neither too short nor too long. Of all the films Spielberg has directed, which ones run between 90 and 150 minutes? For each film show the title and the duration. The results must be shown in alphabetical order.
<html><head><meta charset="utf-8"></head><body><table><thead><tr><th>film</th><th>filmLabel</th><th>dur</th></tr></thead><tbody><tr><td>http://www.wikidata.org/entity/Q207482</td><td>1941</td><td>113</td></tr><tr><td>http://www.wikidata.org/entity/Q221113</td><td>A.I. Artificial Intelligence</td><td>146</td></tr><tr><td>http://www.wikidata.org/entity/Q449743</td><td>Always</td><td>117</td></tr><tr><td>http://www.wikidata.org/entity/Q18067135</td><td>Bridge of Spies</td><td>142</td></tr><tr><td>http://www.wikidata.org/entity/Q208108</td><td>Catch Me If You Can</td><td>135</td></tr><tr><td>http://www.wikidata.org/entity/Q320588</td><td>Close Encounters of the Third Kind</td><td>134</td></tr><tr><td>http://www.wikidata.org/entity/Q583407</td><td>Duel</td><td>90</td></tr><tr><td>http://www.wikidata.org/entity/Q11621</td><td>E.T. the Extra-Terrestrial</td><td>115</td></tr><tr><td>http://www.wikidata.org/entity/Q591320</td><td>Firelight</td><td>135</td></tr><tr><td>http://www.wikidata.org/entity/Q646389</td><td>Hook</td><td>136</td></tr><tr><td>http://www.wikidata.org/entity/Q182373</td><td>Indiana Jones and the Kingdom of the Crystal Skull</td><td>123</td></tr><tr><td>http://www.wikidata.org/entity/Q185658</td><td>Indiana Jones and the Last Crusade</td><td>122</td></tr><tr><td>http://www.wikidata.org/entity/Q179215</td><td>Indiana Jones and the Temple of Doom</td><td>114</td></tr><tr><td>http://www.wikidata.org/entity/Q189505</td><td>Jaws</td><td>124</td></tr><tr><td>http://www.wikidata.org/entity/Q167726</td><td>Jurassic Park</td><td>123</td></tr><tr><td>http://www.wikidata.org/entity/Q18276472</td><td>Jurassic Park 3D</td><td>126</td></tr><tr><td>http://www.wikidata.org/entity/Q32433</td><td>Lincoln</td><td>150</td></tr><tr><td>http://www.wikidata.org/entity/Q244604</td><td>Minority Report</td><td>145</td></tr><tr><td>http://www.wikidata.org/entity/Q174284</td><td>Raiders of the Lost Ark</td><td>111</td></tr><tr><td>http://www.wikidata.org/entity/Q22000542</td><td>Ready Player One</td><td>140</td></tr><tr><td>http://www.wikidata.org/entity/Q980041</td><td>The Adventures of Tintin</td><td>107</td></tr><tr><td>http://www.wikidata.org/entity/Q19689203</td><td>The BFG</td><td>115</td></tr><tr><td>http://www.wikidata.org/entity/Q200873</td><td>The Lost World: Jurassic Park</td><td>129</td></tr><tr><td>http://www.wikidata.org/entity/Q30203425</td><td>The Post</td><td>115</td></tr><tr><td>http://www.wikidata.org/entity/Q432526</td><td>The Sugarland Express</td><td>106</td></tr><tr><td>http://www.wikidata.org/entity/Q318766</td><td>The Terminal</td><td>124</td></tr><tr><td>http://www.wikidata.org/entity/Q1330737</td><td>Twilight Zone: The Movie</td><td>101</td></tr><tr><td>http://www.wikidata.org/entity/Q218589</td><td>War Horse</td><td>146</td></tr><tr><td>http://www.wikidata.org/entity/Q202028</td><td>War of the Worlds</td><td>116</td></tr></tbody></table></body></html>
### Query 8
We will now retrieve the most recent films that Spielberg has directed. We are specifically interested in films released from the year 2000 onwards.
In your first attempt you will most likely see each film repeated several times with different dates, because Wikidata stores the release dates for each country. We will take the film's actual release date to be the earliest of all of them.
For each film to appear only once with the correct date you will need to group the answers by film and title, and apply an aggregation function over the publication dates. The results must be shown in alphabetical order.
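As a side note, the grouping described above maps onto SPARQL's `GROUP BY` with a `MIN` aggregate. The fragment below is only an illustrative sketch of that pattern, stored in a Python string for reference; the variable names and property choices are assumptions, not necessarily the exact query used here.
```
# Illustrative sketch of the GROUP BY / MIN pattern only (not necessarily the exact query used)
query_sketch = """
SELECT ?film ?filmLabel (MIN(?date) AS ?fMin) WHERE {
  ?film wdt:P57 wd:Q8877 .    # director: Steven Spielberg
  ?film wdt:P577 ?date .      # publication (release) date
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
GROUP BY ?film ?filmLabel
ORDER BY ?filmLabel
"""
```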
<html><head><meta charset="utf-8"></head><body><table><thead><tr><th>film</th><th>filmLabel</th><th>fMin</th></tr></thead><tbody><tr><td>http://www.wikidata.org/entity/Q207482</td><td>1941</td><td>1979-12-13T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q221113</td><td>A.I. Artificial Intelligence</td><td>2001-06-29T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q449743</td><td>Always</td><td>1989-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q457886</td><td>Amblin'</td><td>1968-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q472361</td><td>Amistad</td><td>1997-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q18067135</td><td>Bridge of Spies</td><td>2015-10-16T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q208108</td><td>Catch Me If You Can</td><td>2002-12-16T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q320588</td><td>Close Encounters of the Third Kind</td><td>1977-11-16T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q583407</td><td>Duel</td><td>1971-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q11621</td><td>E.T. the Extra-Terrestrial</td><td>1982-05-26T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q271281</td><td>Empire of the Sun</td><td>1987-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q3057871</td><td>Escape to Nowhere</td><td>1961-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q591320</td><td>Firelight</td><td>1964-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q646389</td><td>Hook</td><td>1991-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q182373</td><td>Indiana Jones and the Kingdom of the Crystal Skull</td><td>2008-05-21T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q185658</td><td>Indiana Jones and the Last Crusade</td><td>1989-05-24T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q179215</td><td>Indiana Jones and the Temple of Doom</td><td>1984-05-23T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q189505</td><td>Jaws</td><td>1975-06-20T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q167726</td><td>Jurassic Park</td><td>1993-06-11T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q18276472</td><td>Jurassic Park 3D</td><td>2013-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q32433</td><td>Lincoln</td><td>2012-10-08T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q244604</td><td>Minority Report</td><td>2002-06-17T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q152456</td><td>Munich</td><td>2005-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q4468634</td><td>Murder by the Book</td><td>1971-09-15T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q174284</td><td>Raiders of the Lost Ark</td><td>1981-06-12T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q22000542</td><td>Ready Player One</td><td>2018-03-11T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q165817</td><td>Saving Private Ryan</td><td>1998-07-24T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q483941</td><td>Schindler's List</td><td>1993-12-15T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q7540939</td><td>Slipstream</td><td>1967-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q167022</td><td>Something Evil</td><td>1972-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q980041</td><td>The Adventures of 
Tintin</td><td>2011-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q19689203</td><td>The BFG</td><td>2016-07-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q223299</td><td>The Color Purple</td><td>1985-12-16T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q200873</td><td>The Lost World: Jurassic Park</td><td>1997-05-23T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q30203425</td><td>The Post</td><td>2017-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q432526</td><td>The Sugarland Express</td><td>1974-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q318766</td><td>The Terminal</td><td>2004-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q11791805</td><td>The Unfinished Journey</td><td>1999-12-31T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q1330737</td><td>Twilight Zone: The Movie</td><td>1983-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q218589</td><td>War Horse</td><td>2011-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q202028</td><td>War of the Worlds</td><td>2005-06-29T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q2956251</td><td>Watch Dog</td><td>1973-01-01T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q63643994</td><td>West Side Story</td><td>2021-01-01T00:00:00Z</td></tr></tbody></table></body></html>
### Query 9
Which actors have worked in films directed by Spielberg? For each of them show their name and, where available, their dates of birth and death. The results must appear in alphabetical order.
Since many actors work on each film, we are only interested in the first 50 results.
<html><head><meta charset="utf-8"></head><body><table><thead><tr><th>actor</th><th>actorLabel</th><th>fNacimiento</th><th>fMuerte</th></tr></thead><tbody><tr><td>http://www.wikidata.org/entity/Q3603296</td><td>Abbe Lane</td><td>1932-12-14T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q2821029</td><td>Abdelhafid Metalsi</td><td>1969-01-01T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q4678990</td><td>Adam Driver</td><td>1983-11-19T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q1060758</td><td>Adam Godley</td><td>1964-07-22T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q281964</td><td>Adam Goldberg</td><td>1970-10-25T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q365292</td><td>Adolph Caesar</td><td>1933-12-05T00:00:00Z</td><td>1986-03-06T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q240869</td><td>Adrian Grenier</td><td>1976-07-10T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q2826948</td><td>Agnieszka Wagner</td><td>1970-12-17T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q175392</td><td>Akosua Busia</td><td>1966-12-30T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q310394</td><td>Alan Alda</td><td>1936-01-28T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q350194</td><td>Alan Dale</td><td>1947-05-06T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q356303</td><td>Albert Brooks</td><td>1947-07-22T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q2641365</td><td>Alex Hyde-White</td><td>1959-01-30T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q67026</td><td>Alexander Beyer</td><td>1973-06-24T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q87676</td><td>Alexander Held</td><td>1958-10-19T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q1521335</td><td>Alexander Strobele</td><td>1953-05-06T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q970287</td><td>Alexei Sayle</td><td>1952-08-07T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q296028</td><td>Alfred Molina</td><td>1953-05-24T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q235328</td><td>Alison Brie</td><td>1982-12-29T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q261193</td><td>Alison Doody</td><td>1966-11-11T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q2438420</td><td>Alon Abutbul</td><td>1965-05-28T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q2844743</td><td>Amelia Jacob</td><td>1998-12-03T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q6848483</td><td>Ami Weinberg</td><td>1953-10-26T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q2907291</td><td>Amos Lavi</td><td>1953-01-01T00:00:00Z</td><td>2010-11-09T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q333443</td><td>Amrish Puri</td><td>1932-06-22T00:00:00Z</td><td>2005-01-12T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q152773</td><td>Amy Acker</td><td>1976-12-05T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q481832</td><td>Amy Adams</td><td>1974-08-20T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q231203</td><td>Amy Ryan</td><td>1968-05-03T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q465628</td><td>Andrew 
Divoff</td><td>1955-07-02T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q507322</td><td>Andrew Scott</td><td>1976-10-21T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q515592</td><td>Andrzej Seweryn</td><td>1946-04-25T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q527862</td><td>Andy Tennant</td><td>1955-06-15T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q16213296</td><td>Andy Thompson</td><td>1970-01-01T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q546106</td><td>Anian Zollner</td><td>1969-02-21T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q452047</td><td>Ann Robinson</td><td>1929-05-25T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q4767298</td><td>Anna Maria Horsford</td><td>1948-03-06T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q2850579</td><td>Anna Mucha</td><td>1980-04-26T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q199884</td><td>Anna Paquin</td><td>1982-07-24T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q461742</td><td>Anne Lockhart</td><td>1953-09-06T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q17198635</td><td>Annika Boras</td><td>1981-02-28T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q13426679</td><td>Ansel Elgort</td><td>1994-03-14T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q573343</td><td>Anthony Higgins</td><td>1947-05-09T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q65932</td><td>Anthony Hopkins</td><td>1937-12-31T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q24706484</td><td>Anthony Ingram</td><td>1966-11-27T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q509013</td><td>April Grace</td><td>1962-05-12T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q264559</td><td>Ariana Richards</td><td>1979-09-11T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q675776</td><td>Arliss Howard</td><td>1954-10-18T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q22212873</td><td>Armand Schultz</td><td>1959-05-17T00:00:00Z</td><td></td></tr><tr><td>http://www.wikidata.org/entity/Q2865227</td><td>Arthur Malet</td><td>1927-09-24T00:00:00Z</td><td>2013-05-18T00:00:00Z</td></tr><tr><td>http://www.wikidata.org/entity/Q717982</td><td>Arye Gross</td><td>1960-03-17T00:00:00Z</td><td></td></tr></tbody></table></body></html>
### Query 10
How many assertions about Spielberg are there in Wikidata? Bear in mind that Spielberg can appear both as the subject and as the object of each triple.
In theory Steven Spielberg could appear on both sides of the same assertion, but when we ran a query looking for cases where that happens we found none. Therefore, not having considered that case does not change the results.
<html><head><meta charset="utf-8"></head><body><table><thead><tr><th>total</th></tr></thead><tbody><tr><td>1492</td></tr></tbody></table></body></html>
__Date of the queries:__ 11 April
| 0.466603 | 0.621914 |
# SLU03-Visualization with Pandas & Matplotlib: Exercise notebook
In this notebook you will practice the following:
- Scatterplots
- Line charts
- Bar charts
- Histograms
- Box plots
- Scaling plots
To learn about data visualization, we are going to use a modified version of [The Movies Dataset](https://www.kaggle.com/rounakbanik/the-movies-dataset), which has information about movies.
The dataset is located at `data/movies.csv` and has the following fields:
```
budget: Movie budget (in $).
genre: Genre the movie belongs to.
original_language: Language the movie was originally filmed in.
production_company: Name of the production company.
production_country: Country where the movie was produced.
release_year: Year the movie was released.
revenue: Movie ticket sales (in $).
runtime: Movie duration (in minutes).
title: Movie title.
vote_average: Average rating in MovieLens.
vote_count: Number of votes in MovieLens.
```
```
import pandas as pd
import numpy as np
movies = pd.read_csv("data/movies.csv")
movies.shape
movies.head()
```
Import matplotlib, pyplot and the matplotlib inline magic.
```
import matplotlib.pyplot as plt
%matplotlib inline
assert plt
```
Change the default chart size to 8 inches wide by 8 inches high.
```
plt.rcParams['figure.figsize'] = [8,8]
assert plt.rcParams["figure.figsize"][0] == 8
assert plt.rcParams["figure.figsize"][1] == 8
```
<hr>
### Note about the grading
Grading plots is difficult, so we are using `plotchecker` to grade the plots with nbgrader.
For `plotchecker` to work with nbgrader, we need to add to each cell the line
`axis = plt.gca();`
<div class="alert alert-danger">
<b>NOTE:</b> If you get an ImportError saying plotchecker is not defined, make sure you activate the right environment for this unit!
</div>
**After the code required to do the plot**.
For example, if we want to plot a scatter plot showing the relationship between revenue and runtime we would do as follows:
```
# code required to plot
movies[["budget", "revenue"]].plot.scatter(x="budget",y="revenue" )
# last line in the cell required to "capture" the cell and being able to grade it with nbgrader
axis = plt.gca();
```
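To make the grading mechanism a bit more concrete, here is a minimal illustration of how the captured axis can be inspected, assuming the example cell above has just been run; the specific checks are only examples, not the actual grading code.
```
from plotchecker import PlotChecker

pc = PlotChecker(axis)             # wrap the captured axes object
pc.assert_xlabel_equal("budget")   # pandas labels the x-axis with the column name
print(pc.xlabel, pc.ylabel)        # inspect what the checker sees
```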
<hr>
### How does the vote count correlate with the revenue?
```
plt.scatter(x='vote_count', y='revenue', data=movies)
plt.xlabel('vote_count')
plt.ylabel('revenue')
axis = plt.gca();
from plotchecker import PlotChecker
def get_data(p, ax=0):
all_x_data = []
lines = p.axis.get_lines()
collections = axis.collections
if len(lines) > 0:
all_x_data.append(np.concatenate([x.get_xydata()[:, ax] for x in lines]))
if len(collections) > 0:
all_x_data.append(np.concatenate([x.get_offsets()[:, ax] for x in collections]))
return np.concatenate(all_x_data, axis=0)
pc = PlotChecker(axis)
data = get_data(pc)
assert len(data) == 707
assert set([pc.xlabel] + [pc.ylabel]) == set(["revenue", "vote_count"])
np.testing.assert_equal(get_data(pc,1), movies[movies.revenue.notnull()].revenue)
print("Success!")
```
### How does the average revenue of movies evolve over time? Set the plot title to "Average Movie Revenue by year"
To calculate the average revenue by year we need to perform an [aggregation](https://jakevdp.github.io/PythonDataScienceHandbook/03.08-aggregation-and-grouping.html); pandas supports this through a technique called [Split-Apply-Combine](https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html). This will be explained in the Data Wrangling Specialization.
For now we will do the grouping for you:
```
avg_revenue_by_year = movies.groupby("release_year")["revenue"].mean().reset_index()
avg_revenue_by_year.columns = ["release_year", "avg_revenue"]
avg_revenue_by_year.head()
```
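For reference, the same split-apply-combine step can also be written with pandas named aggregation. This is just an equivalent formulation (assuming a reasonably recent pandas version), not something you need for the exercise.
```
# Equivalent grouping using named aggregation (pandas >= 0.25)
avg_revenue_by_year_alt = (
    movies.groupby("release_year", as_index=False)
          .agg(avg_revenue=("revenue", "mean"))
)
avg_revenue_by_year_alt.head()
```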
<div class="alert alert-danger">
<b>NOTE:</b> Make sure you use the dataframe named avg_revenue_by_year for the next exercise
</div>
```
avg_revenue_by_year.plot(x='release_year', y='avg_revenue', kind='line', title='Average Movie Revenue by year')
plt.ylabel('revenue')
axis = plt.gca();
pc = PlotChecker(axis)
np.testing.assert_equal(get_data(pc), sorted(movies[movies.runtime.notnull()].release_year.unique()))
np.testing.assert_equal(get_data(pc, ax=1), movies.groupby("release_year")["revenue"].mean())
assert set([pc.xlabel] + [pc.ylabel]) == set(["release_year", "revenue"])
pc.assert_title_equal("Average Movie Revenue by year")
print("Success!")
```
### How does the median revenue vary by movie genre? Label the x-axis as "Median Revenue"
Again, we will do the grouping for you:
```
median_revenue_by_genre = movies.groupby("genre")["revenue"].median().reset_index()
median_revenue_by_genre.columns = ["genre", "median_revenue"]
median_revenue_by_genre
```
<div class="alert alert-danger">
<b>NOTE:</b> Make sure you use the dataframe named median_revenue_by_genre for the next exercise
</div>
```
median_revenue_by_genre.plot(x='genre', y='median_revenue', kind='barh', colormap='winter')
plt.xlabel('Median Revenue')
axis = plt.gca();
pc = PlotChecker(axis)
pc._patches = np.array(pc.axis.patches)
pc._patches = pc._patches[np.argsort([p.get_x() for p in pc._patches])]
pc.widths = np.array([p.get_width() for p in pc._patches])
pc.heights = np.array([p.get_height() for p in pc._patches])
assert len(pc._patches) == len(movies.groupby("genre").groups)
np.testing.assert_equal(pc.widths, movies.groupby("genre")["revenue"].median().values)
pc.assert_xlabel_equal("Median Revenue")
print("Success!")
```
### How is the variable vote_average distributed? (Set the x-axis limit to [0, 9] and the number of bins to 10. Change the bar color to `red`.)
```
movies.vote_average.describe()
movies.vote_average.plot.hist(bins=10, color='r')
plt.xlim(0,9)
axis = plt.gca();
pc = PlotChecker(axis)
pc._patches = np.array(pc.axis.patches)
pc._patches = pc._patches[np.argsort([p.get_x() for p in pc._patches])]
pc.widths = np.array([p.get_width() for p in pc._patches])
pc.heights = np.array([p.get_height() for p in pc._patches])
np.testing.assert_allclose(pc.heights, [ 5., 1., 1., 8., 14., 58., 202., 231., 172., 20.])
np.testing.assert_allclose(pc.widths, [0.86 for i in range(len(pc.widths))])
assert pc.xlim[1] == 9
assert pc._patches[0].get_facecolor() == (1., 0., 0., 1.)
print("Success!")
```
### Change the default plot style to `ggplot`. Make a plot that displays the vote count broken down by movie language and that allows us to check whether there are outliers.
```
plt.style.use('ggplot')
movies.boxplot(column='vote_count', by='original_language');
axis = plt.gca();
pc = PlotChecker(axis)
pc._lines = pc.axis.get_lines()
pc.colors = np.array([pc._color2rgb(x.get_color()) for x in pc._lines])
np.testing.assert_allclose(pc.colors[0],[0.88627451, 0.29019608, 0.2])
np.testing.assert_allclose(pc.yticks,np.array([-1,0,1,2,3,4,5,6])*1e3)
assert pc.xticklabels == ['en', 'fr', 'hi', 'it', 'ru']
print("Success!")
```
# Ungraded Exercise
Load the file misterious_data.csv and use data visualization to answer the following questions:
* What does the distribution of x look like in general?
* Are there any outliers in any of the fields?
* Which 2 charts best represent the underlying data? Change their style to `bmh` and add titles to each chart explaining them
```
mist_data = pd.read_csv('data/misterious_data.csv')
mist_data.describe()
mist_data.x.plot(kind='hist');
mist_data.boxplot();
mist_data.sort_values('x', inplace=True)
mist_data.boxplot(by='category');
```
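One possible way to finish the last bullet point is sketched below; it assumes the `x` and `category` columns used above, and the titles are only examples.
```
# Switch to the 'bmh' style and add explanatory titles (illustrative completion only)
plt.style.use('bmh')

mist_data.x.plot(kind='hist', title='Distribution of x')
plt.show()

mist_data.boxplot(column='x', by='category')
plt.title('x by category (boxplots make outliers easy to spot)')
plt.show()
```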
|
github_jupyter
|
| 0.798698 | 0.970155 |
<a href="https://colab.research.google.com/github/LambdaTheda/DS-Unit-2-Linear-Models/blob/master/Unit_2_TEST_1_5pm_retake_jan28_DS_Sprint_Challenge_Linear_Models_copy.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
_Lambda School Data Science, Unit 2_
# Linear Models Sprint Challenge
To demonstrate mastery on your Sprint Challenge, do all the required, numbered instructions in this notebook.
To earn a score of "3", also do all the stretch goals.
You are permitted and encouraged to do as much data exploration as you want.
### Part 1, Classification
- 1.1. Do train/test split. Arrange data into X features matrix and y target vector
- 1.2. Use scikit-learn to fit a logistic regression model
- 1.3. Report classification metric: accuracy
### Part 2, Regression
- 2.1. Begin with baselines for regression
- 2.2. Do train/validate/test split
- 2.3. Arrange data into X features matrix and y target vector
- 2.4. Do one-hot encoding
- 2.5. Use scikit-learn to fit a linear regression or ridge regression model
- 2.6. Report validation MAE and $R^2$
### Stretch Goals, Regression
- Make at least 2 visualizations to explore relationships between features and target. You may use any visualization library
- Try at least 3 feature combinations. You may select features manually, or automatically
- Report validation MAE and $R^2$ for each feature combination you try
- Report test MAE and $R^2$ for your final model
- Print or plot the coefficients for the features in your model
```
# If you're in Colab...
import sys
if 'google.colab' in sys.modules:
!pip install category_encoders==2.*
!pip install pandas-profiling==2.*
!pip install plotly==4.*
```
# Part 1, Classification: Predict Blood Donations 🚑
Our dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.
The goal is to predict whether the donor made a donation in March 2007, using information about each donor's history.
Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need.
```
import pandas as pd
donors = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data')
assert donors.shape == (748,5)
donors = donors.rename(columns={
'Recency (months)': 'months_since_last_donation',
'Frequency (times)': 'number_of_donations',
'Monetary (c.c. blood)': 'total_volume_donated',
'Time (months)': 'months_since_first_donation',
'whether he/she donated blood in March 2007': 'made_donation_in_march_2007'
})
```
Notice that the majority class (did not donate blood in March 2007) occurs about 3/4 of the time.
This is the accuracy score for the "majority class baseline" (the accuracy score we'd get by just guessing the majority class every time).
```
donors['made_donation_in_march_2007'].value_counts(normalize=True)
#explore data
donors.describe()
donors.shape
#change dataset name & look at 1st 5 rows
df = donors
df.head()
```
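As a cross-check of the majority-class baseline described above, the same accuracy can be computed explicitly. This is a small sketch that only uses columns already present in `donors`.
```
from sklearn.metrics import accuracy_score

# Always predict the most common class (0 = did not donate in March 2007)
majority_class = donors['made_donation_in_march_2007'].mode()[0]
baseline_predictions = [majority_class] * len(donors)
print('Majority class baseline accuracy:',
      accuracy_score(donors['made_donation_in_march_2007'], baseline_predictions))
```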
## 1.1. Do train/test split. Arrange data into X features matrix and y target vector
Do these steps in either order.
Use scikit-learn's train/test split function to split randomly. (You can include 75% of the data in the train set, and hold out 25% for the test set, which is the default.)
```
#1.1
#Use scikit-learn's train/test split function to split randomly.
#(You can include 75% of the data in the train set, and hold out 25% for the test set, which is the default.)
'''
target = 'made_donation_in_march_2007'
features = train.columes.drop([target])
X_train = train[features]
y_train[] = train[made_donation_in_march_2007]
X_test = test[features]
y_test = test[made_donation_in_march_2007]
'''
from sklearn.model_selection import train_test_split
X = df.drop(columns='made_donation_in_march_2007')
y = df.made_donation_in_march_2007
# 75% train / 25% test split (train_size=0.75, which is also the default)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=43, train_size=0.75)
```
## 1.2. Use scikit-learn to fit a logistic regression model
You may use any number of features
```
#1.2
import category_encoders as ce
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
target = 'made_donation_in_march_2007'
features = df.columns.drop([target])
X_val = X_test
y_val = y_test
print(X_train.shape, X_val.shape)
encoder = ce.OneHotEncoder(use_cat_names=True)
X_train_encoded = encoder.fit_transform(X_train)
X_val_encoded = encoder.transform(X_val)
print(X_train_encoded.shape, X_val_encoded.shape)
imputer = SimpleImputer(strategy='mean')
X_train_imputed = imputer.fit_transform(X_train_encoded)
X_val_imputed = imputer.transform(X_val_encoded)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_imputed)
X_val_scaled = scaler.transform(X_val_imputed)
model = LogisticRegressionCV(cv=5, n_jobs=-1, random_state=42)
model.fit(X_train_scaled, y_train)
```
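One detail worth keeping in mind from the cell above: any new data must go through the transformers that were fitted on the training set, using `transform` rather than `fit_transform`. A sketch of that pattern follows, where `X_val` simply stands in for new, unseen donors.
```
# Reuse the fitted encoder / imputer / scaler on new data; never re-fit them here
X_new_encoded = encoder.transform(X_val)
X_new_imputed = imputer.transform(X_new_encoded)
X_new_scaled = scaler.transform(X_new_imputed)
print(model.predict(X_new_scaled)[:10])
```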
## 1.3. Report classification metric: accuracy
What is your model's accuracy on the test set?
Don't worry if your model doesn't beat the majority class baseline. That's okay!
_"The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data."_ —[John Tukey](https://en.wikiquote.org/wiki/John_Tukey)
(Also, if we used recall score instead of accuracy score, then your model would almost certainly beat the baseline. We'll discuss how to choose and interpret evaluation metrics throughout this unit.)
```
print('Validation Accuracy', model.score(X_val_scaled, y_val))
```
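For the recall comparison mentioned above, here is a sketch of how it could be computed on the validation set. Since the majority-class baseline never predicts a donation, its recall for the positive class is 0, so almost any model beats it on this metric.
```
from sklearn.metrics import recall_score

y_val_pred = model.predict(X_val_scaled)
print('Validation Recall:', recall_score(y_val, y_val_pred))
```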
# Part 2, Regression: Predict home prices in Ames, Iowa 🏠
You'll use historical housing data. ***There's a data dictionary at the bottom of the notebook.***
Run this code cell to load the dataset:
```
import pandas as pd
URL = 'https://drive.google.com/uc?export=download&id=1522WlEW6HFss36roD_Cd9nybqSuiVcCK'
homes = pd.read_csv(URL)
assert homes.shape == (2904, 47)
```
## 2.1. Begin with baselines
What is the Mean Absolute Error and R^2 score for a mean baseline? (You can get these estimated scores using all your data, before splitting it.)
```
df = homes
df.head()

# Common regression baseline: guess the mean for every sample.
# (Common classification baseline: guess the most common category for every sample.)
guess = df['SalePrice'].mean()
errors = guess - df['SalePrice']

from sklearn.metrics import r2_score
import numpy as np

# Mean Absolute Error for the mean baseline
mae = errors.abs().mean()
print(f'If we just guessed every Ames, Iowa home sold for ${guess:,.0f},')
print(f'we would be off by ${mae:,.0f} on average.')

# R^2 for the mean baseline: compare the true prices to the constant guess.
# A model that always predicts the mean of y gets an R^2 of 0.0
# (https://scikit-learn.org/stable/modules/model_evaluation.html#r2-score).
print('Baseline R^2:', r2_score(df['SalePrice'], np.full(len(df), guess)))

# If needed, drop columns that would cause "leakage" before modeling, e.g.:
# df = df.drop(columns=[...])
```
## 2.2. Do train/validate/test split
Train on houses sold in the years 2006 - 2008. (1,920 rows)
Validate on house sold in 2009. (644 rows)
Test on houses sold in 2010. (340 rows)
```
# Note: 'Yr_Sold' is stored as an integer year, so comparing it to strings like '2006'
# (or slicing by date strings) returns nothing useful. Filter on the integer values instead.

# TRAIN on houses sold in the years 2006 - 2008. (1,920 rows)
train = df[df['Yr_Sold'].isin([2006, 2007, 2008])]
print(train.shape)

# VALIDATE on houses sold in 2009. (644 rows)
val = df[df['Yr_Sold'] == 2009]
print(val.shape)

# TEST on houses sold in 2010. (340 rows)
test = df[df['Yr_Sold'] == 2010]
print(test.shape)
```
## 2.3. Arrange data into X features matrix and y target vector
Select at least one numeric feature and at least one categorical feature.
Otherwise, you may choose whichever features and however many you want.
```
# y target vector
target = 'SalePrice'
print(f"Our y target vector is the home's {target}.")

# X features matrix: at least one categorical and one numeric feature
features = ['Bldg_Type', 'Overall_Cond']
print(f'Our X features matrix consists of {len(features)} features (one categorical, one numeric): {features}.')

# Arrange X and y for each split
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_test = test[features]
y_test = test[target]

print(f'X_train is {X_train.shape}')
print(f'X_val is {X_val.shape} and X_test is {X_test.shape}')
```
## 2.4. Do one-hot encoding
Encode your categorical feature(s).
```
# One-hot encoding splits a categorical column into one binary (0/1) column per category.
import category_encoders as ce

encoder = ce.OneHotEncoder(use_cat_names=True)
X_train_encoded = encoder.fit_transform(X_train)
X_val_encoded = encoder.transform(X_val)
X_test_encoded = encoder.transform(X_test)
print(X_train_encoded.shape, X_val_encoded.shape, X_test_encoded.shape)
```
## 2.5. Use scikit-learn to fit a linear regression or ridge regression model
Fit your model.
```
# Fit a linear regression model, following the 5-step process:

# 1. Import the appropriate estimator class from scikit-learn
from sklearn.linear_model import LinearRegression

# 2. Instantiate this class
model = LinearRegression()

# 3. Arrange X features matrix & y target vector (done in sections 2.3 and 2.4)

# 4. Fit the model on the encoded training data
model.fit(X_train_encoded, y_train)

# 5. Apply the model to new data
y_pred = model.predict(X_test_encoded)
```
## 2.6. Report validation MAE and $R^2$
What is your model's Mean Absolute Error and $R^2$ score on the validation set? (You are not graded on how high or low your validation scores are.)
```
from sklearn.metrics import mean_absolute_error, r2_score

# Evaluate on the validation set
y_val_pred = model.predict(X_val_encoded)
print('Validation MAE:', mean_absolute_error(y_val, y_val_pred))
print('Validation R^2:', r2_score(y_val, y_val_pred))
```
# Stretch Goals, Regression
- Make at least 2 visualizations to explore relationships between features and target. You may use any visualization library
- Try at least 3 feature combinations. You may select features manually, or automatically
- Report validation MAE and $R^2$ for each feature combination you try
- Report test MAE and $R^2$ for your final model
- Print or plot the coefficients for the features in your model (one way to get started with these goals is sketched below)
```
```
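Here is a minimal, optional sketch for these stretch goals, reusing the `train`/`val` splits defined above. The feature combinations are illustrative choices (not prescribed by the assignment), and the visualizations are left out for brevity.
```
# Try a few feature combinations, report validation MAE and R^2, and print coefficients.
import pandas as pd
import category_encoders as ce
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, r2_score

target = 'SalePrice'
feature_sets = [
    ['Bldg_Type', 'Overall_Cond'],
    ['Bldg_Type', 'Gr_Liv_Area'],
    ['Overall_Qual', 'Gr_Liv_Area', 'Year_Built'],
]

for features in feature_sets:
    encoder = ce.OneHotEncoder(use_cat_names=True)
    X_tr = encoder.fit_transform(train[features])
    X_va = encoder.transform(val[features])
    model = LinearRegression().fit(X_tr, train[target])
    y_va_pred = model.predict(X_va)
    print(features)
    print('  Validation MAE:', round(mean_absolute_error(val[target], y_va_pred)))
    print('  Validation R^2:', round(r2_score(val[target], y_va_pred), 3))
    # Coefficients for each encoded feature column
    print(pd.Series(model.coef_, index=X_tr.columns), '\n')
```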
# Data Dictionary
Here's a description of the data fields:
```
1st_Flr_SF: First Floor square feet
Bedroom_AbvGr: Bedrooms above grade (does NOT include basement bedrooms)
Bldg_Type: Type of dwelling
1Fam Single-family Detached
2FmCon Two-family Conversion; originally built as one-family dwelling
Duplx Duplex
TwnhsE Townhouse End Unit
TwnhsI Townhouse Inside Unit
Bsmt_Half_Bath: Basement half bathrooms
Bsmt_Full_Bath: Basement full bathrooms
Central_Air: Central air conditioning
N No
Y Yes
Condition_1: Proximity to various conditions
Artery Adjacent to arterial street
Feedr Adjacent to feeder street
Norm Normal
RRNn Within 200' of North-South Railroad
RRAn Adjacent to North-South Railroad
PosN Near positive off-site feature--park, greenbelt, etc.
PosA Adjacent to positive off-site feature
RRNe Within 200' of East-West Railroad
RRAe Adjacent to East-West Railroad
Condition_2: Proximity to various conditions (if more than one is present)
Artery Adjacent to arterial street
Feedr Adjacent to feeder street
Norm Normal
RRNn Within 200' of North-South Railroad
RRAn Adjacent to North-South Railroad
PosN Near positive off-site feature--park, greenbelt, etc.
PosA Adjacent to positive off-site feature
RRNe Within 200' of East-West Railroad
RRAe Adjacent to East-West Railroad
Electrical: Electrical system
SBrkr Standard Circuit Breakers & Romex
FuseA Fuse Box over 60 AMP and all Romex wiring (Average)
FuseF 60 AMP Fuse Box and mostly Romex wiring (Fair)
FuseP 60 AMP Fuse Box and mostly knob & tube wiring (poor)
Mix Mixed
Exter_Cond: Evaluates the present condition of the material on the exterior
Ex Excellent
Gd Good
TA Average/Typical
Fa Fair
Po Poor
Exter_Qual: Evaluates the quality of the material on the exterior
Ex Excellent
Gd Good
TA Average/Typical
Fa Fair
Po Poor
Exterior_1st: Exterior covering on house
AsbShng Asbestos Shingles
AsphShn Asphalt Shingles
BrkComm Brick Common
BrkFace Brick Face
CBlock Cinder Block
CemntBd Cement Board
HdBoard Hard Board
ImStucc Imitation Stucco
MetalSd Metal Siding
Other Other
Plywood Plywood
PreCast PreCast
Stone Stone
Stucco Stucco
VinylSd Vinyl Siding
Wd Sdng Wood Siding
WdShing Wood Shingles
Exterior_2nd: Exterior covering on house (if more than one material)
AsbShng Asbestos Shingles
AsphShn Asphalt Shingles
BrkComm Brick Common
BrkFace Brick Face
CBlock Cinder Block
CemntBd Cement Board
HdBoard Hard Board
ImStucc Imitation Stucco
MetalSd Metal Siding
Other Other
Plywood Plywood
PreCast PreCast
Stone Stone
Stucco Stucco
VinylSd Vinyl Siding
Wd Sdng Wood Siding
WdShing Wood Shingles
Foundation: Type of foundation
BrkTil Brick & Tile
CBlock Cinder Block
PConc Poured Concrete
Slab Slab
Stone Stone
Wood Wood
Full_Bath: Full bathrooms above grade
Functional: Home functionality (Assume typical unless deductions are warranted)
Typ Typical Functionality
Min1 Minor Deductions 1
Min2 Minor Deductions 2
Mod Moderate Deductions
Maj1 Major Deductions 1
Maj2 Major Deductions 2
Sev Severely Damaged
Sal Salvage only
Gr_Liv_Area: Above grade (ground) living area square feet
Half_Bath: Half baths above grade
Heating: Type of heating
Floor Floor Furnace
GasA Gas forced warm air furnace
GasW Gas hot water or steam heat
Grav Gravity furnace
OthW Hot water or steam heat other than gas
Wall Wall furnace
Heating_QC: Heating quality and condition
Ex Excellent
Gd Good
TA Average/Typical
Fa Fair
Po Poor
House_Style: Style of dwelling
1Story One story
1.5Fin One and one-half story: 2nd level finished
1.5Unf One and one-half story: 2nd level unfinished
2Story Two story
2.5Fin Two and one-half story: 2nd level finished
2.5Unf Two and one-half story: 2nd level unfinished
SFoyer Split Foyer
SLvl Split Level
Kitchen_AbvGr: Kitchens above grade
Kitchen_Qual: Kitchen quality
Ex Excellent
Gd Good
TA Typical/Average
Fa Fair
Po Poor
Land_Contour: Flatness of the property
Lvl Near Flat/Level
Bnk Banked - Quick and significant rise from street grade to building
HLS Hillside - Significant slope from side to side
Low Depression
Land_Slope: Slope of property
Gtl Gentle slope
Mod Moderate Slope
Sev Severe Slope
Lot_Area: Lot size in square feet
Lot_Config: Lot configuration
Inside Inside lot
Corner Corner lot
CulDSac Cul-de-sac
FR2 Frontage on 2 sides of property
FR3 Frontage on 3 sides of property
Lot_Shape: General shape of property
Reg Regular
IR1 Slightly irregular
IR2 Moderately Irregular
IR3 Irregular
MS_SubClass: Identifies the type of dwelling involved in the sale.
20 1-STORY 1946 & NEWER ALL STYLES
30 1-STORY 1945 & OLDER
40 1-STORY W/FINISHED ATTIC ALL AGES
45 1-1/2 STORY - UNFINISHED ALL AGES
50 1-1/2 STORY FINISHED ALL AGES
60 2-STORY 1946 & NEWER
70 2-STORY 1945 & OLDER
75 2-1/2 STORY ALL AGES
80 SPLIT OR MULTI-LEVEL
85 SPLIT FOYER
90 DUPLEX - ALL STYLES AND AGES
120 1-STORY PUD (Planned Unit Development) - 1946 & NEWER
150 1-1/2 STORY PUD - ALL AGES
160 2-STORY PUD - 1946 & NEWER
180 PUD - MULTILEVEL - INCL SPLIT LEV/FOYER
190 2 FAMILY CONVERSION - ALL STYLES AND AGES
MS_Zoning: Identifies the general zoning classification of the sale.
A Agriculture
C Commercial
FV Floating Village Residential
I Industrial
RH Residential High Density
RL Residential Low Density
RP Residential Low Density Park
RM Residential Medium Density
Mas_Vnr_Type: Masonry veneer type
BrkCmn Brick Common
BrkFace Brick Face
CBlock Cinder Block
None None
Stone Stone
Mo_Sold: Month Sold (MM)
Neighborhood: Physical locations within Ames city limits
Blmngtn Bloomington Heights
Blueste Bluestem
BrDale Briardale
BrkSide Brookside
ClearCr Clear Creek
CollgCr College Creek
Crawfor Crawford
Edwards Edwards
Gilbert Gilbert
IDOTRR Iowa DOT and Rail Road
MeadowV Meadow Village
Mitchel Mitchell
NAmes North Ames
NoRidge Northridge
NPkVill Northpark Villa
NridgHt Northridge Heights
NWAmes Northwest Ames
OldTown Old Town
SWISU South & West of Iowa State University
Sawyer Sawyer
SawyerW Sawyer West
Somerst Somerset
StoneBr Stone Brook
Timber Timberland
Veenker Veenker
Overall_Cond: Rates the overall condition of the house
10 Very Excellent
9 Excellent
8 Very Good
7 Good
6 Above Average
5 Average
4 Below Average
3 Fair
2 Poor
1 Very Poor
Overall_Qual: Rates the overall material and finish of the house
10 Very Excellent
9 Excellent
8 Very Good
7 Good
6 Above Average
5 Average
4 Below Average
3 Fair
2 Poor
1 Very Poor
Paved_Drive: Paved driveway
Y Paved
P Partial Pavement
N Dirt/Gravel
Roof_Matl: Roof material
ClyTile Clay or Tile
CompShg Standard (Composite) Shingle
Membran Membrane
Metal Metal
Roll Roll
Tar&Grv Gravel & Tar
WdShake Wood Shakes
WdShngl Wood Shingles
Roof_Style: Type of roof
Flat Flat
Gable Gable
Gambrel Gambrel (Barn)
Hip Hip
Mansard Mansard
Shed Shed
SalePrice: the sales price for each house
Sale_Condition: Condition of sale
Normal Normal Sale
Abnorml Abnormal Sale - trade, foreclosure, short sale
AdjLand Adjoining Land Purchase
Alloca Allocation - two linked properties with separate deeds, typically condo with a garage unit
Family Sale between family members
Partial Home was not completed when last assessed (associated with New Homes)
Sale_Type: Type of sale
WD Warranty Deed - Conventional
CWD Warranty Deed - Cash
VWD Warranty Deed - VA Loan
New Home just constructed and sold
COD Court Officer Deed/Estate
Con Contract 15% Down payment regular terms
ConLw Contract Low Down payment and low interest
ConLI Contract Low Interest
ConLD Contract Low Down
Oth Other
Street: Type of road access to property
Grvl Gravel
Pave Paved
TotRms_AbvGrd: Total rooms above grade (does not include bathrooms)
Utilities: Type of utilities available
AllPub All public Utilities (E,G,W,& S)
NoSewr Electricity, Gas, and Water (Septic Tank)
NoSeWa Electricity and Gas Only
ELO Electricity only
Year_Built: Original construction date
Year_Remod/Add: Remodel date (same as construction date if no remodeling or additions)
Yr_Sold: Year Sold (YYYY)
```
# Preprocessing
*Written by Luke Chang* (modified by Arvid Lundervold for the BMED360 course)
Being able to study brain activity associated with cognitive processes in humans is an amazing achievement. However, as we have noted throughout this course, there is an extraordinary amount of noise and very little signal, which makes it difficult to make inferences about brain function using BOLD imaging. A critical step before we can perform any analyses is to do our best to remove as much of the noise as possible. The series of steps to remove noise comprise our *neuroimaging data **preprocessing** pipeline*.

In this lab, we will go over the basics of preprocessing fMRI data using the [fmriprep](https://fmriprep.readthedocs.io/en/stable/) preprocessing pipeline. We will cover:
- Image transformations
- Head motion correction
- Spatial Normalization
- Spatial Smoothing
There are other preprocessing steps that are also common but not necessarily performed by all labs, such as slice timing correction and distortion correction. We will not be discussing these in depth outside of the videos.
Let's start with watching a short video by Martin Lindquist to get a general overview of the main steps of preprocessing and the basics of how to transform images and register them to other images.
```
from IPython.display import YouTubeVideo
YouTubeVideo('Qc3rRaJWOc4')
```
# Image Transformations
Ok, now let's dive deeper into how we can transform images into different spaces using linear transformations.
Recall from our introduction to neuroimaging data lab, that neuroimaging data is typically stored in a nifti container, which contains a 3D or 4D matrix of the voxel intensities and also an affine matrix, which provides instructions for how to transform the matrix into another space.
Let's create an interactive plot using ipywidgets so that we can get an intuition for how these affine matrices can be used to transform a 3D image.
We can move the sliders to play with applying rigid body transforms to a 3D cube. A rigid body transformation has 6 parameters: translation along the x, y, & z axes, and rotation around each of these axes. The key thing to remember is that a rigid body transform doesn't allow the image to be fundamentally changed. A full 12-parameter affine transformation adds an additional 3 parameters each for scaling and shearing, which can change the shape of the cube.
Try moving some of the sliders around. Note that the viewer is a little slow. Each time you move a slider it is applying an affine transformation to the matrix and re-plotting.
Translation moves the cube in x, y, and z dimensions.
We can also rotate the cube around the x, y, and z axes where the origin is the center point. Continuing to rotate around the point will definitely lead to the cube leaving the current field of view, but it will come back if you keep rotating it.
You'll notice that every time we change the slider and apply a new affine transformation, the cube gets a little distorted with aliasing. Often we need to interpolate the image after applying a transformation to fill in the gaps. It is important to keep in mind that every time we apply an affine transformation to our images, the result is not a perfect representation of the original data. Additional steps like reslicing, interpolation, and spatial smoothing can help with this.
```
%matplotlib inline
from mpl_toolkits import mplot3d
import numpy as np
import matplotlib.pyplot as plt
from nibabel.affines import apply_affine, from_matvec, to_matvec
from scipy.ndimage import affine_transform, map_coordinates
import nibabel as nib
from ipywidgets import interact, FloatSlider
def plot_rigid_body_transformation(trans_x=0, trans_y=0, trans_z=0, rot_x=0, rot_y=0, rot_z=0):
'''This plot creates an interactive demo to illustrate the parameters of a rigid body transformation'''
fov = 30
radius = 10
x, y, z = np.indices((fov, fov, fov))
cube = ((x > fov//2 - radius//2) & (x < fov//2 + radius//2)) & ((y > fov//2 - radius//2) & (y < fov//2 + radius//2)) & ((z > fov//2 - radius//2) & (z < fov//2 + radius//2 ))
cube = cube.astype(int)
vec = np.array([trans_x, trans_y, trans_z])
rot_x = np.radians(rot_x)
rot_y = np.radians(rot_y)
rot_z = np.radians(rot_z)
rot_axis1 = np.array([[1, 0, 0],
[0, np.cos(rot_x), -np.sin(rot_x)],
[0, np.sin(rot_x), np.cos(rot_x)]])
rot_axis2 = np.array([[np.cos(rot_y), 0, np.sin(rot_y)],
[0, 1, 0],
[-np.sin(rot_y), 0, np.cos(rot_y)]])
rot_axis3 = np.array([[np.cos(rot_z), -np.sin(rot_z), 0],
[np.sin(rot_z), np.cos(rot_z), 0],
[0, 0, 1]])
rotation = rot_axis1 @ rot_axis2 @ rot_axis3
affine = from_matvec(rotation, vec)
i_coords, j_coords, k_coords = np.meshgrid(range(cube.shape[0]), range(cube.shape[1]), range(cube.shape[2]), indexing='ij')
coordinate_grid = np.array([i_coords, j_coords, k_coords])
coords_last = coordinate_grid.transpose(1, 2, 3, 0)
transformed = apply_affine(affine, coords_last)
coords_first = transformed.transpose(3, 0, 1, 2)
fig = plt.figure(figsize=(15, 12))
ax = plt.axes(projection='3d')
ax.voxels(map_coordinates(cube, coords_first))
ax.set_xlabel('x', fontsize=16)
ax.set_ylabel('y', fontsize=16)
ax.set_zlabel('z', fontsize=16)
interact(plot_rigid_body_transformation,
trans_x=FloatSlider(value=0, min=-10, max=10, step=1),
trans_y=FloatSlider(value=0, min=-10, max=10, step=1),
trans_z=FloatSlider(value=0, min=-10, max=10, step=1),
rot_x=FloatSlider(value=0, min=0, max=360, step=15),
rot_y=FloatSlider(value=0, min=0, max=360, step=15),
rot_z=FloatSlider(value=0, min=0, max=360, step=15))
```

Ok, so what's going on behind the sliders?
Let's borrow some of the material available in the nibabel [documentation](https://nipy.org/nibabel/coordinate_systems.html) to understand how these transformations work.
The affine matrix is a way to transform images between spaces. In general, we have some voxel space coordinate $(i, j, k)$, and we want to figure out how to remap this into a reference space coordinate $(x, y, z)$.
It can be useful to think of this as a coordinate transform function $f$ that accepts a voxel coordinate in the original space as an *input* and returns a coordinate in the *output* reference space:
$$(x, y, z) = f(i, j, k)$$
In theory $f$ could be a complicated non-linear function, but in practice we typically assume that the relationship between $(i, j, k)$ and $(x, y, z)$ is linear (or *affine*), and can be encoded with linear affine transformations comprising translations, rotations, and zooms.
Scaling (zooming) in three dimensions can be represented by a diagonal 3 by 3
matrix. Here's how to zoom the first dimension by $p$, the second by $q$ and
the third by $r$ units:
$$
\begin{bmatrix}
x\\
y\\
z
\end{bmatrix}
\quad
=
\quad
\begin{bmatrix}
p \, i\\
q \, j\\
r \, k
\end{bmatrix}
\quad
=
\quad
\begin{bmatrix}
p & 0 & 0 \\
0 & q & 0 \\
0 & 0 & r
\end{bmatrix}
\quad
\begin{bmatrix}
i\\
j\\
k
\end{bmatrix}
$$
A rotation in three dimensions can be represented as a 3 by 3 *rotation matrix* [wikipedia rotation matrix](https://en.wikipedia.org/wiki/Rotation_matrix). For example, here is a rotation by $\theta$ radians around the third array axis:
$$
\begin{bmatrix}
x \\
y \\
z
\end{bmatrix}
\quad
=
\quad
\begin{bmatrix}
\cos(\theta) & -\sin(\theta) & 0 \\
\sin(\theta) & \cos(\theta) & 0 \\
0 & 0 & 1 \\
\end{bmatrix}
\quad
\begin{bmatrix}
i \\
j \\
k
\end{bmatrix}
$$
This is a rotation by $\phi$ radians around the second array axis:
$$
\begin{bmatrix}
x \\
y \\
z \\
\end{bmatrix}
\quad
=
\quad
\begin{bmatrix}
\cos(\phi) & 0 & \sin(\phi) \\
0 & 1 & 0 \\
-\sin(\phi) & 0 & \cos(\phi) \\
\end{bmatrix}
\quad
\begin{bmatrix}
i \\
j \\
k
\end{bmatrix}
$$
A rotation of $\gamma$ radians around the first array axis:
$$
\begin{bmatrix}
x\\
y\\
z
\end{bmatrix}
\quad
=
\quad
\begin{bmatrix}
1 & 0 & 0 \\
0 & \cos(\gamma) & -\sin(\gamma) \\
0 & \sin(\gamma) & \cos(\gamma) \\
\end{bmatrix}
\quad
\begin{bmatrix}
i \\
j \\
k
\end{bmatrix}
$$
Zoom and rotation matrices can be combined by matrix multiplication.
Here's a scaling of $p, q, r$ units followed by a rotation of $\theta$ radians
around the third axis followed by a rotation of $\phi$ radians around the
second axis:
$$
\begin{bmatrix}
x \\
y \\
z
\end{bmatrix}
\quad
=
\quad
\begin{bmatrix}
\cos(\phi) & 0 & \sin(\phi) \\
0 & 1 & 0 \\
-\sin(\phi) & 0 & \cos(\phi) \\
\end{bmatrix}
\quad
\begin{bmatrix}
\cos(\theta) & -\sin(\theta) & 0 \\
\sin(\theta) & \cos(\theta) & 0 \\
0 & 0 & 1 \\
\end{bmatrix}
\quad
\begin{bmatrix}
p & 0 & 0 \\
0 & q & 0 \\
0 & 0 & r \\
\end{bmatrix}
\quad
\begin{bmatrix}
i\\
j\\
k\\
\end{bmatrix}
$$
This can also be written:
$$
M
\quad
=
\quad
\begin{bmatrix}
\cos(\phi) & 0 & \sin(\phi) \\
0 & 1 & 0 \\
-\sin(\phi) & 0 & \cos(\phi) \\
\end{bmatrix}
\quad
\begin{bmatrix}
\cos(\theta) & -\sin(\theta) & 0 \\
\sin(\theta) & \cos(\theta) & 0 \\
0 & 0 & 1 \\
\end{bmatrix}
\quad
\begin{bmatrix}
p & 0 & 0 \\
0 & q & 0 \\
0 & 0 & r \\
\end{bmatrix}
$$
$$
\begin{bmatrix}
x \\
y \\
z
\end{bmatrix}
\quad
=
\quad
M
\quad
\begin{bmatrix}
i \\
j \\
k
\end{bmatrix}
$$
This might be obvious because the matrix multiplication is the result of
applying each transformation in turn on the coordinates output from the
previous transformation. Combining the transformations into a single matrix
$M$ works because matrix multiplication is associative -- $ABCD = (ABC)D$.
A translation in three dimensions can be represented as a length 3 vector to
be added to the length 3 coordinate. For example, a translation of $a$ units
on the first axis, $b$ on the second and $c$ on the third might be written
as:
$$
\begin{bmatrix}
x \\
y \\
z
\end{bmatrix}
\quad
=
\quad
\begin{bmatrix}
i \\
j \\
k
\end{bmatrix}
\quad
+
\quad
\begin{bmatrix}
a \\
b \\
c
\end{bmatrix}
$$
We can write our function $f$ as a combination of matrix multiplication by some 3 by 3 rotation / zoom matrix $M$ followed by addition of a 3 by 1 translation vector $(a, b, c)$
$$
\begin{bmatrix}
x \\
y \\
z
\end{bmatrix}
\quad
=
\quad
M
\quad
\begin{bmatrix}
i \\
j \\
k
\end{bmatrix}
\quad
+
\quad
\begin{bmatrix}
a \\
b \\
c
\end{bmatrix}
$$
We could record the parameters necessary for $f$ as the 3 by 3 matrix, $M$
and the 3 by 1 vector $(a, b, c)$.
In fact, the 4 by 4 image *affine array* includes this exact information. If $m_{i,j}$ is the value in row $i$ column $j$ of matrix $M$, then the image affine matrix $A$ is:
$$
A
\quad
=
\quad
\begin{bmatrix}
m_{1,1} & m_{1,2} & m_{1,3} & a \\
m_{2,1} & m_{2,2} & m_{2,3} & b \\
m_{3,1} & m_{3,2} & m_{3,3} & c \\
0 & 0 & 0 & 1 \\
\end{bmatrix}
$$
Why the extra row of $[0, 0, 0, 1]$? We need this row because we have rephrased the combination of rotations / zooms and translations as a transformation in *homogeneous coordinates* (see [wikipedia homogeneous
coordinates](https://en.wikipedia.org/wiki/Homogeneous_coordinates)). This is a trick that allows us to put the translation part into the same matrix as the rotations / zooms, so that both translations and rotations / zooms can be applied by matrix multiplication. In order to make this work, we have to add an extra 1 to our input and output coordinate vectors:
$$
\begin{bmatrix}
x \\
y \\
z \\
1
\end{bmatrix}
\quad
=
\quad
\begin{bmatrix}
m_{1,1} & m_{1,2} & m_{1,3} & a \\
m_{2,1} & m_{2,2} & m_{2,3} & b \\
m_{3,1} & m_{3,2} & m_{3,3} & c \\
0 & 0 & 0 & 1 \\
\end{bmatrix}
\quad
\begin{bmatrix}
i \\
j \\
k \\
1
\end{bmatrix}
$$
This results in the same transformation as applying $M$ and $(a, b, c)$ separately. One advantage of encoding transformations this way is that we can combine two sets of rotations, zooms, translations by matrix multiplication of the two corresponding affine matrices.
In practice, although it is common to combine 3D transformations using 4 x 4 affine matrices, we usually *apply* the transformations by breaking up the affine matrix into its component $M$ matrix and $(a, b, c)$ vector and doing:
$$
\begin{bmatrix}
x \\
y \\
z
\end{bmatrix}
\quad
=
\quad
M
\quad
\begin{bmatrix}
i \\
j \\
k
\end{bmatrix}
\quad
+
\quad
\begin{bmatrix}
a \\
b \\
c
\end{bmatrix}
$$
As long as the last row of the 4 by 4 is $[0, 0, 0, 1]$, applying the transformations in this way is mathematically the same as using the full 4 by 4 form, without the inconvenience of adding the extra 1 to our input and output vectors.
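To make this concrete, here is a small numpy sketch (with arbitrary numbers) that builds a 4 by 4 affine from a rotation/zoom matrix $M$ and a translation $(a, b, c)$, applies it to a voxel coordinate both ways, and shows that two affines compose by matrix multiplication:
```
# Build a 4x4 affine from M and a translation, and apply it two equivalent ways.
import numpy as np
from nibabel.affines import from_matvec, apply_affine

theta = np.radians(30)
M = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             3]])   # rotate around z, zoom z by 3
translation = np.array([10, -5, 2])                  # (a, b, c)

A = from_matvec(M, translation)                      # 4x4 affine in homogeneous form

ijk = np.array([2, 3, 4])                            # a voxel coordinate
xyz_homogeneous = (A @ np.append(ijk, 1))[:3]        # multiply in homogeneous coordinates
xyz_separate = M @ ijk + translation                 # apply M and the translation separately
print(np.allclose(xyz_homogeneous, xyz_separate))    # True

# Because of the homogeneous form, two transformations compose by matrix multiplication.
B = from_matvec(np.eye(3), [1, 1, 1])
print(np.allclose(apply_affine(B @ A, ijk),
                  apply_affine(B, apply_affine(A, ijk))))  # True
```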
You can think of the image affine as a combination of a series of transformations to go from voxel coordinates to mm coordinates in terms of the magnet isocenter. Here is the EPI affine broken down into a series of transformations, with the results shown on the localizer image:
<img src="https://nipy.org/nibabel/_images/illustrating_affine.png" />
Applying different affine transformations allows us to rotate, reflect, scale, and shear the image.
# Cost Functions
Now that we have learned how affine transformations can be applied to transform images into different spaces, how can we use this to register one brain image to another image?
The key is to identify a way to quantify how well aligned the two images are to each other. Our visual systems are very good at identifying when two images are aligned; however, we need a quantitative alignment measure. These measures are often called *cost functions*.
There are many different types of cost functions depending on the types of images that are being aligned. For example, a common cost function is the sum of squared differences, which is minimized in much the same way that regression lines are fit by minimizing deviations from the observed data. This measure works best if the images are of the same type and have roughly equivalent signal intensities.
Let's create another interactive plot and find the optimal X & Y translation parameters that minimize the difference between a two-dimensional target image to a reference image.
```
def plot_affine_cost(trans_x=0, trans_y=0):
'''This function creates an interactive demo to highlight how a cost function works in image registration.'''
fov = 30
radius = 15
x, y = np.indices((fov, fov))
square1 = (x < radius-2) & (y < radius-2)
square2 = ((x > fov//2 - radius//2) & (x < fov//2 + radius//2)) & ((y > fov//2 - radius//2) & (y < fov//2 + radius//2))
square1 = square1.astype(float)
square2 = square2.astype(float)
vec = np.array([trans_y, trans_x])
affine = from_matvec(np.eye(2), vec)
i_coords, j_coords = np.meshgrid(range(square1.shape[0]), range(square1.shape[1]), indexing='ij')
coordinate_grid = np.array([i_coords, j_coords])
coords_last = coordinate_grid.transpose(1, 2, 0)
transformed = apply_affine(affine, coords_last)
coords_first = transformed.transpose(2, 0, 1)
transformed_square = map_coordinates(square1, coords_first)
f,a = plt.subplots(ncols=3, figsize=(15, 5))
a[0].imshow(transformed_square)
a[0].set_xlabel('x', fontsize=16)
a[0].set_ylabel('y', fontsize=16)
a[0].set_title('Target Image', fontsize=18)
a[1].imshow(square2)
a[1].set_xlabel('x', fontsize=16)
a[1].set_ylabel('y', fontsize=16)
a[1].set_title('Reference Image', fontsize=18)
sse = np.sum((transformed_square - square2)**2)
a[2].bar(0, sse)
a[2].set_ylim([0, 350])
a[2].set_ylabel('SSE', fontsize=18)
a[2].set_xlabel('Cost Function', fontsize=18)
a[2].set_xticks([])
a[2].set_title(f'Parameters: ({int(trans_x)},{int(trans_y)})', fontsize=20)
plt.tight_layout()
interact(plot_affine_cost,
trans_x=FloatSlider(value=0, min=-30, max=0, step=1),
trans_y=FloatSlider(value=0, min=-30, max=0, step=1))
```

You probably had to move the sliders back and forth until you were able to reduce the sum of squared error to zero. This cost function grows rapidly the further the target image is from the reference. The process of minimizing (or sometimes maximizing) cost functions to identify the best fitting parameters is called *optimization* and is a concept that is core to fitting models to data across many different disciplines.
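To make the idea of optimization a bit more concrete, here is a small sketch (separate from the interactive demo above) that brute-forces the best integer translation by evaluating the SSE cost at every candidate shift and keeping the minimum; real registration software uses much smarter optimizers over continuous parameters.
```
# Brute-force search for the integer (x, y) translation that minimizes the SSE
# between a misaligned copy of an image and a reference. np.roll provides the shift,
# so this is only a toy illustration of cost-function minimization.
import numpy as np

fov, radius = 30, 15
x, y = np.indices((fov, fov))
reference = (((x > fov//2 - radius//2) & (x < fov//2 + radius//2)) &
             ((y > fov//2 - radius//2) & (y < fov//2 + radius//2))).astype(float)
target = np.roll(reference, shift=(6, -4), axis=(0, 1))  # misaligned copy

best = None
for dx in range(-10, 11):
    for dy in range(-10, 11):
        sse = np.sum((np.roll(target, shift=(dx, dy), axis=(0, 1)) - reference) ** 2)
        if best is None or sse < best[0]:
            best = (sse, dx, dy)

print(best)  # SSE of 0 at the shift that undoes the (6, -4) misalignment, i.e. (-6, 4)
```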
| Cost Function | Use Case | Example |
|:---:|:---:|:---:|
| Sum of Squared Error | Images of same modality and scaling | Two T2* images |
| Normalized correlation | Images of same modality | Two T1 images |
| Correlation ratio | Any modality | T1 and FLAIR |
| Mutual information or normalized mutual information | Any modality | T1 and CT |
| Boundary Based Registration | Images with some contrast across boundaries of interest | EPI and T1 |
# Realignment
Now let's put everything we learned together to understand how we can correct for head motion in functional images that occurred during a scanning session. It is extremely important to make sure that a specific voxel has the same 3D coordinate across all time points to be able to model neural processes. This of course is made difficult by the fact that participants move during a scanning session and also in between runs.
Realignment is the preprocessing step in which a rigid body transformation is applied to each volume to align them to a common space. One typically needs to choose a reference volume, which might be the first, middle, or last volume, or the mean of all volumes.
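In this course the realignment is handled by fmriprep, but as a rough sketch, this is how one could run motion correction directly with FSL's MCFLIRT through nipype (assuming FSL and nipype are installed; the filename here is hypothetical):
```
# Minimal sketch: rigid-body realignment of a 4D BOLD series with FSL MCFLIRT via nipype.
from nipype.interfaces.fsl import MCFLIRT

mcflt = MCFLIRT(in_file='sub-S01_task-localizer_bold.nii.gz',  # hypothetical filename
                mean_vol=True,      # realign to the mean volume as the reference
                save_plots=True)    # write the six motion parameters to a .par file
result = mcflt.run()
print(result.outputs)
```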
Let's look at an example of the translation and rotation parameters after running realignment on our first subject.
```
from os.path import expanduser, join
home = expanduser('~')
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from bids import BIDSLayout, BIDSValidator
import os
data_dir = join(home, 'prj', 'DartBrains', 'data', 'localizer')
layout = BIDSLayout(data_dir, derivatives=True)
data = pd.read_csv(layout.get(subject='S01', scope='derivatives', extension='.tsv')[0].path, sep='\t')
f,a = plt.subplots(ncols=2, figsize=(15,5))
data.loc[:,['trans_x','trans_y','trans_z']].plot(ax=a[0])
a[0].set_ylabel('Translation (mm)', fontsize=16)
a[0].set_xlabel('Time (TR)', fontsize=16)
a[0].set_title('Translation', fontsize=18)
data.loc[:,['rot_x','rot_y','rot_z']].plot(ax=a[1])
a[1].set_ylabel('Rotation (radian)', fontsize=16)
a[1].set_xlabel('Time (TR)', fontsize=16)
a[1].set_title('Rotation', fontsize=18)
```
Don't forget that even though realignment can put each volume into approximately the same position, head motion always distorts the magnetic field and can lead to nonlinear changes in signal intensity that will not be addressed by this procedure. In the resting-state literature, where many analyses are based on functional connectivity, head motion can lead to spurious correlations. Some researchers choose to exclude any subject that moved more than a certain amount. Others choose to remove the impact of these time points by deleting the volumes via *scrubbing* or by modeling each flagged volume with a dummy code in the first-level general linear models.
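One common way to quantify motion for these decisions is framewise displacement (FD), which sums the absolute volume-to-volume change across the six realignment parameters, converting rotations from radians to millimeters by assuming a roughly 50 mm head radius. Here is a minimal sketch using the confounds `data` loaded above (the 0.5 mm threshold is just an illustrative choice):
```
# Framewise displacement (Power et al., 2012) from the fmriprep realignment parameters.
import numpy as np

motion = data[['trans_x', 'trans_y', 'trans_z', 'rot_x', 'rot_y', 'rot_z']].copy()
motion[['rot_x', 'rot_y', 'rot_z']] *= 50   # radians -> mm on a 50 mm sphere
fd = motion.diff().abs().sum(axis=1).fillna(0)

threshold = 0.5  # mm; an illustrative cutoff -- labs use different values
print(f'{(fd > threshold).sum()} of {len(fd)} volumes exceed {threshold} mm FD')
```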
## Spatial Normalization
There are several other preprocessing steps that involve image registration. The main one is called *spatial normalization*, in which each subject's brain data is warped into a common stereotactic space. Talairach is an older space that has largely been superseded by the various templates developed by the Montreal Neurological Institute (MNI).
There are a variety of algorithms to warp subject data into stereotactic space. Linear 12-parameter affine transformations have increasingly been replaced by more complicated nonlinear normalizations that have hundreds to thousands of parameters.
One nonlinear algorithm that has performed very well across comparison studies is *diffeomorphic registration*, which produces invertible warps, so data can be transformed from subject space into stereotactic space and back again. This is the core of the [ANTs](http://stnava.github.io/ANTs/) algorithm that is implemented in fmriprep. See this [overview](https://elef.soic.indiana.edu/documentation/0.15.0.dev/examples_built/syn_registration_2d/) for more details.
Let's watch another short video by Martin Lindquist and Tor Wager to learn more about the core preprocessing steps.
```
YouTubeVideo('qamRGWSC-6g')
```
There are many different steps involved in the spatial normalization process and these details vary widely across various imaging software packages. We will briefly discuss some of the steps involved in the anatomical preprocessing pipeline implemented by fMRIprep and will be showing example figures from the output generated by the pipeline.
First, brains are extracted from the skull and surrounding dura mater. You can check and see how well the algorithm performed by examining the red outline.

Next, the anatomical images are segmented into different tissue types. These tissue maps are used for various types of analyses, including providing a grey matter mask to reduce the computational time in estimating statistics. In addition, they provide masks to aid in extracting average activity in CSF or white matter, which might be used as covariates in the statistical analyses to account for physiological noise.

### Spatial normalization of the anatomical T1w reference
fmriprep uses [ANTs](http://stnava.github.io/ANTs/) to perform nonlinear spatial normalization. It is easy to check how well the algorithm performed by viewing the results of aligning the T1w reference to the stereotactic reference space. Hover on the panels with the mouse pointer to transition between both spaces. We are using the MNI152NLin2009cAsym template.

### Alignment of functional and anatomical MRI data
Next, we can evaluate the quality of alignment of the functional data to the anatomical T1w image. FSL `flirt` was used to generate transformations from EPI space to T1w space, and the white matter mask calculated with FSL `fast` (brain tissue segmentation) was used for boundary-based registration (BBR). Note that nearest-neighbor interpolation is used in the reportlets in order to highlight potential spin-history and other artifacts, whereas the final images are resampled using Lanczos interpolation. Notice these images are much blurrier and show some distortion compared to the T1s.

# Spatial Smoothing
The last step we will cover in the preprocessing pipeline is *spatial smoothing*. This step involves applying a filter to the image, which removes high-frequency spatial information. It is identical to convolving a kernel with a 1-D signal, as we covered in the signal processing lab, except that here the kernel is a 3-D Gaussian. The amount of smoothing is determined by specifying the width of the distribution (i.e., the standard deviation) using the Full Width at Half Maximum (FWHM) parameter.
Why would we want to decrease our image resolution with spatial smoothing after we tried very hard to increase our resolution at the data acquisition stage? Because this step can increase the signal-to-noise ratio by reducing the impact of partial volume effects, residual anatomical differences following normalization, and other aliasing introduced by applying spatial transformations.
Here is what a 3D gaussian kernel looks like.
```
def plot_gaussian(sigma=2, kind='surface', cmap='viridis', linewidth=1, **kwargs):
    '''Generates a 3D matplotlib plot of a 2D Gaussian kernel'''
    mean = 0
    domain = 10
    x = np.arange(-domain + mean, domain + mean, sigma/10)
    y = np.arange(-domain + mean, domain + mean, sigma/10)
    x, y = np.meshgrid(x, y)
    r = (x ** 2 + y ** 2) / (2 * sigma ** 2)
    z = 1 / (2 * np.pi * sigma ** 2) * np.exp(-r)  # 2D Gaussian density
    fig = plt.figure(figsize=(12, 6))
    ax = plt.axes(projection='3d')
    if kind == 'wire':
        ax.plot_wireframe(x, y, z, cmap=cmap, linewidth=linewidth, **kwargs)
    elif kind == 'surface':
        ax.plot_surface(x, y, z, cmap=cmap, linewidth=linewidth, **kwargs)
    else:
        raise NotImplementedError
    ax.set_xlabel('x', fontsize=16)
    ax.set_ylabel('y', fontsize=16)
    ax.set_zlabel('z', fontsize=16)
    plt.axis('off')
plot_gaussian(kind='surface', linewidth=1)
```
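As a concrete illustration, here is a minimal sketch of applying 3D Gaussian smoothing to a preprocessed 4D BOLD image with scipy, converting a 6 mm FWHM into the kernel sigma in voxel units (the filename is hypothetical; fmriprep writes preprocessed images with names along these lines):
```
import numpy as np
import nibabel as nib
from scipy.ndimage import gaussian_filter

fwhm = 6  # mm; a common, but arbitrary, choice
img = nib.load('sub-S01_task-localizer_desc-preproc_bold.nii.gz')  # hypothetical path
data = img.get_fdata()                      # 4D array: x, y, z, time
voxel_size = img.header.get_zooms()[:3]     # mm per voxel in x, y, z

sigma_mm = fwhm / (2 * np.sqrt(2 * np.log(2)))          # FWHM = 2*sqrt(2*ln 2) * sigma
sigma_vox = [sigma_mm / vs for vs in voxel_size] + [0]  # no smoothing across time

smoothed = gaussian_filter(data, sigma=sigma_vox)
smoothed_img = nib.Nifti1Image(smoothed, img.affine, img.header)
nib.save(smoothed_img, 'sub-S01_task-localizer_desc-smoothed_bold.nii.gz')
```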
# fmriprep
Throughout this lab and course, you have frequently heard about [fmriprep](https://fmriprep.readthedocs.io/en/stable/), which is a functional magnetic resonance imaging (fMRI) data preprocessing pipeline that was developed by a team at the [Center for Reproducible Research](http://reproducibility.stanford.edu/) led by Russ Poldrack and Chris Gorgolewski. Fmriprep was designed to provide an easily accessible, state-of-the-art interface that is robust to variations in scan acquisition protocols, requires minimal user input, and provides easily interpretable and comprehensive error and output reporting. Fmriprep performs basic processing steps (coregistration, normalization, unwarping, noise component extraction, segmentation, skullstripping etc.) providing outputs that are ready for data analysis.
fmriprep was built on top of [nipype](https://nipype.readthedocs.io/en/latest/), which is a tool to build preprocessing pipelines in python using graphs. This provides a completely flexible way to create custom pipelines using any type of software while also facilitating easy parallelization of steps across the pipeline on high performance computing platforms. Nipype is completely flexible, but has a fairly steep learning curve and is best for researchers who have strong opinions about how they want to preprocess their data, or are working with nonstandard data that might require adjusting the preprocessing steps or parameters. In practice, most researchers typically use similar preprocessing steps and do not need to tweak the pipelines very often. In addition, many researchers do not fully understand how each preprocessing step will impact their results and would prefer if somebody else picked suitable defaults based on current best practices in the literature. The fmriprep pipeline uses a combination of tools from well-known software packages, including FSL, ANTs, FreeSurfer, and AFNI. This pipeline was designed to provide the best software implementation for each stage of preprocessing, and is quickly being updated as methods evolve and bugs are discovered by a growing user base.
fmriprep allows you to easily do the following:
- Take fMRI data from raw to fully preprocessed form.
- Implement tools from different software packages.
- Achieve optimal data processing quality by using the best tools available.
- Generate preprocessing quality reports, with which the user can easily identify outliers.
- Receive verbose output concerning the stage of preprocessing for each subject, including meaningful errors.
- Automate and parallelize processing steps, which provides a significant speed-up from typical linear, manual processing.
More information and documentation can be found at https://fmriprep.readthedocs.io/

## Running fmriprep
Running fmriprep is a (mostly) trivial process of running a single line in the command line specifying a few choices and locations for the output data. One of the annoying things about older neuroimaging software that was developed by academics is that the packages were developed using many different development environments and on different operating systems (e.g., unix, windows, mac). It can be a nightmare getting some of these packages to install on more modern computing systems. As fmriprep uses many different packages, they have made it much easier to circumvent the time-consuming process of installing many different packages by releasing a [docker container](https://fmriprep.readthedocs.io/en/stable/docker.html) that contains everything you need to run the pipeline.
Unfortunately, our AWS cloud instances running our jupyter server are not equipped with enough computational resources to run fmriprep at this time. However, if you're interested in running this on your local computer, here is the code you could use to run it in a jupyter notebook, or even better in the command line on a high performance computing environment.
```
import os
base_dir = '/Users/lukechang/Dropbox/Dartbrains/Data'
data_path = os.path.join(base_dir, 'localizer')
output_path = os.path.join(base_dir, 'preproc')
work_path = os.path.join(base_dir, 'work')
sub = 'S01'
subs = [f'S{x:0>2d}' for x in range(10)]
for sub in subs:
!fmriprep-docker {data_path} {output_path} participant --participant_label sub-{sub} --write-graph --fs-no-reconall --notrack --fs-license-file ~/Dropbox/Dartbrains/License/license.txt --work-dir {work_path}
```
## Quick primer on High Performance Computing
We could run fmriprep on our own computer, but this could take a long time if we have a lot of participants. Because we have a limited amount of computational resources on our laptops (e.g., CPUs and memory), we would have to run each participant sequentially. For example, if we had 50 participants, it would take 50 times longer to run all participants than a single one.
Imagine if you had 50 computers and ran each participant separately at the same time, in parallel across all of the computers. This would allow us to run 50 participants in the same amount of time as a single participant. This is the basic idea behind high performance computing, which uses a cluster of many computers that have been installed in racks. Below is a picture of what Dartmouth's [Discovery cluster](https://rc.dartmouth.edu/index.php/discovery-overview/) looks like:

A cluster is simply a collection of nodes. A node can be thought of as an individual computer. Each node contains processors, which encompass multiple cores. Discovery contains 3000+ cores, which is certainly a lot more than your laptop!
In order to submit a job, you can create a Portable Batch System (PBS) script that sets up the parameters (e.g., how much time you want your script to run, specifying directory to run, etc) and submits your job to a queue.
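As an illustration, here is a minimal sketch of generating and submitting one PBS job per subject from Python (the queue directives, resource limits, and paths are all hypothetical, and on most clusters fmriprep would be run through a Singularity image rather than Docker):
```
import subprocess

pbs_template = """#!/bin/bash
#PBS -N fmriprep_{sub}
#PBS -l nodes=1:ppn=4
#PBS -l walltime=24:00:00
cd $PBS_O_WORKDIR
singularity run /path/to/fmriprep.simg /path/to/bids /path/to/out participant --participant_label sub-{sub}
"""

for sub in [f'S{x:0>2d}' for x in range(10)]:
    script_name = f'fmriprep_{sub}.pbs'
    with open(script_name, 'w') as f:
        f.write(pbs_template.format(sub=sub))
    subprocess.run(['qsub', script_name])  # qsub places the job in the scheduler's queue
```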
**NOTE**: For this class, we will only be using the jupyterhub server, but if you end up working in a lab in the future, you will need to request access to the *discovery* system using this [link](https://rcweb.dartmouth.edu/accounts/).
## fmriprep output
You can see a summary of the operations fmriprep performed by examining the .html files in the `derivatives/fmriprep` folder within the `localizer` data directory.
We will load the first subject's output file. Spend some time looking at the outputs and feel free to examine other subjects as well. Currently, the first 10 subjects should be available on the jupyterhub.
```
from IPython.display import HTML
HTML('sub-S01.html')
```
# Limitations of fmriprep
In general, we recommend using this pipeline if you want a sensible default. Considerable thought has gone into selecting reasonable default parameters and selecting preprocessing steps based on best practices in the field (as determined by the developers). This is not necessarily the case for any of the default settings in any of the more conventional software packages (e.g., spm, fsl, afni, etc).
However, there is an important tradeoff in using this tool. On the one hand, it's nice in that it is incredibly straightforward to use (one line of code!), has excellent documentation, and is actively being developed to fix bugs and improve the overall functionality. There is also a growing user base to ask questions. [Neurostars](https://neurostars.org/) is an excellent forum to post questions and learn from others. On the other hand, fmriprep is unfortunately, in its current state, not easily customizable. If you disagree with the developers about the order or the specific preprocessing steps, it is very difficult to modify. Future versions will hopefully be more modular and make it easier to build custom pipelines. If you need this type of customizability, we strongly recommend using nipype over fmriprep.
In practice, it's always a little bit finicky to get everything set up on a particular system. Sometimes you might run into issues with a specific missing file like the [freesurfer license](https://fmriprep.readthedocs.io/en/stable/usage.html#the-freesurfer-license) even if you're not using it. You might also run into issues with the format of the data that might have some conflicts with the [bids-validator](https://github.com/bids-standard/bids-validator). In our experience, there are always some frustrations getting this to work, but it's very nice once it's done.
# Exercises
## Exercise 1. Inspect HTML output of other participants.
For this exercise, you will need to navigate to the derivatives folder containing the fmriprep preprocessed data `.../data/localizer/derivatives/fmriprep` and inspect the html output of other subjects (i.e., not 'S01'). Did the preprocessing steps work? Are there any issues with the data that we should be concerned about?
|
github_jupyter
|
from IPython.display import YouTubeVideo
YouTubeVideo('Qc3rRaJWOc4')
%matplotlib inline
from mpl_toolkits import mplot3d
import numpy as np
import matplotlib.pyplot as plt
from nibabel.affines import apply_affine, from_matvec, to_matvec
from scipy.ndimage import affine_transform, map_coordinates
import nibabel as nib
from ipywidgets import interact, FloatSlider
def plot_rigid_body_transformation(trans_x=0, trans_y=0, trans_z=0, rot_x=0, rot_y=0, rot_z=0):
'''This plot creates an interactive demo to illustrate the parameters of a rigid body transformation'''
fov = 30
radius = 10
x, y, z = np.indices((fov, fov, fov))
cube = ((x > fov//2 - radius//2) & (x < fov//2 + radius//2)) & ((y > fov//2 - radius//2) & (y < fov//2 + radius//2)) & ((z > fov//2 - radius//2) & (z < fov//2 + radius//2 ))
cube = cube.astype(int)
vec = np.array([trans_x, trans_y, trans_z])
rot_x = np.radians(rot_x)
rot_y = np.radians(rot_y)
rot_z = np.radians(rot_z)
rot_axis1 = np.array([[1, 0, 0],
[0, np.cos(rot_x), -np.sin(rot_x)],
[0, np.sin(rot_x), np.cos(rot_x)]])
rot_axis2 = np.array([[np.cos(rot_y), 0, np.sin(rot_y)],
[0, 1, 0],
[-np.sin(rot_y), 0, np.cos(rot_y)]])
rot_axis3 = np.array([[np.cos(rot_z), -np.sin(rot_z), 0],
[np.sin(rot_z), np.cos(rot_z), 0],
[0, 0, 1]])
rotation = rot_axis1 @ rot_axis2 @ rot_axis3
affine = from_matvec(rotation, vec)
i_coords, j_coords, k_coords = np.meshgrid(range(cube.shape[0]), range(cube.shape[1]), range(cube.shape[2]), indexing='ij')
coordinate_grid = np.array([i_coords, j_coords, k_coords])
coords_last = coordinate_grid.transpose(1, 2, 3, 0)
transformed = apply_affine(affine, coords_last)
coords_first = transformed.transpose(3, 0, 1, 2)
fig = plt.figure(figsize=(15, 12))
ax = plt.axes(projection='3d')
ax.voxels(map_coordinates(cube, coords_first))
ax.set_xlabel('x', fontsize=16)
ax.set_ylabel('y', fontsize=16)
ax.set_zlabel('z', fontsize=16)
interact(plot_rigid_body_transformation,
trans_x=FloatSlider(value=0, min=-10, max=10, step=1),
trans_y=FloatSlider(value=0, min=-10, max=10, step=1),
trans_z=FloatSlider(value=0, min=-10, max=10, step=1),
rot_x=FloatSlider(value=0, min=0, max=360, step=15),
rot_y=FloatSlider(value=0, min=0, max=360, step=15),
rot_z=FloatSlider(value=0, min=0, max=360, step=15))
from copy import deepcopy
def plot_affine_cost(trans_x=0, trans_y=0):
'''This function creates an interactive demo to highlight how a cost function works in image registration.'''
fov = 30
radius = 15
x, y = np.indices((fov, fov))
square1 = (x < radius-2) & (y < radius-2)
square2 = ((x > fov//2 - radius//2) & (x < fov//2 + radius//2)) & ((y > fov//2 - radius//2) & (y < fov//2 + radius//2))
square1 = square1.astype(float)
square2 = square2.astype(float)
vec = np.array([trans_y, trans_x])
affine = from_matvec(np.eye(2), vec)
i_coords, j_coords = np.meshgrid(range(square1.shape[0]), range(square1.shape[1]), indexing='ij')
coordinate_grid = np.array([i_coords, j_coords])
coords_last = coordinate_grid.transpose(1, 2, 0)
transformed = apply_affine(affine, coords_last)
coords_first = transformed.transpose(2, 0, 1)
transformed_square = map_coordinates(square1, coords_first)
f,a = plt.subplots(ncols=3, figsize=(15, 5))
a[0].imshow(transformed_square)
a[0].set_xlabel('x', fontsize=16)
a[0].set_ylabel('y', fontsize=16)
a[0].set_title('Target Image', fontsize=18)
a[1].imshow(square2)
a[1].set_xlabel('x', fontsize=16)
a[1].set_ylabel('y', fontsize=16)
a[1].set_title('Reference Image', fontsize=18)
point_x = deepcopy(trans_x)
point_y = deepcopy(trans_y)
sse = np.sum((transformed_square - square2)**2)
a[2].bar(0, sse)
a[2].set_ylim([0, 350])
a[2].set_ylabel('SSE', fontsize=18)
a[2].set_xlabel('Cost Function', fontsize=18)
a[2].set_xticks([])
a[2].set_title(f'Parameters: ({int(trans_x)},{int(trans_y)})', fontsize=20)
plt.tight_layout()
interact(plot_affine_cost,
trans_x=FloatSlider(value=0, min=-30, max=0, step=1),
trans_y=FloatSlider(value=0, min=-30, max=0, step=1))
from os.path import expanduser, join
home = expanduser('~')
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from bids import BIDSLayout, BIDSValidator
import os
data_dir = '%s/prj/DartBrains/data/localizer' % (home)
layout = BIDSLayout(data_dir, derivatives=True)
data = pd.read_csv(layout.get(subject='S01', scope='derivatives', extension='.tsv')[0].path, sep='\t')
f,a = plt.subplots(ncols=2, figsize=(15,5))
data.loc[:,['trans_x','trans_y','trans_z']].plot(ax=a[0])
a[0].set_ylabel('Translation (mm)', fontsize=16)
a[0].set_xlabel('Time (TR)', fontsize=16)
a[0].set_title('Translation', fontsize=18)
data.loc[:,['rot_x','rot_y','rot_z']].plot(ax=a[1])
a[1].set_ylabel('Rotation (radian)', fontsize=16)
a[1].set_xlabel('Time (TR)', fontsize=16)
a[1].set_title('Rotation', fontsize=18)
YouTubeVideo('qamRGWSC-6g')
def plot_gaussian(sigma=2, kind='surface', cmap='viridis', linewidth=1, **kwargs):
'''Generates a 3D matplotlib plot of a Gaussian distribution'''
mean=0
domain=10
x = np.arange(-domain + mean, domain + mean, sigma/10)
y = np.arange(-domain + mean, domain + mean, sigma/10)
x, y = np.meshgrid(x, x)
r = (x ** 2 + y ** 2) / (2 * sigma ** 2)
z = 1 / (np.pi * sigma ** 4) * (1 - r) * np.exp(-r)
fig = plt.figure(figsize=(12, 6))
ax = plt.axes(projection='3d')
if kind=='wire':
ax.plot_wireframe(x, y, z, cmap=cmap, linewidth=linewidth, **kwargs)
elif kind=='surface':
ax.plot_surface(x, y, z, cmap=cmap, linewidth=linewidth, **kwargs)
else:
NotImplemented
ax.set_xlabel('x', fontsize=16)
ax.set_ylabel('y', fontsize=16)
ax.set_zlabel('z', fontsize=16)
plt.axis('off')
plot_gaussian(kind='surface', linewidth=1)
import os
base_dir = '/Users/lukechang/Dropbox/Dartbrains/Data'
data_path = os.path.join(base_dir, 'localizer')
output_path = os.path.join(base_dir, 'preproc')
work_path = os.path.join(base_dir, 'work')
sub = 'S01'
subs = [f'S{x:0>2d}' for x in range(10)]
for sub in subs:
!fmriprep-docker {data_path} {output_path} participant --participant_label sub-{sub} --write-graph --fs-no-reconall --notrack --fs-license-file ~/Dropbox/Dartbrains/License/license.txt --work-dir {work_path}
from IPython.display import HTML
HTML('sub-S01.html')
| 0.666822 | 0.986879 |
```
import pandas as pd
import numpy as np
from statsmodels.graphics.regressionplots import influence_plot
import matplotlib.pyplot as plt
toyota = pd.read_csv("ToyotaCorolla.csv")
toyota.shape
toyota1= toyota.iloc[:,[2,3,6,8,12,13,15,16,17]]
toyota1.rename(columns={"Age_08_04":"Age"},inplace=True)
toyota1.corr()
import seaborn as sns
sns.set_style(style ="darkgrid")
sns.pairplot(toyota1)
import statsmodels.formula.api as smf
model1 = smf.ols('Price~Age+KM+HP+Doors+cc+Gears+Quarterly_Tax+Weight', data=toyota1).fit()
model1.summary()
model_influence = model1.get_influence()
(c, _) = model_influence.cooks_distance
c
fig = plt.subplots(figsize=(20,7))
plt.stem(np.arange(len(toyota)), np.round(c,3))
plt.xlabel('Row Index')
plt.ylabel('Cooks Distance')
plt.show()
np.argmax(c), np.max(c)
from statsmodels.graphics.regressionplots import influence_plot
influence_plot(model1)
plt.show()
k = toyota1.shape[1]
n = toyota1.shape[0]
leverage_cutoff = 3*((k+1)/n)
leverage_cutoff
toyota_new = toyota1.drop(toyota1.index[[80,960,221,601]],axis=0).reset_index()
toyota3=toyota_new.drop(['index'], axis=1)
toyota3
import statsmodels.formula.api as smf
model2 = smf.ols('Price~Age+KM+HP+Doors+cc+Gears+Quarterly_Tax+Weight', data=toyota3).fit()
model2.summary()
finalmodel = smf.ols("Price~Age+KM+HP+cc+Doors+Gears+Quarterly_Tax+Weight", data = toyota3).fit()
finalmodel.summary()
finalmodel_pred = finalmodel.predict(toyota3)
plt.scatter(toyota3["Price"],finalmodel_pred,color='blue');plt.xlabel("Observed values");plt.ylabel("Predicted values")
plt.scatter(finalmodel_pred, finalmodel.resid_pearson,color='red');
plt.axhline(y=0,color='blue');
plt.xlabel("Fitted values");
plt.ylabel("Residuals")
plt.hist(finalmodel.resid_pearson)
import pylab
import scipy.stats as st
st.probplot(finalmodel.resid_pearson, dist='norm',plot=pylab)
new_data=pd.DataFrame({'Age':25,'KM':40000,'HP':80,'cc':1500,'Doors':3,'Gears':5,'Quarterly_Tax':180,'Weight':1050}, index=[1])
new_data
finalmodel.predict(new_data)
pred_y=finalmodel.predict(toyota1)
pred_y
```
# training and testing the data
```
from sklearn.model_selection import train_test_split
train_data,test_Data= train_test_split(toyota1,test_size=0.3)
finalmodel1 = smf.ols("Price~Age+KM+HP+cc+Doors+Gears+Quarterly_Tax+Weight", data = train_data).fit()
finalmodel1.summary()
finalmodel_pred = finalmodel1.predict(train_data)
finalmodel_res = train_data["Price"]-finalmodel_pred
finalmodel_rmse = np.sqrt(np.mean(finalmodel_res*finalmodel_res))
finalmodel_testpred = finalmodel1.predict(test_Data)
finalmodel_testres= test_Data["Price"]-finalmodel_testpred
finalmodel_testrmse = np.sqrt(np.mean(finalmodel_testres*finalmodel_testres))
finalmodel_testrmse
```
|
github_jupyter
|
import pandas as pd
import numpy as np
from statsmodels.graphics.regressionplots import influence_plot
import matplotlib.pyplot as plt
toyota = pd.read_csv("ToyotaCorolla.csv")
toyota.shape
toyota1= toyota.iloc[:,[2,3,6,8,12,13,15,16,17]]
toyota1.rename(columns={"Age_08_04":"Age"},inplace=True)
toyota1.corr()
import seaborn as sns
sns.set_style(style ="darkgrid")
sns.pairplot(toyota1)
import statsmodels.formula.api as smf
model1 = smf.ols('Price~Age+KM+HP+Doors+cc+Gears+Quarterly_Tax+Weight', data=toyota1).fit()
model1.summary()
model_influence = model1.get_influence()
(c, _) = model_influence.cooks_distance
c
fig = plt.subplots(figsize=(20,7))
plt.stem(np.arange(len(toyota)), np.round(c,3))
plt.xlabel('Row Index')
plt.ylabel('Cooks Distance')
plt.show()
np.argmax(c), np.max(c)
from statsmodels.graphics.regressionplots import influence_plot
influence_plot(model1)
plt.show()
k = toyota1.shape[1]
n = toyota1.shape[0]
leverage_cutoff = 3*((k+1)/n)
leverage_cutoff
toyota_new = toyota1.drop(toyota1.index[[80,960,221,601]],axis=0).reset_index()
toyota3=toyota_new.drop(['index'], axis=1)
toyota3
import statsmodels.formula.api as smf
model2 = smf.ols('Price~Age+KM+HP+Doors+cc+Gears+Quarterly_Tax+Weight', data=toyota3).fit()
model2.summary()
finalmodel = smf.ols("Price~Age+KM+HP+cc+Doors+Gears+Quarterly_Tax+Weight", data = toyota3).fit()
finalmodel.summary()
finalmodel_pred = finalmodel.predict(toyota3)
plt.scatter(toyota3["Price"],finalmodel_pred,color='blue');plt.xlabel("Observed values");plt.ylabel("Predicted values")
plt.scatter(finalmodel_pred, finalmodel.resid_pearson,color='red');
plt.axhline(y=0,color='blue');
plt.xlabel("Fitted values");
plt.ylabel("Residuals")
plt.hist(finalmodel.resid_pearson)
import pylab
import scipy.stats as st
st.probplot(finalmodel.resid_pearson, dist='norm',plot=pylab)
new_data=pd.DataFrame({'Age':25,'KM':40000,'HP':80,'cc':1500,'Doors':3,'Gears':5,'Quarterly_Tax':180,'Weight':1050}, index=[1])
new_data
finalmodel.predict(new_data)
pred_y=finalmodel.predict(toyota1)
pred_y
from sklearn.model_selection import train_test_split
train_data,test_Data= train_test_split(toyota1,test_size=0.3)
finalmodel1 = smf.ols("Price~Age+KM+HP+cc+Doors+Gears+Quarterly_Tax+Weight", data = train_data).fit()
finalmodel1.summary()
finalmodel_pred = finalmodel1.predict(train_data)
finalmodel_res = train_data["Price"]-finalmodel_pred
finalmodel_rmse = np.sqrt(np.mean(finalmodel_res*finalmodel_res))
finalmodel_testpred = finalmodel1.predict(test_Data)
finalmodel_testres= test_Data["Price"]-finalmodel_testpred
finalmodel_testrmse = np.sqrt(np.mean(finalmodel_testres*finalmodel_testres))
finalmodel_testrmse
| 0.503418 | 0.679089 |
<a href="https://colab.research.google.com/github/novoda/spikes/blob/master/sentiment-analysis/Sentiment_Analysis_for_the_general_Slack_channel.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
**Sentiment Analysis for Slack channels**
What we will be doing:
1. Retrieve the `channel_id` for `#general`
2. Retrieve the channel history from 01/06/2019
3. Filter out non-user messages
4. Run sentiment analysis on the list of retrieved messages. Values `<0` are for negative sentiment, values `>0` for positive sentiment.
5. Some data processing to group by date and split the data in two columns, one for `number of positive messages` and one for `number of negative messages`
6. Plot the data!
---------------------
First of all, let's install `slack_sdk` package, which is what we'll be using to make calls to the Slack API
```
!pip install slack_sdk
```
Now, let's import the dependencies for our project
```
import logging
import os
from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer
from datetime import datetime
import pandas as pd
import matplotlib.pyplot as plt
```
Instantiate a logger for the possible error messages
```
logger = logging.getLogger(__name__)
```
-------------------------------
**Data Retrieval**
We will instantiate `client` for making calls to the Slack API
`token` is the token for your Slack App or Bot.
When creating your App or Bot there is something to consider: the OAuth `channels:history` and `channels:read` scopes need to be added to either `Bot Token Scopes` or `User Token` scopes. This is done under your app management settings, in `OAuth & Permissions`.
```
client = WebClient(token="YOUR-API-TOKEN-HERE")
```
Now we need to retrieve the `channel_id` which identifies the channel on which we are willing to perform the sentiment analysis.
1. Request `conversations_list`
2. Loop through the list of channels until we find a match by `name`
3. Get the `id` for the channel that matched the `name`
```
channel_id = None
target = 'general'
try:
result = client.conversations_list()
for response in result:
for channel in result["channels"]:
if channel["name"] == target:
channel_id = channel["id"]
break
except SlackApiError as e:
logger.error("Error: {}".format(e))
```
Once we have the `channel_id` for the channel we are going to be working on, it's time to retrieve the channel history.
We retrieve messages from 01/06/2019 onwards, but there is a limitation on the `conversations_history` method: it returns at most `1000` messages per call, so we need to perform multiple calls, each time moving the `oldest` timestamp forward to the most recent message already retrieved, until no new messages are returned.
```
conversation_history = []
try:
# Call the conversations.history method using the WebClient
# conversations.history returns the first 100 messages by default, with a maximum of 1000
pending_messages = True
start_date = 1559347200 # 1 Jun 2019
conversation_history = []
while pending_messages:
result = client.conversations_history(channel=channel_id, inclusive=False, oldest=start_date, count=1000)["messages"]
conversation_history = conversation_history + result
if len(result) == 0:
pending_messages = False
else:
start_date = result[0].get('ts')
pending_messages = True
except SlackApiError as e:
logger.error("Error: {}".format(e))
```
The next thing we do is filter out everything which is not a message (like notifications) as well as messages that are not from users (those have no `client_msg_id` associated with them).
```
filtered_history = list(
filter(
lambda x: x.get('type') == 'message' and x.get('client_msg_id') is not None,
conversation_history
)
)
```
And a matrix is created with `date` and `text` of the message as columns
```
conversation_history_messages_by_date = list(
map(
lambda x: [datetime.fromtimestamp(float(x.get('ts'))).strftime("%Y%m%d"), x.get('text')],
filtered_history
)
)
conversation_history_messages_by_date
```
---------------------------------
**Sentiment Analysis**
The first thing we are going to do for this approach is download the `VADER` lexicon, an ordered series of words mapped to the sentiment they convey. `VADER` stands for Valence Aware Dictionary and sEntiment Reasoner.
This is going to be used by the [Natural Language Toolkit](https://www.nltk.org/), `nltk`, in order to analyse the sentiment of messages.
```
nltk.download('vader_lexicon')
```
Once the dictionary is downloaded, we instantiate our `SentimentIntensityAnalyzer`
```
analyzer = SentimentIntensityAnalyzer()
```
And we analyze sentiments. We map the `conversation_history_messages_by_date` so the `message` column is transformed to `sentiment`.
A value under 0 is considered a negative sentiment
A value over 0 is considered a positive sentiment
```
conversation_history_sentiments_by_date = list(
map(
lambda x: [x[0], analyzer.polarity_scores(x[1]).get('compound')],
conversation_history_messages_by_date
),
)
```
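To get a feel for what the `compound` score looks like, here is a quick check on single sentences (the exact numbers depend on the lexicon version and are only illustrative):
```
print(analyzer.polarity_scores("I love this team, great work everyone!"))
# e.g. {'neg': 0.0, 'neu': ..., 'pos': ..., 'compound': 0.87}  -> positive
print(analyzer.polarity_scores("This deploy was a complete disaster."))
# e.g. {'neg': ..., 'neu': ..., 'pos': 0.0, 'compound': -0.63} -> negative
```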
Finally, for data representation, we are grouping the data by `date` and creating two separate columns.
1. `negative` containing the number of negative messages on a date
2. `positive` containing the number of positive messages on a date
As a convenient way of doing this, [pandas](https://pandas.pydata.org/) was used. Pandas is a data analysis library for Python.
```
net_sentiment_by_date = pd.DataFrame(conversation_history_sentiments_by_date, columns=["date", "sentiment"])
net_sentiment_by_date = net_sentiment_by_date.groupby('date')['sentiment'].agg(
positive=lambda x: x.gt(0).sum(),
negative=lambda x: x.lt(0).sum()
)
net_sentiment_by_date
```
And we plot our data. We are done!
Check out the [pandas plotting](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.plot.html) API reference page for other available plotting options.
```
net_sentiment_by_date[['positive', 'negative']].plot(figsize=(50,25))
plt.show()
```
|
github_jupyter
|
!pip install slack_sdk
import logging
import os
from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer
from datetime import datetime
import pandas as pd
import matplotlib.pyplot as plt
logger = logging.getLogger(__name__)
client = WebClient(token="YOUR-API-TOKEN-HERE")
channel_id = None
target = 'general'
try:
result = client.conversations_list()
for response in result:
for channel in result["channels"]:
if channel["name"] == target:
channel_id = channel["id"]
break
except SlackApiError as e:
logger.error("Error: {}".format(e))
conversation_history = []
try:
# Call the conversations.history method using the WebClient
# conversations.history returns the first 100 messages by default, with a maximum of 1000
pending_messages = True
start_date = 1559347200 # 1 Jun 2019
conversation_history = []
while pending_messages:
result = client.conversations_history(channel=channel_id, inclusive=False, oldest=start_date, count=1000)["messages"]
conversation_history = conversation_history + result
if len(result) == 0:
pending_messages = False
else:
start_date = result[0].get('ts')
pending_messages = True
except SlackApiError as e:
logger.error("Error: {}".format(e))
filtered_history = list(
filter(
lambda x: x.get('type') == 'message' and x.get('client_msg_id') is not None,
conversation_history
)
)
conversation_history_messages_by_date = list(
map(
lambda x: [datetime.fromtimestamp(float(x.get('ts'))).strftime("%Y%m%d"), x.get('text')],
filtered_history
)
)
conversation_history_messages_by_date
nltk.download('vader_lexicon')
analyzer = SentimentIntensityAnalyzer()
conversation_history_sentiments_by_date = list(
map(
lambda x: [x[0], analyzer.polarity_scores(x[1]).get('compound')],
conversation_history_messages_by_date
),
)
net_sentiment_by_date = pd.DataFrame(conversation_history_sentiments_by_date, columns=["date", "sentiment"])
net_sentiment_by_date = net_sentiment_by_date.groupby('date')['sentiment'].agg(
positive=lambda x: x.gt(0).sum(),
negative=lambda x: x.lt(0).sum()
)
net_sentiment_by_date
net_sentiment_by_date[['positive', 'negative']].plot(figsize=(50,25))
plt.show()
| 0.47658 | 0.967778 |
```
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
import collections
import itertools
import pickle
import glob
!unzip titles_authors.zip
authors_names = glob.glob("titles_authors/authors*")
titles_path = glob.glob("titles_authors/titles*")
```
### Opening Data
```
authors = []
for author_name in authors_names:
with open(author_name,"rb") as f:
dat = pickle.load(f)
authors+=dat
titles = []
for author_name in titles_path:
with open(author_name,"rb") as f:
dat = pickle.load(f)
titles+=dat
print(len(titles))
print(len(authors))
auts = []
for author_list in authors:
auts+=author_list
auts_set = set(auts)
auts_list = list(auts_set)
```
### Defining network
```
G = nx.Graph()
id_to_authors_dict = {idx:elem for idx, elem in enumerate(auts_list)}
authors_to_id_dict = {elem:idx for idx, elem in enumerate(auts_list)}
for idx, elem in enumerate(authors):
if idx%10000 == 0:
print(idx)
if len(elem)>1:
comb_list = list(itertools.combinations(elem, 2))
for combination in comb_list:
G.add_edge(authors_to_id_dict[combination[0]],authors_to_id_dict[combination[1]])
avg_degree_connect = nx.average_degree_connectivity(G)
degrees = G.degree()
avg_degree = np.mean(list(dict(degrees).values()))
```
##### Avg Degree
```
avg_degree
```
##### Number of Edges
```
G.number_of_edges()
components = nx.connected_components(G)
idx = 0
for comps in components:
idx+=1
```
##### Giant Component
```
giant = max(nx.connected_components(G), key=len)
len(giant)
```
##### Largest component as a fraction of all nodes
```
len(giant)/len(G)
```
##### Degree Distribution
```
degrees = [val for (node, val) in G.degree()]
degree_sequence = sorted(degrees, reverse=True) # degree sequence
degreeCount = collections.Counter(degree_sequence)
deg, cnt = zip(*degreeCount.items())
cnt = np.array(list(cnt))
cnt = cnt / cnt.sum()
fig, ax = plt.subplots(1)
plt.yscale("log")
plt.xscale("log")
ax.set_xlim(left=0, right=100)
plt.xlabel('k')
plt.ylabel('pk')
ax.scatter(deg, cnt, color="b")
plt.show()
```
##### Fitting Power law
```
import powerlaw
fit = powerlaw.Fit(degrees)
fig2 = fit.plot_pdf(color='b', linewidth=2)
plt.xlabel('k')
plt.ylabel('pk')
fit.power_law.plot_pdf(color='g', linestyle='--', ax=fig2)
fit.power_law.alpha, fit.power_law.sigma, fit.xmin
```
##### Average Clustering
```
nx.average_clustering(G)
```
### Most connected authors (highest degree)
```
degrees_dict = {node: val for (node, val) in G.degree()}
names = []
for name, degree_cnt in degrees_dict.items():
if degree_cnt in list(deg[:10]):
names.append(name)
# break
deg[:10]
names
for name in names:
print(id_to_authors_dict[name])
```
### Shortest Path By Name
```
def shortest_path(G, name_1, name_2):
try:
au1 = authors_to_id_dict[name_1]
except:
raise KeyError(f"No person {name_1} found in authors")
try:
au2 = authors_to_id_dict[name_2]
except:
raise KeyError(f"No person {name_2} found in authors")
path = nx.shortest_path(G, au1, au2)
return [id_to_authors_dict[idx] for idx in path]
shortest_path(G, "Alexandre Lopes","Miguel A. L. Nicolelis")
shortest_path(G, "Rodrigo Nogueira", "Albert-László Barabási")
```
### Are you in largest component?
```
def you_in_largest(giant, name):
try:
au1 = authors_to_id_dict[name]
except:
raise KeyError(f"No person {name} found in authors")
return au1 in giant
you_in_largest(giant, "João Meidanis")
you_in_largest(giant, "Alexandre Lopes")
```
### Closest Name Match (fuzzy search for misspelled names) - SLOW
```
import difflib
def find_closest(auts_list, input_name, elements=1):
return difflib.get_close_matches(input_name, auts_list)[:elements]
find_closest(auts_list, "Miguel Nicolelis", 3)
shortest_path(G, "Alexandre Lopes", "Miguel A. L. Nicolelis")
```
|
github_jupyter
|
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
import collections
import itertools
import pickle
import glob
!unzip titles_authors.zip
authors_names = glob.glob("titles_authors/authors*")
titles_path = glob.glob("titles_authors/titles*")
authors = []
for author_name in authors_names:
with open(author_name,"rb") as f:
dat = pickle.load(f)
authors+=dat
titles = []
for author_name in titles_path:
with open(author_name,"rb") as f:
dat = pickle.load(f)
titles+=dat
print(len(titles))
print(len(authors))
auts = []
for author_list in authors:
auts+=author_list
auts_set = set(auts)
auts_list = list(auts_set)
G = nx.Graph()
id_to_authors_dict = {idx:elem for idx, elem in enumerate(auts_list)}
authors_to_id_dict = {elem:idx for idx, elem in enumerate(auts_list)}
for idx, elem in enumerate(authors):
if idx%10000 == 0:
print(idx)
if len(elem)>1:
comb_list = list(itertools.combinations(elem, 2))
for combination in comb_list:
G.add_edge(authors_to_id_dict[combination[0]],authors_to_id_dict[combination[1]])
avg_degree_connect = nx.average_degree_connectivity(G)
degrees = G.degree()
avg_degree = np.mean(list(dict(degrees).values()))
avg_degree
G.number_of_edges()
components = nx.connected_components(G)
idx = 0
for comps in components:
idx+=1
giant = max(nx.connected_components(G), key=len)
len(giant)
len(giant)/len(G)
degrees = [val for (node, val) in G.degree()]
degree_sequence = sorted(degrees, reverse=True) # degree sequence
degreeCount = collections.Counter(degree_sequence)
deg, cnt = zip(*degreeCount.items())
cnt = np.array(list(cnt))
cnt = cnt / cnt.sum()
fig, ax = plt.subplots(1)
plt.yscale("log")
plt.xscale("log")
ax.set_xlim(left=0, right=100)
plt.xlabel('k')
plt.ylabel('pk')
ax.scatter(deg, cnt, color="b")
plt.show()
import powerlaw
fit = powerlaw.Fit(degrees)
fig2 = fit.plot_pdf(color='b', linewidth=2)
plt.xlabel('k')
plt.ylabel('pk')
fit.power_law.plot_pdf(color='g', linestyle='--', ax=fig2)
fit.power_law.alpha, fit.power_law.sigma, fit.xmin
nx.average_clustering(G)
degrees_dict = {node: val for (node, val) in G.degree()}
names = []
for name, degree_cnt in degrees_dict.items():
if degree_cnt in list(deg[:10]):
names.append(name)
# break
deg[:10]
names
for name in names:
print(id_to_authors_dict[name])
def shortest_path(G, name_1, name_2):
try:
au1 = authors_to_id_dict[name_1]
except:
raise KeyError(f"No person {name_1} found in authors")
try:
au2 = authors_to_id_dict[name_2]
except:
raise KeyError(f"No person {name_2} found in authors")
path = nx.shortest_path(G, au1, au2)
return [id_to_authors_dict[idx] for idx in path]
shortest_path(G, "Alexandre Lopes","Miguel A. L. Nicolelis")
shortest_path(G, "Rodrigo Nogueira", "Albert-László Barabási")
def you_in_largest(giant, name):
try:
au1 = authors_to_id_dict[name]
except:
raise KeyError(f"No person {name} found in authors")
return au1 in giant
you_in_largest(giant, "João Meidanis")
you_in_largest(giant, "Alexandre Lopes")
import difflib
def find_closest(auts_list, input_name, elements=1):
return difflib.get_close_matches(input_name, auts_list)[:elements]
find_closest(auts_list, "Miguel Nicolelis", 3)
shortest_path(G, "Alexandre Lopes", "Miguel A. L. Nicolelis")
| 0.173498 | 0.767559 |
<a href="https://colab.research.google.com/github/PHoepner/big-sleep/blob/main/Looped_Gif_Creator.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Just enter your text in the form below and select Runtime -> Run all.
It will play a pleasant beep and attempt to download the file.
You should be able to just run this as-is. I can try to answer questions at https://www.reddit.com/user/exquisite_corpsed
```
#@title Text to dream from
DreamText = "man" #@param {type:"string"}
DreamText2 = "bear" #@param {type:"string"}
DreamText3 = "pig" #@param {type:"string"}
!git clone https://github.com/phoepner/big-sleep.git
!pip install ftfy
!pip install boto3
!pip install einops
!apt install gifsicle
import subprocess
for _ in range(2):
CUDA_version = [s for s in subprocess.check_output(["nvcc", "--version"]).decode("UTF-8").split(", ") if s.startswith("release")][0].split(" ")[-1]
print("CUDA version:", CUDA_version)
if CUDA_version == "10.0":
torch_version_suffix = "+cu100"
elif CUDA_version == "10.1":
torch_version_suffix = "+cu101"
elif CUDA_version == "10.2":
torch_version_suffix = ""
else:
torch_version_suffix = "+cu110"
!pip install torch==1.7.1{torch_version_suffix} torchvision==0.8.2{torch_version_suffix} -f https://download.pytorch.org/whl/torch_stable.html ftfy regex
%cd /content/big-sleep
SearchText = "\"" + DreamText + "\""
!python gen_gif.py $SearchText\
SearchText2 = "\"" + DreamText2 + "\""
!python gen_gif.py $SearchText2\
SearchText3 = "\"" + DreamText3 + "\""
!python gen_gif.py $SearchText3\
from google.colab import files
from google.colab import output
fileName = DreamText.replace(" ", "_")
fileName2 = DreamText2.replace(" ", "_")
fileName3 = DreamText3.replace(" ", "_")
FileStartIndex = 99
NumberOfFrames = 30
FirstPathIndex = 99
SecondPathIndex = 0
!python /content/big-sleep/interpolation.py --lat1 /content/big-sleep/$fileName\.$FirstPathIndex\.pth --lat2 /content/big-sleep/$fileName2\.$SecondPathIndex\.pth --steps $NumberOfFrames --startFileIndex $FileStartIndex --fileName $fileName
!python /content/big-sleep/interpolation.py --lat1 /content/big-sleep/$fileName2\.$FirstPathIndex\.pth --lat2 /content/big-sleep/$fileName3\.$SecondPathIndex\.pth --steps $NumberOfFrames --startFileIndex $FileStartIndex --fileName $fileName2
!python /content/big-sleep/interpolation.py --lat1 /content/big-sleep/$fileName3\.$FirstPathIndex\.pth --lat2 /content/big-sleep/$fileName\.$SecondPathIndex\.pth --steps $NumberOfFrames --startFileIndex $FileStartIndex --fileName $fileName3
!ffmpeg -start_number 0 -i $fileName\.%d.png $fileName\.gif
!ffmpeg -start_number 0 -i $fileName2\.%d.png $fileName2\.gif
!ffmpeg -start_number 0 -i $fileName3\.%d.png $fileName3\.gif
!gifsicle $fileName\.gif $fileName2\.gif $fileName3\.gif > combo.gif
files.download("combo.gif")
output.eval_js('new Audio("https://upload.wikimedia.org/wikipedia/commons/0/05/Beep-09.ogg").play()')
```
|
github_jupyter
|
#@title Text to dream from
DreamText = "man" #@param {type:"string"}
DreamText2 = "bear" #@param {type:"string"}
DreamText3 = "pig" #@param {type:"string"}
!git clone https://github.com/phoepner/big-sleep.git
!pip install ftfy
!pip install boto3
!pip install einops
!apt install gifsicle
import subprocess
for _ in range(2):
CUDA_version = [s for s in subprocess.check_output(["nvcc", "--version"]).decode("UTF-8").split(", ") if s.startswith("release")][0].split(" ")[-1]
print("CUDA version:", CUDA_version)
if CUDA_version == "10.0":
torch_version_suffix = "+cu100"
elif CUDA_version == "10.1":
torch_version_suffix = "+cu101"
elif CUDA_version == "10.2":
torch_version_suffix = ""
else:
torch_version_suffix = "+cu110"
!pip install torch==1.7.1{torch_version_suffix} torchvision==0.8.2{torch_version_suffix} -f https://download.pytorch.org/whl/torch_stable.html ftfy regex
%cd /content/big-sleep
SearchText = "\"" + DreamText + "\""
!python gen_gif.py $SearchText\
SearchText2 = "\"" + DreamText2 + "\""
!python gen_gif.py $SearchText2\
SearchText3 = "\"" + DreamText3 + "\""
!python gen_gif.py $SearchText3\
from google.colab import files
from google.colab import output
fileName = DreamText.replace(" ", "_")
fileName2 = DreamText2.replace(" ", "_")
fileName3 = DreamText3.replace(" ", "_")
FileStartIndex = 99
NumberOfFrames = 30
FirstPathIndex = 99
SecondPathIndex = 0
!python /content/big-sleep/interpolation.py --lat1 /content/big-sleep/$fileName\.$FirstPathIndex\.pth --lat2 /content/big-sleep/$fileName2\.$SecondPathIndex\.pth --steps $NumberOfFrames --startFileIndex $FileStartIndex --fileName $fileName
!python /content/big-sleep/interpolation.py --lat1 /content/big-sleep/$fileName2\.$FirstPathIndex\.pth --lat2 /content/big-sleep/$fileName3\.$SecondPathIndex\.pth --steps $NumberOfFrames --startFileIndex $FileStartIndex --fileName $fileName2
!python /content/big-sleep/interpolation.py --lat1 /content/big-sleep/$fileName3\.$FirstPathIndex\.pth --lat2 /content/big-sleep/$fileName\.$SecondPathIndex\.pth --steps $NumberOfFrames --startFileIndex $FileStartIndex --fileName $fileName3
!ffmpeg -start_number 0 -i $fileName\.%d.png $fileName\.gif
!ffmpeg -start_number 0 -i $fileName2\.%d.png $fileName2\.gif
!ffmpeg -start_number 0 -i $fileName3\.%d.png $fileName3\.gif
!gifsicle $fileName\.gif $fileName2\.gif $fileName3\.gif > combo.gif
files.download("combo.gif")
output.eval_js('new Audio("https://upload.wikimedia.org/wikipedia/commons/0/05/Beep-09.ogg").play()')
| 0.356671 | 0.742445 |
<center>
<img src="../../img/ods_stickers.jpg">
## Open Machine Learning Course
<center>Author: [Yury Kashnitsky](https://www.linkedin.com/in/festline/), Data Scientist @ Mail.Ru Group <br>All content is distributed under the [Creative Commons CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.
# <center> Assignment #10 (demo)
## <center> Gradient boosting
Your task is to beat at least 2 benchmarks in this [Kaggle Inclass competition](https://www.kaggle.com/c/flight-delays-spring-2018). Here you won’t be provided with detailed instructions. We only give you a brief description of how the second benchmark was achieved using Xgboost. Hopefully, at this stage of the course, it's enough for you to take a quick look at the data in order to understand that this is the type of task where gradient boosting will perform well. Most likely it will be Xgboost, however, we’ve got plenty of categorical features here.
<img src='../../img/xgboost_meme.jpg' width=40% />
```
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier
train = pd.read_csv("../../data/flight_delays_train.csv")
test = pd.read_csv("../../data/flight_delays_test.csv")
train.head()
test.head()
```
Given the flight departure time, carrier's code, departure airport, destination location, and flight distance, you have to predict whether the departure will be delayed by more than 15 minutes. As the simplest benchmark, let's take an Xgboost classifier and the two features that are easiest to use: DepTime and Distance. Such a model results in 0.68202 on the LB.
```
X_train = train[["Distance", "DepTime"]].values
y_train = train["dep_delayed_15min"].map({"Y": 1, "N": 0}).values
X_test = test[["Distance", "DepTime"]].values
X_train_part, X_valid, y_train_part, y_valid = train_test_split(
X_train, y_train, test_size=0.3, random_state=17
)
```
We'll train Xgboost with default parameters on part of the data and estimate holdout ROC AUC.
```
xgb_model = XGBClassifier(seed=17)
xgb_model.fit(X_train_part, y_train_part)
xgb_valid_pred = xgb_model.predict_proba(X_valid)[:, 1]
roc_auc_score(y_valid, xgb_valid_pred)
```
Now we do the same with the whole training set, make predictions on the test set and form a submission file. This is how you beat the first benchmark.
```
xgb_model.fit(X_train, y_train)
xgb_test_pred = xgb_model.predict_proba(X_test)[:, 1]
pd.Series(xgb_test_pred, name="dep_delayed_15min").to_csv(
"xgb_2feat.csv", index_label="id", header=True
)
```
The second benchmark in the leaderboard was achieved as follows (a rough sketch of these steps is given right after the list):
- Features `Distance` and `DepTime` were taken unchanged
- A feature `Flight` was created from features `Origin` and `Dest`
- Features `Month`, `DayofMonth`, `DayOfWeek`, `UniqueCarrier` and `Flight` were transformed with OHE (`LabelBinarizer`)
- Logistic regression and gradient boosting (xgboost) were trained. Xgboost hyperparameters were tuned via cross-validation: first, the hyperparameters responsible for model complexity were optimized, then the number of trees was fixed at 500 and the learning rate was tuned.
- Predicted probabilities were produced via cross-validation using `cross_val_predict`. A linear mixture of logistic regression and gradient boosting predictions was formed as $w_1 * p_{logit} + (1 - w_1) * p_{xgb}$, where $p_{logit}$ is the probability of class 1 predicted by logistic regression, and $p_{xgb}$ is the same for xgboost. The weight $w_1$ was selected manually.
- A similar combination of predictions was made for the test set.
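Below is a rough sketch of those steps (not the author's exact code: it uses `OneHotEncoder` instead of `LabelBinarizer` for brevity, fits on the full training set rather than using `cross_val_predict`, and the hyperparameter values and mixture weight are placeholders):
```
from scipy.sparse import hstack, csr_matrix
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier

# combined Origin+Dest feature, plus the other categorical columns
for df in (train, test):
    df["Flight"] = df["Origin"] + "-->" + df["Dest"]
cat_cols = ["Month", "DayofMonth", "DayOfWeek", "UniqueCarrier", "Flight"]

ohe = OneHotEncoder(handle_unknown="ignore")
scaler = StandardScaler()
X_tr = hstack([csr_matrix(scaler.fit_transform(train[["Distance", "DepTime"]])),
               ohe.fit_transform(train[cat_cols])]).tocsr()
X_te = hstack([csr_matrix(scaler.transform(test[["Distance", "DepTime"]])),
               ohe.transform(test[cat_cols])]).tocsr()

logit = LogisticRegression(C=1.0, random_state=17, max_iter=500)     # placeholder C
bst = XGBClassifier(n_estimators=500, learning_rate=0.05, seed=17)   # placeholder values
logit.fit(X_tr, y_train)
bst.fit(X_tr, y_train)

w1 = 0.3  # mixture weight, chosen manually in the original solution
blend = w1 * logit.predict_proba(X_te)[:, 1] + (1 - w1) * bst.predict_proba(X_te)[:, 1]
```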
Following the same steps is not mandatory. That’s just a description of how the result was achieved by the author of this assignment. Perhaps you might not want to follow the same steps, and instead, let’s say, add a couple of good features and train a random forest of a thousand trees.
Good luck!
|
github_jupyter
|
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier
train = pd.read_csv("../../data/flight_delays_train.csv")
test = pd.read_csv("../../data/flight_delays_test.csv")
train.head()
test.head()
X_train = train[["Distance", "DepTime"]].values
y_train = train["dep_delayed_15min"].map({"Y": 1, "N": 0}).values
X_test = test[["Distance", "DepTime"]].values
X_train_part, X_valid, y_train_part, y_valid = train_test_split(
X_train, y_train, test_size=0.3, random_state=17
)
xgb_model = XGBClassifier(seed=17)
xgb_model.fit(X_train_part, y_train_part)
xgb_valid_pred = xgb_model.predict_proba(X_valid)[:, 1]
roc_auc_score(y_valid, xgb_valid_pred)
xgb_model.fit(X_train, y_train)
xgb_test_pred = xgb_model.predict_proba(X_test)[:, 1]
pd.Series(xgb_test_pred, name="dep_delayed_15min").to_csv(
"xgb_2feat.csv", index_label="id", header=True
)
| 0.487063 | 0.954984 |
## Testing Notebook
```
path = '/opt/spark-data'
from __future__ import print_function
import os
import sys
from pyspark.sql import SparkSession
from pyspark.sql.types import Row, StructField, StructType, StringType, IntegerType, ArrayType
from pyspark.sql.functions import col, split, udf, size, element_at, explode, when, lit
import pyspark.sql.functions as F
spark = SparkSession \
.builder \
.appName("S3_Analysis") \
.master("spark://spark-master:7077") \
.config("spark.executor.cores", "2") \
.config("spark.num.executors", "6") \
.config("spark.executor.memory", "2g") \
.enableHiveSupport() \
.getOrCreate()
## Open the parquet and have a look
s3_stats = spark.read.parquet(os.path.join(path, "s3logs"))
s3_stats.createOrReplaceTempView("s3_stats")
spark.sql("SELECT distinct requesthour from s3_stats").collect()
key_data = spark.sql("SELECT `key` FROM s3_stats")
# check dates
spark.sql("SELECT min(requesttimestamp) FROM s3_stats").collect()
```
We need to derive the tree structure from the key
Scenarios:
- same name different parent
- same name same entity
```
## Create the parent-child pairs we need to build our structure
def zip_pairs(value):
"""
Args:
value (list(str)): split up list of folder path
ie [ db, table, partition, file.parquet ]
Returns:
result (list((str, str)))
"""
lead_list = value.copy()
lead_list.pop()
lead_list.insert(0,None)
result = [item for item in zip(lead_list,value)]
return result
expr = ".%25."
pairZip = udf(zip_pairs, ArrayType(ArrayType(StringType())) )
df2 = key_data.select("key").withColumn("key_split", split(col("key"), "/")) \
.withColumn("depth", size(col("key_split"))) \
.withColumn("file", element_at(col("key_split"), -1) ) \
.withColumn("pairs", pairZip(col("key_split"))) \
.withColumn("_tmp", explode(col("pairs"))) \
.withColumn("parent", col("_tmp")[0]) \
.withColumn("object", col("_tmp")[1]) \
.withColumn("_object_split", split(col("object"), "\.")) \
.withColumn("type", when(size("_object_split")>1, "file")
.when(col("object").rlike(expr), "partition")
.otherwise("folder")) \
.drop("_tmp") \
.drop("_object_split")
```
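As a quick sanity check (plain Python, no Spark needed), here is what `zip_pairs` produces for a hypothetical key split into its path components:
```
sample_key = "mydb/mytable/dt=2021-01-01/part-0000.parquet"  # hypothetical key
print(zip_pairs(sample_key.split("/")))
# [(None, 'mydb'), ('mydb', 'mytable'), ('mytable', 'dt=2021-01-01'), ('dt=2021-01-01', 'part-0000.parquet')]
```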
## Exploring Entities
```
relationship_table = df2.select("parent", "object", "type").distinct()
relationship_table.createOrReplaceTempView("relationship_table")
#relationship_table.write.format("hive").saveAsTable("logging_demo.relationship_table")
relationship_table.count()
spark.sql("select parent, count(*) FILTER( WHERE type =='file' ) as `count(*)` from relationship_table group by parent").head(10)
## Exploring just the files
storage_files = df2.select("key", "parent", "object", "file").filter( df2.object == df2.file )
storage_files.filter(storage_files.parent.contains("=")).show(40,truncate=False)
## Exploring the folders in the tree
folders = df2.select("key", "parent", "object", "file").filter( df2.object != df2.file )
folders.select("parent", "object").show(10, truncate=False)
distinct_folders = folders.select("object").distinct()
entities = spark.sql("SELECT parent as stage from relationship_table \
UNION SELECT object as stage from relationship_table")
```
## End
```
spark.stop()
```
|
github_jupyter
|
path = '/opt/spark-data'
from __future__ import print_function
import os
import sys
from pyspark.sql import SparkSession
from pyspark.sql.types import Row, StructField, StructType, StringType, IntegerType, ArrayType
from pyspark.sql.functions import col, split, udf, size, element_at, explode, when, lit
import pyspark.sql.functions as F
spark = SparkSession \
.builder \
.appName("S3_Analysis") \
.master("spark://spark-master:7077") \
.config("spark.executor.cores", "2") \
.config("spark.num.executors", "6") \
.config("spark.executor.memory", "2g") \
.enableHiveSupport() \
.getOrCreate()
## Open the parquet and have a look
s3_stats = spark.read.parquet(os.path.join(path, "s3logs"))
s3_stats.createOrReplaceTempView("s3_stats")
spark.sql("SELECT distinct requesthour from s3_stats").collect()
key_data = spark.sql("SELECT `key` FROM s3_stats")
# check dates
spark.sql("SELECT min(requesttimestamp) FROM s3_stats").collect()
## Create the parent-child pairs we need to build our structure
def zip_pairs(value):
"""
Args:
value (list(str)): split up list of folder path
ie [ db, table, partition, file.parquet ]
Returns:
result (list((str, str)))
"""
lead_list = value.copy()
lead_list.pop()
lead_list.insert(0,None)
result = [item for item in zip(lead_list,value)]
return result
expr = ".%25."
pairZip = udf(zip_pairs, ArrayType(ArrayType(StringType())) )
df2 = key_data.select("key").withColumn("key_split", split(col("key"), "/")) \
.withColumn("depth", size(col("key_split"))) \
.withColumn("file", element_at(col("key_split"), -1) ) \
.withColumn("pairs", pairZip(col("key_split"))) \
.withColumn("_tmp", explode(col("pairs"))) \
.withColumn("parent", col("_tmp")[0]) \
.withColumn("object", col("_tmp")[1]) \
.withColumn("_object_split", split(col("object"), "\.")) \
.withColumn("type", when(size("_object_split")>1, "file")
.when(col("object").rlike(expr), "partition")
.otherwise("folder")) \
.drop("_tmp") \
.drop("_object_split")
relationship_table = df2.select("parent", "object", "type").distinct()
relationship_table.createOrReplaceTempView("relationship_table")
#relationship_table.write.format("hive").saveAsTable("logging_demo.relationship_table")
relationship_table.count()
spark.sql("select parent, count(*) FILTER( WHERE type =='file' ) as `count(*)` from relationship_table group by parent").head(10)
## Exploring just the files
storage_files = df2.select("key", "parent", "object", "file").filter( df2.object == df2.file )
storage_files.filter(storage_files.parent.contains("=")).show(40,truncate=False)
## Exploring the folders in the tree
folders = df2.select("key", "parent", "object", "file").filter( df2.object != df2.file )
folders.select("parent", "object").show(10, truncate=False)
distinct_folders = folders.select("object").distinct()
entities = spark.sql("SELECT parent as stage from relationship_table \
UNION SELECT object as stage from relationship_table")
spark.stop()
| 0.515376 | 0.76465 |
```
# Based on https://github.com/akeaswaran/cfb_fourth_down but in Python
import pandas as pd
import numpy as np
import xgboost as xgb
def retrieveCfbDataFile(endpoint, year):
return pd.read_csv(f"data/{endpoint}/{year}.csv", encoding='latin-1')
pbp = pd.DataFrame()
line_data = pd.DataFrame()
for x in range(2014, 2021):
print(f"loading year: {x}")
plys = pd.read_parquet(f"https://raw.githubusercontent.com/saiemgilani/cfbfastR-data/master/data/parquet/pbp_players_pos_{x}.parquet")
ln = retrieveCfbDataFile('lines',x)
print(f"loaded year: {x}")
pbp = pbp.append(plys, sort=False)
ln['year'] = x
line_data = line_data.append(ln, sort=False)
print(f"Total Plays: {len(pbp)}")
print(f"Spreads imported: {len(line_data)}")
model_vars = pbp[
(pbp.down.isin([3,4]))
& ((pbp["rush"] == 1) | (pbp["pass"] == 1))
& (pbp.offense_play.notna())
& (pbp.yards_to_goal.notna())
& (pbp.score_diff.notna())
# & (pbp.offense_conference.notna())
# & (pbp.defense_conference.notna())
]
model_vars.head()
# model_line_data = line_data[
# (line_data.lineProvider == "consensus")
# ]
# model_line_data.head()
# line_data.head()
grouped_lines = line_data.groupby("id")
grouped_lines = grouped_lines.apply(lambda x: x[(x.overUnder.notna()) & (x.spread.notna())].head(1))
grouped_lines = grouped_lines.reset_index(drop=True)
# first_values.head()
grouped_lines.head()
merged_vars = pd.merge(model_vars, grouped_lines[["id","spread","overUnder"]], left_on="game_id", right_on="id", how='left')
merged_vars.head()
model_vars.columns.to_list()
merged_vars["first_down_penalty"] = merged_vars["firstD_by_penalty"]
merged_vars.yards_gained = merged_vars.yards_gained.clip(-10, 65)
merged_vars["home_total"] = (merged_vars.spread + merged_vars.overUnder) / 2
merged_vars["away_total"] = (merged_vars.overUnder - merged_vars.spread) / 2
merged_vars["posteam_total"] = np.where(merged_vars.offense_play == merged_vars.home, merged_vars.home_total, merged_vars.away_total)
merged_vars["posteam_spread"] = np.where(merged_vars.offense_play == merged_vars.home, merged_vars.spread, -1 * merged_vars.spread)
merged_vars.head()
filtered_vars = merged_vars[
(((merged_vars["rush"] + merged_vars["pass"]) == 1) | (merged_vars.first_down_penalty == 1))
& (merged_vars.distance > 0)
& (merged_vars.yards_to_goal > 0)
& (merged_vars.distance > merged_vars.yards_to_goal)
& (merged_vars.posteam_total.notna())
& (merged_vars.posteam_spread.notna())
]
filtered_vars["label"] = (filtered_vars.yards_gained.astype(float) + 10).astype(int)
filtered_vars = filtered_vars[["down","distance","yards_to_goal","posteam_total","posteam_spread","label"]]
filtered_vars.head()
nrounds = 157
params = {
"booster": "gbtree",
"objective": "multi:softprob",
"eval_metric": "mlogloss",
"num_class": 76,
"eta": .07,
"gamma": 4.325037e-09,
"subsample": 0.5385424,
"colsample_bytree": 0.6666667,
'max_depth': 4,
"min_child_weight": 7
}
full_train = xgb.DMatrix(filtered_vars[["down","distance","yards_to_goal","posteam_total","posteam_spread"]], label = filtered_vars.label)
fd_model = xgb.train(params, full_train, nrounds)
fd_model
# full_train = xgboost::xgb.DMatrix(model.matrix(~.+0, data = model_vars %>% dplyr::select(-label)), label = as.integer(model_vars$label))
# fd_model <- xgboost::xgboost(params = params, data = full_train, nrounds = nrounds, verbose = 2)
fd_model.save_model("fd_model.model")
import coremltools as cml
cml_model = cml.converters.xgboost.convert(fd_model, force_32bit_float=False, mode="classifier", feature_names=[
"down","distance","yards_to_goal","posteam_total","posteam_spread"
], n_classes=76)
cml_model.author = 'Jason Lee for original R code; Akshay Easwaran for Python and CoreML conversion.'
cml_model.license = 'MIT'
cml_model.short_description = 'Projects number of yards gained on a fourth-down play.'
cml_model
# # Set feature descriptions manually
cml_model.input_description['posteam_spread'] = 'The spread for the game from the current offense\'s perspective. Note that a home favorite will have a negative spread value.'
cml_model.input_description['posteam_total'] = 'The over/under for the game from the current offense\'s perspective.'
cml_model.input_description['yards_to_goal'] = 'The yards left to gain towards the end zone.'
cml_model.input_description['distance'] = 'The number of yards to gain a first down.'
cml_model.input_description['down'] = 'The current down.'
# cml_model.output_description['target'] = "The projected number of yards gained on this fourth-down play."
# # # Save the model
cml_model.save('FourthDownYards.mlmodel')
cml_model
```
<div align="center"><a href="https://colab.research.google.com/github/institutohumai/cursos-python/blob/master/Automatizacion/edicion_imagenes.ipynb"> <img src='https://colab.research.google.com/assets/colab-badge.svg'/> </a> <br> Remember to open it in a new tab</div>
## Image manipulation with Python
Pillow is a module for working with image files. It provides many functions that make editing, cropping, and resizing easy. With the power to manipulate images the same way you would in software such as Microsoft Paint or Adobe Photoshop, Python can automatically edit hundreds or thousands of images with ease.
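As a taste of that kind of automation, here is a minimal sketch of batch processing. The `fotos/` folder, the `*.jpg` pattern, and the `_thumb` suffix are made up for the example; only the Pillow calls themselves come from what this notebook covers.
```
from pathlib import Path
from PIL import Image

# Hypothetical folder: every .jpg inside gets a thumbnail no larger than 128x128.
for path in Path('fotos').glob('*.jpg'):
    with Image.open(path) as im:
        im.thumbnail((128, 128))  # resizes in place, preserving the aspect ratio
        im.save(path.with_name(path.stem + '_thumb.jpg'))
```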
# Color
#### The *ImageColor.getcolor()* function:
Colors are usually encoded as a four-element RGBA tuple: Red, Green, Blue, and Alpha. The first three give the amount of red, green, and blue, while Alpha indicates transparency. Each parameter has a minimum of 0 and a maximum of 255. Pillow provides a function that helps us find the RGBA code of a color from its English name:
```
from PIL import ImageColor
print(ImageColor.getcolor('red', 'RGBA'))
print(ImageColor.getcolor('RED', 'RGBA'))
print(ImageColor.getcolor('Black', 'RGBA'))
print(ImageColor.getcolor('chocolate', 'RGBA'))
print(ImageColor.getcolor('CornflowerBlue', 'RGBA'))
```
# Opening images
To manipulate an image with Pillow, we first need to open it with the *Image.open()* function.
```
!wget https://ihum.ai/static/logos/ISOTIPO.png -O logo.png  # Fetch the Humai logo from the official site :)
from PIL import Image
logo = Image.open('logo.png')
logo
```
All rotations, resizes, crops, drawings, and other tasks are done by calling methods on this Image object.
```
type(logo)
```
# Size
The dimensions of an image are described by a *box tuple* of four coordinates:
* Left: the *x* coordinate of the left edge of the image.
* Top: the *y* coordinate of the top edge of the image.
* Right: the *x* coordinate of one pixel to the right of the image's right edge. This integer must be greater than Left.
* Bottom: the *y* coordinate of one pixel below the image's bottom edge. This integer must be greater than Top.
(Note that the Left and Top integers are included, while Right and Bottom are excluded.) A short sketch of this convention follows below.
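For instance (the box values here are arbitrary, chosen just to show the arithmetic); the same kind of tuple is what *crop()* expects further on:
```
# (left, top, right, bottom): this box covers x in [10, 110) and y in [20, 70)
box = (10, 20, 110, 70)
box_width = box[2] - box[0]   # 110 - 10 = 100 pixels
box_height = box[3] - box[1]  # 70 - 20 = 50 pixels
print(box_width, box_height)
```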
# Manipulation methods
With *size* we can get the size of our image as a two-element tuple:
```
logo.size
```
We split the tuple into width (`ancho`) and height (`alto`):
```
ancho, alto = logo.size
print(ancho)
print(alto)
```
We can also get information such as the image's file name...
```
logo.filename
```
...the format...
```
logo.format
```
...or even the format description.
```
logo.format_description
```
With the *save* method we store a copy, passing as a parameter a string that specifies the desired file name and format:
```
logo.save('muñequito_humai.png')
```
With *crop()* we can cut out a specific rectangle from within the image:
```
recorte = logo.crop((115, 0, 205, 70))
recorte
```
We can also save the crop we made to its own file:
```
recorte.save('recorte.png')
```
The *copy()* method lets us create a new *Image* object identical to the one it was called on. This comes in handy when we want to manipulate an image while keeping an untouched version of it:
```
logo2 = logo.copy()
logo2
```
The *paste()* method lets us paste one image on top of another. In this case we are going to overlay the crop we made onto our copy:
```
logo2.paste(recorte, (0, 0))
logo2
```
Now the Humai logo has two little balls! We can add one more on the other side by changing the parameter that sets the top-left coordinate of the pasted image:
```
logo2.paste(recorte, (220, 0))
logo2
```
Note: neither the *copy()* method nor the *paste()* method uses our computer's clipboard.
Another thing we can do is paste our crop so that it tiles the whole image, using one loop nested inside another, like this:
```
ancho_copia, alto_copia = logo2.size  # Unpack the width and height of both objects.
ancho_recorte, alto_recorte = recorte.size
for left in range(0, ancho_copia, ancho_recorte):  # For each step across the image width, in increments of the crop width...
    for top in range(0, alto_copia, alto_recorte):  # ...and for each step down the image height, in increments of the crop height...
        logo2.paste(recorte, (left, top))  # ...paste the crop.
logo2
```
Let's look at some slightly more complex examples.
```
!wget https://codersera.com/blog/wp-content/uploads/2021/06/Beginners-Guide-To-Python.jpeg -O img.jpeg
from PIL import ImageDraw, ImageFont
# Read the image
py = Image.open('img.jpeg').convert('RGBA')
# shrink it
py.thumbnail((250, 250))
py
def storify(image, color=None, coloroff=1, ratio=16 / 9):
    """Bring an image to a given aspect ratio, padding with the dominant color.
    image := PIL Image
    color := color to use, if you don't want the dominant one,
    coloroff := color offset, to avoid picking the single most frequent color,
    ratio := aspect ratio to bring the image to
    """
    old_size = image.size
    # compute the new dimensions
    max_dimension, min_dimension = max(old_size), min(old_size)
    desired_size = (max_dimension, int(max_dimension * ratio))
    # midpoint coordinates for pasting the original image
    x_mid = int(desired_size[0] / 2 - old_size[0] / 2)
    y_mid = int(desired_size[1] / 2 - old_size[1] / 2)
    position = (x_mid, y_mid)
    # if color is None, look for the most frequent color in the image
    image = image.convert("RGBA")
    if color is None:
        pixels = image.getcolors(image.width * image.height)  # getcolors returns the pixel count of each RGB color
        sorted_pixels = sorted(pixels, key=lambda t: t[0])
        color = sorted_pixels[-coloroff][1][:-1]
    # create a new image of the desired size and paste the original in the center
    blank_image = Image.new("RGB", desired_size, color=color)
    blank_image.paste(image, position, image)
    return blank_image, color, position
img, color, pos = storify(py, ratio=1)
img
img, color, pos = storify(py, color=None, ratio=16/9)
img
!wget -nc https://unket.s3.sa-east-1.amazonaws.com/static/Roboto-Regular.ttf -O font.ttf
def add_text_center(
img,
txt,
y_offset=0,
x_offset=0,
font_size=50,
color="black",
font_ttf="font.ttf",
):
"""Función para pegar un texto en una imagen PIL.
Por default, el texto se pega en el centro, con x e y offset se puede modificar.
"""
draw = ImageDraw.Draw(img)
# leemos la tipografía de un archivo ttf
font = ImageFont.truetype(font_ttf, int(font_size))
# vemos el tamaño resultante del texto para centrarlo
size = font.getsize(txt)
width, height = img.size
text_x, text_y = size
# por default el texto se centra, los offsets nos permiten correrlo del centro
x = (width - text_x) / 2 + x_offset
y = (height - text_y) / 2 + y_offset
draw.text((x, y), txt, font=font, fill=color)
return img
img, color, pos = storify(img, color='white')
img
for i in range(0, 100, 10):
img = add_text_center(img, 'Automatización con Python',
font_size=30 - (i/10),
y_offset=-img.size[1]*0.4 - i,
color=(0+i*5, 0+i*5, 0+i*5))
logo = logo.convert('RGBA')
img = img.convert('RGBA')
logo.thumbnail((100, 100))
img.alpha_composite(logo, (180, 650))
img
```
<a href="https://colab.research.google.com/github/Rishit-dagli/AI-Workshop-for-beginners/blob/master/Week_3_Lab_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Cats vs dogs
```
!wget --no-check-certificate \
https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip \
-O /tmp/cats_and_dogs_filtered.zip
import os
import zipfile
import tensorflow as tf
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.preprocessing.image import ImageDataGenerator
IMG_HEIGHT = IMG_WIDTH = 150
local_zip = '/tmp/cats_and_dogs_filtered.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp')
zip_ref.close()
base_dir = '/tmp/cats_and_dogs_filtered'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
# Directory with our training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
# Directory with our training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
# Directory with our validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
# Directory with our validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), padding='same', activation='relu',input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Conv2D(64, (3,3), padding='same', activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), padding='same', activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), padding='same', activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
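# The convolution/pooling stack above downsamples the 150x150x3 input into features for the
# dense head; the final Dense(1, activation='sigmoid') emits a single probability, which is
# why the model is compiled with binary_crossentropy below.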
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(lr=1e-4),
metrics=['acc'])
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1./255)
# Flow training images in batches of 20 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
train_dir, # This is the source directory for training images
target_size=(150, 150), # All images will be resized to 150x150
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
# Flow validation images in batches of 20 using test_datagen generator
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
history = model.fit_generator(
train_generator,
steps_per_epoch=100, # 2000 images = batch_size * steps
epochs=100,
validation_data=validation_generator,
validation_steps=50, # 1000 images = batch_size * steps
verbose=1)
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training Loss')
plt.plot(epochs, val_loss, 'b', label='Validation Loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
import numpy as np
from google.colab import files
from keras.preprocessing import image
uploaded = files.upload()
for fn in uploaded.keys():
  path = '/content/' + fn
  img = image.load_img(path, target_size=(150, 150))
  x = image.img_to_array(img)
  x = np.expand_dims(x, axis=0)
  images = np.vstack([x])
  classes = model.predict(images, batch_size=10)
  print(classes[0])
  # flow_from_directory indexes classes alphabetically (cats=0, dogs=1),
  # so a sigmoid output above 0.5 means "dog"
  if classes[0] > 0.5:
    print(fn + " is a dog")
  else:
    print(fn + " is a cat")
```
<a href="https://colab.research.google.com/github/ash12hub/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling/blob/master/module3-make-explanatory-visualizations/Ashwin_Raghav_Swamy_LS_DS_123_Make_Explanatory_Visualizations.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
_Lambda School Data Science_
# Make Explanatory Visualizations
### Objectives
- identify misleading visualizations and how to fix them
- use Seaborn to visualize distributions and relationships with continuous and discrete variables
- add emphasis and annotations to transform visualizations from exploratory to explanatory
- remove clutter from visualizations
### Links
- [How to Spot Visualization Lies](https://flowingdata.com/2017/02/09/how-to-spot-visualization-lies/)
- [Visual Vocabulary - Vega Edition](http://ft.com/vocabulary)
- [Choosing a Python Visualization Tool flowchart](http://pbpython.com/python-vis-flowchart.html)
- [Seaborn example gallery](http://seaborn.pydata.org/examples/index.html) & [tutorial](http://seaborn.pydata.org/tutorial.html)
- [Strong Titles Are The Biggest Bang for Your Buck](http://stephanieevergreen.com/strong-titles/)
- [Remove to improve (the data-ink ratio)](https://www.darkhorseanalytics.com/blog/data-looks-better-naked)
- [How to Generate FiveThirtyEight Graphs in Python](https://www.dataquest.io/blog/making-538-plots/)
# Avoid Misleading Visualizations
Did you find/discuss any interesting misleading visualizations in your Walkie Talkie?
## What makes a visualization misleading?
[5 Ways Writers Use Misleading Graphs To Manipulate You](https://venngage.com/blog/misleading-graphs/)
## Two y-axes
<img src="https://kieranhealy.org/files/misc/two-y-by-four-sm.jpg" width="800">
Other Examples:
- [Spurious Correlations](https://tylervigen.com/spurious-correlations)
- <https://blog.datawrapper.de/dualaxis/>
- <https://kieranhealy.org/blog/archives/2016/01/16/two-y-axes/>
- <http://www.storytellingwithdata.com/blog/2016/2/1/be-gone-dual-y-axis>
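If two series really must share a chart, a common alternative to a second y-axis is to rebase both to a common index and plot them on one axis. Below is a minimal sketch with made-up numbers (the `revenue`/`users` values are invented for illustration):
```
import pandas as pd
import matplotlib.pyplot as plt

# Made-up series on very different scales
df_idx = pd.DataFrame({'revenue': [100, 120, 150, 160],
                       'users': [10_000, 11_000, 14_500, 15_000]},
                      index=[2016, 2017, 2018, 2019])

# Rebase each series to 100 at the first period so they share one y-axis
(df_idx / df_idx.iloc[0] * 100).plot(title='Indexed to 100 at 2016')
plt.ylabel('Index (first year = 100)');
```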
## Y-axis doesn't start at zero.
<img src="https://i.pinimg.com/originals/22/53/a9/2253a944f54bb61f1983bc076ff33cdd.jpg" width="600">
## Pie Charts are bad
<img src="https://i1.wp.com/flowingdata.com/wp-content/uploads/2009/11/Fox-News-pie-chart.png?fit=620%2C465&ssl=1" width="600">
## Pie charts that omit data are extra bad
- A guy makes a misleading chart that goes viral
What does this chart imply at first glance? You don't want your user to have to do a lot of work in order to interpret your graph correctly. You want the first-glance conclusions to be the correct ones.
<img src="https://pbs.twimg.com/media/DiaiTLHWsAYAEEX?format=jpg&name=medium" width='600'>
<https://twitter.com/michaelbatnick/status/1019680856837849090?lang=en>
- It gets picked up by overworked journalists (assuming incompetency before malice)
<https://www.marketwatch.com/story/this-1-chart-puts-mega-techs-trillions-of-market-value-into-eye-popping-perspective-2018-07-18>
- Even after the chart's implications have been refuted, it's hard to stop a bad (although compelling) visualization from being passed around.
<https://www.linkedin.com/pulse/good-bad-pie-charts-karthik-shashidhar/>
**["yea I understand a pie chart was probably not the best choice to present this data."](https://twitter.com/michaelbatnick/status/1037036440494985216)**
## Pie Charts that compare unrelated things are next-level extra bad
<img src="http://www.painting-with-numbers.com/download/document/186/170403+Legalizing+Marijuana+Graph.jpg" width="600">
## Be careful about how you use volume to represent quantities:
radius vs diameter vs volume
<img src="https://static1.squarespace.com/static/5bfc8dbab40b9d7dd9054f41/t/5c32d86e0ebbe80a25873249/1546836082961/5474039-25383714-thumbnail.jpg?format=1500w" width="600">
## Don't cherrypick timelines or specific subsets of your data:
<img src="https://wattsupwiththat.com/wp-content/uploads/2019/02/Figure-1-1.png" width="600">
Look how specifically the writer has selected what years to show in the legend on the right side.
<https://wattsupwiththat.com/2019/02/24/strong-arctic-sea-ice-growth-this-year/>
Try the tool that was used to make the graphic for yourself
<http://nsidc.org/arcticseaicenews/charctic-interactive-sea-ice-graph/>
## Use Relative units rather than Absolute Units
<img src="https://imgs.xkcd.com/comics/heatmap_2x.png" width="600">
## Avoid 3D graphs unless having the extra dimension is effective
Usually you can split 3D graphs into multiple 2D graphs.
3D graphs that are interactive can be very cool. (See Plotly and Bokeh)
<img src="https://thumbor.forbes.com/thumbor/1280x868/https%3A%2F%2Fblogs-images.forbes.com%2Fthumbnails%2Fblog_1855%2Fpt_1855_811_o.jpg%3Ft%3D1339592470" width="600">
## Don't go against typical conventions
<img src="http://www.callingbullshit.org/twittercards/tools_misleading_axes.png" width="600">
# Tips for choosing an appropriate visualization:
## Use Appropriate "Visual Vocabulary"
[Visual Vocabulary - Vega Edition](http://ft.com/vocabulary)
## What are the properties of your data?
- Is your primary variable of interest continuous or discrete?
- Is it in wide or long (tidy) format?
- Does your visualization involve multiple variables?
- How many dimensions do you need to include on your plot?
Can you express the main idea of your visualization in a single sentence?
How hard does your visualization make the user work in order to draw the intended conclusion?
## Which Visualization tool is most appropriate?
[Choosing a Python Visualization Tool flowchart](http://pbpython.com/python-vis-flowchart.html)
# Making Explanatory Visualizations with Seaborn
Today we will reproduce this [example by FiveThirtyEight:](https://fivethirtyeight.com/features/al-gores-new-movie-exposes-the-big-flaw-in-online-movie-ratings/)
```
from IPython.display import display, Image
url = 'https://fivethirtyeight.com/wp-content/uploads/2017/09/mehtahickey-inconvenient-0830-1.png'
example = Image(url=url, width=400)
display(example)
```
Using this data: https://github.com/fivethirtyeight/data/tree/master/inconvenient-sequel
Links
- [Strong Titles Are The Biggest Bang for Your Buck](http://stephanieevergreen.com/strong-titles/)
- [Remove to improve (the data-ink ratio)](https://www.darkhorseanalytics.com/blog/data-looks-better-naked)
- [How to Generate FiveThirtyEight Graphs in Python](https://www.dataquest.io/blog/making-538-plots/)
## Make prototypes
This helps us understand the problem
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
plt.style.use('fivethirtyeight')
fake = pd.Series([38, 3, 2, 1, 2, 4, 6, 5, 5, 33],
index=range(1,11))
fake.plot.bar(color='C1', width=0.9);
fake2 = pd.Series(
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
2, 2, 2,
3, 3, 3,
4, 4,
5, 5, 5,
6, 6, 6, 6,
7, 7, 7, 7, 7,
8, 8, 8, 8,
9, 9, 9, 9,
10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10])
fake2.value_counts().sort_index().plot.bar(color='C1', width=0.9);
display(example)
```
## Annotate with text
```
counts = [38, 3, 2, 1, 2, 4, 6, 5, 5, 33]
data_list = []
for i, c in enumerate(counts, 1):
data_list = data_list + [i]*c
fake2 = pd.Series(data_list)
plt.style.use('fivethirtyeight')
fake2.value_counts().sort_index().plot.bar(color='#ed713a', width=0.9, rot=0);
plt.text(x=-1,
y=50,
fontsize=16,
fontweight='bold',
s="'An Inconvenient Sequel: Truth To Power' is divisive")
plt.text(x=-1,
y=45,
fontsize=16,
s="IMDb ratings for the film as of Aug. 29")
plt.xlabel('Rating')
plt.ylabel('Percent of Total Votes')
plt.yticks(range(0, 50, 10));
```
## Reproduce with real data
```
df = pd.read_csv('https://raw.githubusercontent.com/fivethirtyeight/data/master/inconvenient-sequel/ratings.csv')
df.shape
df.head()
df.sample(1).T
df['timestamp'] = pd.to_datetime(df['timestamp'])
df.head()
df = df.set_index('timestamp')
df.head()
df['category'].value_counts()
df_imdb = df[df['category'] == 'IMDb users']
df_imdb.shape
lastday = df['2017-08-29']
lastday[lastday['category'] == 'IMDb users']['respondents'].plot()
# final = df.tail(1)
df = df.sort_index()
df_imdb = df[df['category'] == 'IMDb users']
final = df_imdb.tail(1)
final
columns = ['%s_pct' % i for i in range(1, 11)]
#columns = ['{}_pct'.format(i) for i in range(1, 11)]
#columns = [f'{i}_pct' for i in range(1, 11)] # python3 only
final[columns]
data = final[columns].T
data.index = range(1, 11)
plt.style.use('fivethirtyeight')
data.plot.bar(color='#ed713a', width=0.9, rot=0, legend=False);
plt.text(x=-2,
y=50,
fontsize=16,
fontweight='bold',
s="'An Inconvenient Sequel: Truth To Power' is divisive")
plt.text(x=-2,
y=45,
fontsize=16,
s="IMDb ratings for the film as of Aug. 29")
plt.xlabel('Rating')
plt.ylabel('Percent of Total Votes')
plt.yticks(range(0, 50, 10));
display(example)
```
# ASSIGNMENT
Replicate the lesson code. I recommend that you [do not copy-paste](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit).
# STRETCH OPTIONS
#### Reproduce another example from [FiveThirtyEight's shared data repository](https://data.fivethirtyeight.com/).
For example:
- [thanksgiving-2015](https://fivethirtyeight.com/features/heres-what-your-part-of-america-eats-on-thanksgiving/) (try the [`altair`](https://altair-viz.github.io/gallery/index.html#maps) library)
- [candy-power-ranking](https://fivethirtyeight.com/features/the-ultimate-halloween-candy-power-ranking/) (try the [`statsmodels`](https://www.statsmodels.org/stable/index.html) library)
- or another example of your choice!
#### Make more charts!
Choose a chart you want to make, from [Visual Vocabulary - Vega Edition](http://ft.com/vocabulary).
Find the chart in an example gallery of a Python data visualization library:
- [Seaborn](http://seaborn.pydata.org/examples/index.html)
- [Altair](https://altair-viz.github.io/gallery/index.html)
- [Matplotlib](https://matplotlib.org/gallery.html)
- [Pandas](https://pandas.pydata.org/pandas-docs/stable/visualization.html)
Reproduce the chart. [Optionally, try the "Ben Franklin Method."](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) If you want, experiment and make changes.
Take notes. Consider sharing your work with your cohort!
```
plt.style.use('fivethirtyeight')
fake=pd.Series([38, 3, 2, 1, 2, 4, 6, 5, 5, 33],
index=range(1,11))
fake.plot.bar(color='C1', width=0.8)
fake2 = pd.Series(
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
2, 2, 2,
3, 3, 3,
4, 4,
5, 5, 5,
6, 6, 6, 6,
7, 7, 7, 7, 7,
8, 8, 8, 8,
9, 9, 9, 9,
10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10])
fake2.value_counts().sort_index().plot.bar(color='C1', width=0.9);
df2 = pd.read_csv('https://raw.githubusercontent.com/fivethirtyeight/data/master/inconvenient-sequel/ratings.csv')
df2.dtypes
df2['timestamp'] = pd.to_datetime(df2['timestamp'])
df2.dtypes
df2 = df2.set_index('timestamp')
df2_imdb_staff = df2[df2['category'] == "IMDb staff"]
df2_imdb_staff
final2 = df2_imdb_staff.tail(1)
final2
columns = [f'{i}_pct' for i in range(1, 11)]
final2[columns]
lastday2 = df2['2017-08-29']
lastday2[lastday2['category'] == "IMDb staff"]['respondents'].plot()
data2 = final2[columns].T
data_index2 = range(1, 11)
plt.style.use('fivethirtyeight')
data2.plot.bar(width=0.7, legend=False)
plt.xlabel("Day")
plt.ylabel("IMDb staff")
```