<h1><center>Module 6 Challenge</center></h1>
<h2><center>Part 2: Have Customers Narrow Their Travel Searches Based on Temperature and Precipitation</center></h2>
```
# Import the dependencies.
import pandas as pd
import gmaps
import requests
# Import the API key.
from config import g_key
# Store the CSV you created in Part 1 in a DataFrame.
city_data_df = pd.read_csv('challenge_data/WeatherPy_database.csv')
city_data_df.head()
city_data_df.dtypes
# Configure gmaps
gmaps.configure(api_key=g_key)
# Ask the customer to add a minimum and maximum temperature value.
min_temp = float(input('What is the minimum temperature you would like for your trip?'))
max_temp = float(input('What is the maximum temperature you would like for your trip?'))
# Ask the customer if they would like it to be raining.
rain_amount = input('Do you want it to be raining? (yes/no)')
# Ask the customer if they would like it to be snowing.
snow_amount = input('Do you want it to be snowing? (yes/no)')
if rain_amount == 'no' and snow_amount == 'no':
filtered_cities_df = city_data_df.loc[(city_data_df['Max Temp'] <= max_temp) &
(city_data_df['Max Temp'] >= min_temp) &
(city_data_df['Rain (inches)'] == 0) &
(city_data_df['Snow (inches)'] == 0)]
elif rain_amount == 'no' and snow_amount == 'yes':
filtered_cities_df = city_data_df.loc[(city_data_df['Max Temp'] <= max_temp) &
(city_data_df['Max Temp'] >= min_temp) &
(city_data_df['Rain (inches)'] == 0) &
(city_data_df['Snow (inches)'] > 0)]
elif rain_amount == 'yes' and snow_amount == 'no':
filtered_cities_df = city_data_df.loc[(city_data_df['Max Temp'] <= max_temp) &
(city_data_df['Max Temp'] >= min_temp) &
(city_data_df['Rain (inches)'] > 0.0) &
(city_data_df['Snow (inches)'] == 0)]
else:
filtered_cities_df = city_data_df.loc[(city_data_df['Max Temp'] <= max_temp) &
(city_data_df['Max Temp'] >= min_temp) &
(city_data_df['Rain (inches)'] > 0.0) &
(city_data_df['Snow (inches)'] > 0)]
filtered_cities_df.head()
filtered_cities_df.count()
filtered_cities_df = filtered_cities_df.dropna()
filtered_cities_df
filtered_cities_df.count()
hotel_df = filtered_cities_df[['City', 'Country', 'Max Temp', 'Lat', 'Lng', 'Current Description']].copy()
hotel_df['Hotel Name'] = ''
hotel_df.head()
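# Parameters for the Google Places Nearby Search request (radius is in meters).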
params = {
'radius': 5000,
'type': 'lodging',
'key': g_key
}
# Iterate through.
for index, row in hotel_df.iterrows():
lat = row['Lat']
lng = row['Lng']
params['location'] = f'{lat},{lng}'
base_url = 'https://maps.googleapis.com/maps/api/place/nearbysearch/json'
hotels = requests.get(base_url, params=params).json()
try:
hotel_df.loc[index, 'Hotel Name'] = hotels['results'][0]['name']
except IndexError:
print('Hotel not found... skipping...')
hotel_df.head()
output_data_file = './challenge_data/WeatherPy_Vacation.csv'
hotel_df.to_csv(output_data_file, index_label='City_ID')
vacation_df = pd.read_csv('./challenge_data/WeatherPy_Vacation.csv')
vacation_df.head()
# Using the template, add the hotel markers to the map.
info_box_template = '''
<dl>
<dt>Hotel Name</dt><dd>{Hotel Name}</dd>
<dt>City</dt><dd>{City}</dd>
<dt>Country</dt><dd>{Country}</dd>
<dt>Current Description</dt><dd>{Current Description} at {Max Temp}</dd>
</dl>
'''
hotel_info = [info_box_template.format(**row) for index, row in vacation_df.iterrows()]
locations = vacation_df[['Lat', 'Lng']]
marker_layer = gmaps.marker_layer(locations, info_box_content=hotel_info)
fig = gmaps.figure()
fig.add_layer(marker_layer)
fig
```
---
<a href="https://colab.research.google.com/github/z0li627/DS-Unit-2-Linear-Models/blob/master/Zoltan_Gaspar_assignment_regression_classification_3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Lambda School Data Science
*Unit 2, Sprint 1, Module 3*
---
# Ridge Regression
## Assignment
We're going back to our other **New York City** real estate dataset. Instead of predicting apartment rents, you'll predict property sales prices.
But not just for condos in Tribeca...
- [ ] Use a subset of the data where `BUILDING_CLASS_CATEGORY` == `'01 ONE FAMILY DWELLINGS'` and the sale price was more than 100 thousand and less than 2 million.
- [ ] Do train/test split. Use data from January through March 2019 to train. Use data from April 2019 to test.
- [ ] Do one-hot encoding of categorical features.
- [ ] Do feature selection with `SelectKBest`.
- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html). Use the scaler's `fit_transform` method with the train set. Use the scaler's `transform` method with the test set.
- [ ] Fit a ridge regression model with multiple features.
- [ ] Get mean absolute error for the test set.
- [ ] As always, commit your notebook to your fork of the GitHub repo.
The [NYC Department of Finance](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page) has a glossary of property sales terms and NYC Building Class Code Descriptions. The data comes from the [NYC OpenData](https://data.cityofnewyork.us/browse?q=NYC%20calendar%20sales) portal.
## Stretch Goals
Don't worry, you aren't expected to do all these stretch goals! These are just ideas to consider and choose from.
- [ ] Add your own stretch goal(s) !
- [ ] Instead of `Ridge`, try `LinearRegression`. Depending on how many features you select, your errors will probably blow up! 💥
- [ ] Instead of `Ridge`, try [`RidgeCV`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeCV.html). (A minimal sketch follows this list.)
- [ ] Learn more about feature selection:
- ["Permutation importance"](https://www.kaggle.com/dansbecker/permutation-importance)
- [scikit-learn's User Guide for Feature Selection](https://scikit-learn.org/stable/modules/feature_selection.html)
- [mlxtend](http://rasbt.github.io/mlxtend/) library
- scikit-learn-contrib libraries: [boruta_py](https://github.com/scikit-learn-contrib/boruta_py) & [stability-selection](https://github.com/scikit-learn-contrib/stability-selection)
- [_Feature Engineering and Selection_](http://www.feat.engineering/) by Kuhn & Johnson.
- [ ] Try [statsmodels](https://www.statsmodels.org/stable/index.html) if you’re interested in more inferential statistical approach to linear regression and feature selection, looking at p values and 95% confidence intervals for the coefficients.
- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way.
- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).
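If you attempt the `RidgeCV` stretch goal, here is a minimal sketch of how it could slot in. It assumes the scaled matrices `X_train_scaled` / `X_test_scaled` and the targets `y_train` / `y_test` that are built further down in this notebook, and the alpha grid is purely illustrative.
```
# Minimal RidgeCV sketch (assumes X_train_scaled, X_test_scaled, y_train, y_test
# from the cells below; the alpha grid here is illustrative only).
from sklearn.linear_model import RidgeCV
from sklearn.metrics import mean_absolute_error

ridge_cv = RidgeCV(alphas=[10**1, 10**2, 10**3, 10**4, 10**5])
ridge_cv.fit(X_train_scaled, y_train)
print('Chosen alpha:', ridge_cv.alpha_)
print('Test MAE:', mean_absolute_error(y_test, ridge_cv.predict(X_test_scaled)))
```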
```
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
import pandas as pd
import pandas_profiling
# Read New York City property sales data
df = pd.read_csv(DATA_PATH+'condos/NYC_Citywide_Rolling_Calendar_Sales.csv')
# Change column names: replace spaces with underscores
df.columns = [col.replace(' ', '_') for col in df]
# SALE_PRICE was read as strings.
# Remove symbols, convert to integer
df['SALE_PRICE'] = (
    df['SALE_PRICE']
    .str.replace('$', '', regex=False)
    .str.replace('-', '', regex=False)
    .str.replace(',', '', regex=False)
    .astype(int)
)
# BOROUGH is a numeric column, but arguably should be a categorical feature,
# so convert it from a number to a string
df['BOROUGH'] = df['BOROUGH'].astype(str)
# Reduce cardinality for NEIGHBORHOOD feature
# Get a list of the top 10 neighborhoods
top10 = df['NEIGHBORHOOD'].value_counts()[:10].index
# At locations where the neighborhood is NOT in the top 10,
# replace the neighborhood with 'OTHER'
df.loc[~df['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
df.T
df2 = (df[df['BUILDING_CLASS_CATEGORY'] == '01 ONE FAMILY DWELLINGS'])
df2.T
df3 = (df2[(df2['SALE_PRICE'] > 100000) & (df2['SALE_PRICE'] < 2000000 )])
df3.T
df3['SALE_DATE'].dtypes
df3['SALE_DATE'] = pd.to_datetime(df3['SALE_DATE'], infer_datetime_format=True)
df3['SALE_DATE'].dtypes
df3.T
df3 = df3.drop(columns='EASE-MENT')
df3 = df3.drop(columns='APARTMENT_NUMBER')
df3 = df3.drop(columns='TAX_CLASS_AT_TIME_OF_SALE')
cutoff = pd.to_datetime('2019-04-01')
train = df3[df3.SALE_DATE < cutoff ]
test = df3[df3.SALE_DATE >= cutoff]
train.shape, test.shape
train.isnull().sum()
train.select_dtypes(include='number').describe().T
train.select_dtypes(exclude='number').describe().T.sort_values(by='unique')
train.groupby('NEIGHBORHOOD')['SALE_PRICE'].describe()
train.groupby('NEIGHBORHOOD')['SALE_PRICE'].mean()
target = 'SALE_PRICE'
high_cardi = ['BUILDING_CLASS_AT_TIME_OF_SALE', 'BUILDING_CLASS_AT_PRESENT', 'SALE_DATE', 'LAND_SQUARE_FEET', 'ADDRESS', 'BUILDING_CLASS_CATEGORY', 'TAX_CLASS_AT_PRESENT', 'BOROUGH']
features = train.columns.drop([target] + high_cardi)
X_train = train[features]
y_train = train[target]
X_test = test[features]
y_test = test[target]
X_train.head().T
import category_encoders as ce
encoder = ce.OneHotEncoder(use_cat_names=True, return_df=True)
X_train = encoder.fit_transform(X_train)
X_test = encoder.transform(X_test)
X_train.head().T
from sklearn.feature_selection import f_regression, SelectKBest
selector = SelectKBest(score_func=f_regression, k=15)
X_train_selected = selector.fit_transform(X_train, y_train)
X_test_selected = selector.transform(X_test)
X_train_selected, X_test_selected
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
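# Sweep over every possible number of selected features and report the test MAE for each k.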
for k in range(1, len(X_train.columns)+1):
print(f'{k} features')
selector = SelectKBest(score_func=f_regression, k=k)
X_train_selected = selector.fit_transform(X_train, y_train)
X_test_selected = selector.transform(X_test)
model = LinearRegression()
model.fit(X_train_selected, y_train)
y_pred = model.predict(X_test_selected)
mae = mean_absolute_error(y_test, y_pred)
print(f'Test Mean Absolute Error: ${mae:,.0f} \n')
%matplotlib inline
from IPython.display import display, HTML
from ipywidgets import interact
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler
for zeta in [10**1, 10**2, 10**3]:
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
display(HTML(f'Ridge Regression, with alpha={zeta}'))
model = Ridge(alpha=zeta)
model.fit(X_train_scaled, y_train)
y_pred = model.predict(X_train_scaled)
trainmae = mean_absolute_error(y_train, y_pred)
display(HTML(f'Train Mean Absolute Error: ${trainmae:,.0f}'))
y_pred = model.predict(X_test_scaled)
testmae = mean_absolute_error(y_test, y_pred)
display(HTML(f'Test Mean Absolute Error: ${testmae:,.0f}'))
coefficients = pd.Series(model.coef_, X_train.columns)
plt.figure(figsize=(16,8))
coefficients.sort_values().plot.barh(color='blue')
plt.xlim(-200000,200000)
plt.show()
for gamma in [10**4, 10**5]:
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
display(HTML(f'Ridge Regression, with alpha={gamma}'))
model = Ridge(alpha=gamma)
model.fit(X_train_scaled, y_train)
y_pred = model.predict(X_train_scaled)
trainmae = mean_absolute_error(y_train, y_pred)
display(HTML(f'Train Mean Absolute Error: ${trainmae:,.0f}'))
y_pred = model.predict(X_test_scaled)
testmae = mean_absolute_error(y_test, y_pred)
display(HTML(f'Test Mean Absolute Error: ${testmae:,.0f}'))
coefficients = pd.Series(model.coef_, X_train.columns)
plt.figure(figsize=(16,8))
coefficients.sort_values().plot.barh(color='red')
plt.xlim(-10000,30000)
plt.show()
for theta in [10**6, 10**7]:
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
display(HTML(f'Ridge Regression, with alpha={theta}'))
model = Ridge(alpha=theta)
model.fit(X_train_scaled, y_train)
y_pred = model.predict(X_train_scaled)
trainmae = mean_absolute_error(y_train, y_pred)
display(HTML(f'Train Mean Absolute Error: ${trainmae:,.0f}'))
y_pred = model.predict(X_test_scaled)
testmae = mean_absolute_error(y_test, y_pred)
display(HTML(f'Test Mean Absolute Error: ${testmae:,.0f}'))
coefficients = pd.Series(model.coef_, X_train.columns)
plt.figure(figsize=(16,8))
coefficients.sort_values().plot.barh(color='green')
plt.xlim(-200,400)
plt.show()
```
---
```
import MDAnalysis
from clustercode import ClusterEnsemble
import matplotlib
import matplotlib.pyplot as plt
'''
This is a small atomistic trajectory containing a single large micelle with an aggregation number of 100.
The micelle is composed of SDS, which has an HSO4 headgroup and a carbon tail.
'''
xtc = "files/traj_small.xtc"
tpr = "files/topol_small.tpr"
# First we are going to look at the trajectory with some basic MDAnalysis commands
universe = MDAnalysis.Universe(tpr, xtc)
atom_names = set(universe.atoms.names)
residue_names = set(universe.residues.resnames)
atom_number = universe.atoms.n_atoms
residue_number = universe.residues.n_residues
print('Residue names: {:s}'.format(', '.join(residue_names)))
print('Number of residues: {:d}'.format(residue_number))
print('Atom names: {:s}'.format(', '.join(atom_names)))
print('Number of atoms: {:d}'.format(atom_number))
print('Length of trajectory: {:d}'.format(universe.trajectory.n_frames))
'''
We want two molecules to be considered in a cluster when their carbon tails are close enough together.
Therefore the 'cluster_objects' will be C1 - C12
We initialise the ClusterEnsemble object as follows:
'''
cluster_objects = ['C{:d}'.format(i) for i in range(1,13)]
ClstrEns = ClusterEnsemble(tpr, xtc, cluster_objects)
'''
Run the analysis for each frame within the specified times. cut_off is in Angstrom and describes how close two
objects need to be to be considered part of the same cluster. How that distance is measured depends on the measure
parameter, which is either centre of geometry (COG), centre of mass (COM) or bead to bead (b2b).
'''
ClstrEns.cluster_analysis(cut_off=7.5, times=(60e3, 70e3), measure="COM", algorithm="dynamic", work_in="Residue", style="atom")
# This trajectory is larger and describes the initial phase of micellisation
xtc = "files/traj_large.xtc"
tpr = "files/topol_large.tpr"
ClstrEns = ClusterEnsemble(tpr, xtc, cluster_objects)
ClstrEns.cluster_analysis(cut_off=3.5, times=(0, 10000), measure="COM", algorithm="dynamic", work_in="Residue", style="atom")
"""
These functions will be added to the ClusterEnsemble object soon; for now they are
defined here for direct use.
"""
def get_cluster_size(cluster_list, frame):
'''
    This function calculates the sizes of the clusters in a single frame
'''
return [len(cluster) for cluster in cluster_list[frame]]
def get_cluster_sizes(cluster_list, first=None, last=None, stride=1):
'''
This function calculates the sizes of clusters in multiple frames.
    first, last and stride are measured in frames.
'''
if first is None: first = 0
if last is None: last = len(cluster_list)
cluster_sizes = []
frames = (first, last, stride)
for frame in range(first, last, stride):
cluster_sizes.append(get_cluster_size(cluster_list, frame))
return cluster_sizes
cluster_list = ClstrEns.cluster_list
# Get the size distribution of clusters in a single frame (frame 50 here):
frame_50_clusters = ClstrEns.cluster_sizes[50]
def print_clusters(frames, i):
    print('Cluster sizes in frame {:d}: {:s}'.format(i, ', '.join([str(item) for item in frames])))
print_clusters(frame_50_clusters, 50)
# Get the size distribution of several frames (from 50 to 54 here).
# The return values of get_cluster_size and get_cluster_sizes are
# a list and a list of lists, respectively.
frames_clusters = ClstrEns.cluster_sizes[50:55]
for frame, i in zip(frames_clusters, range(50, 55)):
print_clusters(frame, i)
fig, ax = plt.subplots()
ClstrEns.plot_histogram(ax, frames=[(50, 70, 1), (180, 200, 1)], density=True, maxbins=False)
'''
cluster_list holds, for every frame, a list of the molecules in each cluster.
We can therefore calculate all kinds of things with it.
'''
ClstrEns.universe.trajectory.rewind()
n_frame = 50
n_cluster = 1
for i, this_cluster in enumerate(ClstrEns.cluster_list):
print(i)
if i == n_frame:
special_cluster = this_cluster[n_cluster]
COM = special_cluster.center_of_mass()
print('Centre of mass: {:.2f}, {:.2f}, {:.2f}'.format(*COM))
# A cluster is a Residuegroup, we can use all the methods defined for
# Residuegroups
print('Type of special_cluster: {:s}'.format(type(special_cluster).__name__))
COM = special_cluster.center_of_mass()
print('Centre of mass: {:.2f}, {:.2f}, {:.2f}'.format(*COM))
print('Centre of geomety: {:.2f}, {:.2f}, {:.2f}'.format(*special_cluster.center_of_geometry()))
print('Radius of gyration: {:.3f}'.format(special_cluster.radius_of_gyration()))
print('Ids of residues(molecules) in this cluster: ')
print(special_cluster.resids)
print('Type of residues(molecules) in this cluster: ')
print(special_cluster.resnames)
'''
cluster_list holds, for every frame, a list of the molecules in each cluster.
We can therefore calculate all kinds of things with it.
'''
ClstrEns.universe.trajectory.rewind()
n_frame = 50
n_cluster = 1
for i, this_cluster in enumerate(ClstrEns.cluster_list):
if i%10. < 1.0e-6: print(i)
if i == n_frame:
special_cluster = this_cluster[n_cluster]
COM = special_cluster.center_of_mass()
print('Centre of mass: {:.2f}, {:.2f}, {:.2f}'.format(*COM))
ClstrEns.unwrap_cluster(special_cluster, verbosity=1)
COM = special_cluster.center_of_mass()
print('Centre of mass: {:.2f}, {:.2f}, {:.2f}'.format(*COM))
# A cluster is a Residuegroup, we can use all the methods defined for
# Residuegroups
print('Type of special_cluster: {:s}'.format(type(special_cluster).__name__))
COM = special_cluster.center_of_mass()
print('Centre of mass: {:.2f}, {:.2f}, {:.2f}'.format(*COM))
print('Centre of geomety: {:.2f}, {:.2f}, {:.2f}'.format(*special_cluster.center_of_geometry()))
print('Radius of gyration: {:.3f}'.format(special_cluster.radius_of_gyration()))
print('Ids of residues(molecules) in this cluster: ')
print(special_cluster.resids)
print('Type of residues(molecules) in this cluster: ')
print(special_cluster.resnames)
```
---
# Name
Data preparation by executing an Apache Beam job in Cloud Dataflow
# Labels
GCP, Cloud Dataflow, Apache Beam, Python, Kubeflow
# Summary
A Kubeflow Pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with Cloud Dataflow Runner.
# Details
## Intended use
Use this component to submit Python Beam code as a Cloud Dataflow job from a step of a Kubeflow pipeline.
## Runtime arguments
Name | Description | Optional | Data type | Accepted values | Default |
:--- | :----------| :----------| :----------| :----------| :---------- |
python_file_path | The path to the Cloud Storage bucket or local directory containing the Python file to be run. | | GCSPath | | |
project_id | The ID of the Google Cloud Platform (GCP) project containing the Cloud Dataflow job.| | GCPProjectID | | |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information. This is done so that you can resume the job in case of failure. `staging_dir` is passed as the command line arguments (`staging_location` and `temp_location`) of the Beam code. | Yes | GCSPath | | None |
requirements_file_path | The path to the Cloud Storage bucket or local directory containing the pip requirements file. | Yes | GCSPath | | None |
args | The list of arguments to pass to the Python file. | No | List | A list of string arguments | None |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |
## Input data schema
Before you use the component, the following files must be ready in a Cloud Storage bucket:
- A Beam Python code file.
- A `requirements.txt` file which includes a list of dependent packages.
The Beam Python code should follow the [Beam programming guide](https://beam.apache.org/documentation/programming-guide/) as well as the following additional requirements to be compatible with this component:
- It accepts the command line arguments `--project`, `--temp_location`, `--staging_location`, which are [standard Dataflow Runner options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-pipeline-options).
- It enables `INFO`-level logging before the start of the Cloud Dataflow job. This is important because it allows the component to track the status and ID of the job that is created. For example, call `logging.getLogger().setLevel(logging.INFO)` before any other code (see the sketch below).
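The following is a minimal, illustrative sketch of a Beam entry point that meets both requirements. It is not the `wc.py` sample used later in this document; the `--output` flag, the transform labels, and the toy pipeline logic are placeholders.
```
# Hedged sketch of a component-compatible Beam entry point (illustrative only).
import argparse
import logging

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument('--output', required=True, help='Output file path.')
    known_args, pipeline_args = parser.parse_known_args(argv)

    # --project, --temp_location and --staging_location stay in pipeline_args
    # and are consumed by the Dataflow runner through PipelineOptions.
    options = PipelineOptions(pipeline_args)
    with beam.Pipeline(options=options) as p:
        (p
         | 'Create' >> beam.Create(['hello', 'world', 'hello'])
         | 'PairWithOne' >> beam.Map(lambda word: (word, 1))
         | 'Count' >> beam.CombinePerKey(sum)
         | 'Format' >> beam.Map(lambda kv: '{}: {}'.format(kv[0], kv[1]))
         | 'Write' >> beam.io.WriteToText(known_args.output))


if __name__ == '__main__':
    # INFO logging lets the launcher component extract the Dataflow job ID.
    logging.getLogger().setLevel(logging.INFO)
    run()
```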
## Output
Name | Description
:--- | :----------
job_id | The ID of the Cloud Dataflow job that is created.
## Cautions & requirements
To use the components, the following requirements must be met:
- Cloud Dataflow API is enabled.
- The component is running under a secret Kubeflow user service account in a Kubeflow Pipeline cluster. For example:
```
component_op(...).apply(gcp.use_gcp_secret('user-gcp-sa'))
```
The Kubeflow user service account is a member of:
- `roles/dataflow.developer` role of the project.
- `roles/storage.objectViewer` role of the Cloud Storage Objects `python_file_path` and `requirements_file_path`.
- `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.
## Detailed description
The component does several things during the execution:
- Downloads `python_file_path` and `requirements_file_path` to local files.
- Starts a subprocess to launch the Python program.
- Monitors the logs produced from the subprocess to extract the Cloud Dataflow job information.
- Stores the Cloud Dataflow job information in `staging_dir` so the job can be resumed in case of failure.
- Waits for the job to finish.
The steps to use the component in a pipeline are:
1. Install the Kubeflow Pipelines SDK:
```
%%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
```
2. Load the component using KFP SDK
```
import kfp.components as comp
dataflow_python_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/74d8e592174ae90175f66c3c00ba76a835cfba6d/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
```
### Sample
Note: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template.
In this sample, we run a wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
```
!gsutil cat gs://ml-pipeline-playground/samples/dataflow/wc/wc.py
```
#### Set sample parameters
```
# Required Parameters
PROJECT_ID = '<Please put your project ID here>'
GCS_STAGING_DIR = 'gs://<Please put your GCS path here>' # No ending slash
# Optional Parameters
EXPERIMENT_NAME = 'Dataflow - Launch Python'
OUTPUT_FILE = '{}/wc/wordcount.out'.format(GCS_STAGING_DIR)
```
#### Example pipeline that uses the component
```
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Dataflow launch python pipeline',
description='Dataflow launch python pipeline'
)
def pipeline(
python_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/wc.py',
project_id = PROJECT_ID,
staging_dir = GCS_STAGING_DIR,
requirements_file_path = 'gs://ml-pipeline-playground/samples/dataflow/wc/requirements.txt',
args = json.dumps([
'--output', OUTPUT_FILE
]),
wait_interval = 30
):
dataflow_python_op(
python_file_path = python_file_path,
project_id = project_id,
staging_dir = staging_dir,
requirements_file_path = requirements_file_path,
args = args,
wait_interval = wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa'))
```
#### Compile the pipeline
```
pipeline_func = pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
```
#### Submit the pipeline for execution
```
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
```
#### Inspect the output
```
!gsutil cat $OUTPUT_FILE
```
## References
* [Component python code](https://github.com/kubeflow/pipelines/blob/master/component_sdk/python/kfp_component/google/dataflow/_launch_python.py)
* [Component docker file](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/Dockerfile)
* [Sample notebook](https://github.com/kubeflow/pipelines/blob/master/components/gcp/dataflow/launch_python/sample.ipynb)
* [Dataflow Python Quickstart](https://cloud.google.com/dataflow/docs/quickstarts/quickstart-python)
## License
By deploying or using this software you agree to comply with the [AI Hub Terms of Service](https://aihub.cloud.google.com/u/0/aihub-tos) and the [Google APIs Terms of Service](https://developers.google.com/terms/). To the extent of a direct conflict of terms, the AI Hub Terms of Service will control.
---
## All the Presidents' Ages
```
# Run this cell to set up the notebook, but please don't change it.
# These lines import the Numpy and Datascience modules.
import numpy as np
from datascience import *
# These lines do some fancy plotting magic
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
import warnings
warnings.simplefilter('ignore', FutureWarning)
```
As of this writing, the US has had 43 Presidents, and 38 are deceased. Let's figure out how long they lived.
First, a note. These exercises are designed to give you practice *computing with arrays*. Since there are only 38 Presidents, you could avoid using arrays by copying each computation 38 times. You wouldn't learn much, so don't do that.
Our data from [PresidentsUSA.net](http://www.presidentsusa.net/birth.html) tell us the birth and death date of each President. The cell below loads these data, along with the Presidents' names. (We've used a table for presentation purposes; you don't need to know about tables to do this exercise.)
Note that the Presidents are presented in order by *birth date*, so for example John F. Kennedy (President from 1961-1963) comes after Richard M. Nixon (President from 1969-1974) because Nixon was born earlier.
```
# Just run this cell.
presidents = Table.read_table("presidents.csv").select("Name", "Birth Year", "Death Year")
# This is an array of the birth years of the dead presidents. It's the data
# you see displayed in the "Birth Year" column below.
birth_years = presidents.column("Birth Year")
# This is an array of the death years of the dead presidents. It's the data
# you see displayed in the "Death Year" column below. The first element of
# this array describes the same president as the first element of birth_years,
# and so on.
death_years = presidents.column("Death Year")
presidents.show()
```
**Question 1.** Compute the number of years between each President's birth and death (their longevity). Put your answers in an array called `longevity`. The first item of `longevity` should be the longevity of the first president in the `presidents` table, and so on. Use the arrays `death_years` and `birth_years`, which are loaded in the cell above.
```
longevity = ...
# This piece of code puts your results into a table for better
# display. You can ignore it.
presidents.with_column("Longevity", longevity).show()
```
Below, we've plotted the longevity of each president, which you just computed.
```
# Just run this cell.
Table().with_columns(
"President number (by birth date)", np.arange(1, presidents.num_rows+1),
"longevity (years)", longevity)\
.scatter(0)
```
**Question 2.** Suppose each President were [still alive](http://futurama.wikia.com/wiki/Richard_M._Nixon's_head) in 2016. How old would each one be?
```
ages_today = ...
# This piece of code puts your results into a table for better
# display. You can ignore it.
presidents.with_column("Age Today", ages_today).show()
```
**Question 3.** A colleague points out that John Adams died at age 90, but your answer to Question 1 probably says that he lived 91 years! John Adams was born October 30, 1735, and died July 4, 1826. Explain the discrepancy.
*Write your answer here, replacing this text.*
Let's fix this. Below, we've loaded a more precise dataset. Instead of just birth year and death year, we also have the number of *days* that passed since January 1 of those years. If someone was born on the 200th day of the year and died on the 100th day of the year, then their birthday hadn't already passed, so we should decrease their longevity by 1.
```
# Just run this cell.
detailed_ages = Table.read_table("presidents.csv").select("Name", "Birth Year", "Days since January 1 at Birth", "Death Year", "Days since January 1 at Death")
birth_days = detailed_ages.column("Days since January 1 at Birth")
death_days = detailed_ages.column("Days since January 1 at Death")
detailed_ages
```
**Question 4.** For each President, compute how many more days passed before their death in their year of death than before their birth in their year of birth. For example, that number for George Washington is 295, and for John Adams it's -118. We'll call this number the "net additional life days."
```
net_additional_life_days = death_days - birth_days
# This piece of code puts your results into a table for better
# display. You can ignore it.
detailed_ages.with_column("Net Additional Life Days", net_additional_life_days)
```
To get each President's actual age at death, we should subtract 1 from the longevity of Presidents whose net additional life days are negative. One way to do this is:
* Divide each net additional life day amount by 366 to get a fraction of a year.
- Round each fraction down to the nearest integer, using the function `np.floor`. (`np.floor` takes as its argument an array of numbers. It returns an array of those numbers rounded down to the nearest integer; see the short example after this list.)
* Add the result to each President's longevity.
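Here is a small illustration of that `np.floor` step, using the two net additional life day values quoted in Question 4 (295 for George Washington and -118 for John Adams). It only demonstrates the rounding behavior; it is not the answer to Question 5.
```
# np.floor rounds toward negative infinity, so a negative fraction of a year
# becomes -1 while a non-negative fraction becomes 0.
import numpy as np
print(np.floor(np.array([295, -118]) / 366))   # [ 0. -1.]
```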
**Question 5.** Compute each President's actual longevity by following the steps above.
*Hint 1:* Use the arrays you've already calculated in previous questions.
*Hint 2:* Our answer uses a single line with a compound expression, but you may find it simpler to perform each of the three steps on its own line, giving a name to each intermediate result so you can use it on the next line.
```
true_longevity = ...
# This piece of code puts your results into a table for better
# display. You can ignore it.
detailed_ages.with_column("True Longevity", true_longevity).show()
```
---
## Import Libraries and Process the Data
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset_training = pd.read_csv('MSFT_train.csv')
dataset_training.head()
training_data = dataset_training.iloc[:, 1:2].values
training_data
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler(feature_range = (0, 1))
training_data_scaled = sc.fit_transform(training_data)
training_data_scaled
```
## Create Data Time Stamps & Reshape the Data
```
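# Build sliding windows: each training sample is the previous 60 scaled prices and the target is the price that follows.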
X_train = []
y_train = []
for i in range(60, 1258):
X_train.append(training_data_scaled[i-60:i, 0])
y_train.append(training_data_scaled[i, 0])
X_train, y_train = np.array(X_train), np.array(y_train)
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
X_train
```
## Create & Compile an RNN Architecture
```
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
model = Sequential()
model.add(LSTM(units = 50, return_sequences = True, input_shape = (X_train.shape[1], 1)))
# Adding a second LSTM layer (note: Dropout is imported but not applied in this configuration)
model.add(LSTM(units = 50, return_sequences = True))
# Adding a third LSTM layer
model.add(LSTM(units = 50, return_sequences = True))
# Adding a fourth LSTM layer
model.add(LSTM(units = 50))
# Adding the output layer
model.add(Dense(units = 1))
# Compiling the RNN
model.compile(optimizer = 'adam', loss = 'mean_squared_error')
# Fitting the RNN to the Training set
model.fit(X_train, y_train, epochs = 100, batch_size = 32)
```
## Prepare the Test Data and Concatenate the Test & Train Datasets
```
dataset_testing = pd.read_csv('MSFT_test.csv')
actual_stock_price = dataset_testing.iloc[:, 1:2].values
actual_stock_price
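# Concatenate the train and test 'Open' columns so each test window can look back 60 days across the train/test boundary.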
total_data = pd.concat((dataset_training['Open'], dataset_testing['Open']), axis = 0)
inputs = total_data[len(total_data) - len(dataset_testing) - 60:].values
inputs = inputs.reshape(-1,1)
inputs = sc.transform(inputs)
X_test = []
for i in range(60, 81):
X_test.append(inputs[i-60:i, 0])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
predicted_stock_price = model.predict(X_test)
predicted_stock_price = sc.inverse_transform(predicted_stock_price)
```
## Visualize the Results
```
# Visualising the results
plt.plot(actual_stock_price, color = 'green', label = 'Real Microsoft Stock Price',ls='--')
plt.plot(predicted_stock_price, color = 'red', label = 'Predicted Microsoft Stock Price',ls='-')
plt.title('Real vs Predicted Microsoft Stock Price')
plt.xlabel('Time in days')
plt.ylabel('Stock Price')
plt.legend()
plt.show()
```
---
# <font color='blue'>UNINOVE - Data Science</font>
## Topic 11 - Python: Scientific Computing with SciPy
```
# Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())
```
<b>SciPy</b> is an open-source set of tools used mainly for high-performance scientific computing. The basic packages of the SciPy installation are: <i>NumPy, Matplotlib, Pandas, SymPy, Nose, IPython and SciPy</i>.
https://scipy.org
### Inverse Matrix
```
# Import the NumPy and SciPy libraries
import numpy as np
```
https://scipy.github.io/devdocs/reference/linalg.html
```
from scipy import linalg
# Define the matrix
A = np.array([[3, 0, 2],[9, 1, 7],[1, 0, 1]])
# Compute the inverse matrix
Ainversa = linalg.inv(A)
print("Matrix A")
print(A)
print("Inverse of matrix A")
print(Ainversa)
```
We can now do a sanity check, i.e., verify that multiplying matrix A by its inverse really yields the identity matrix.
```
# Import libraries
import numpy as np
from scipy import linalg
# matrix
A = np.array([[3, 0, 2],[9, 1, 7],[1, 0, 1]])
# inverse matrix
Ainversa = linalg.inv(A)
# identity matrix
B = A.dot(Ainversa)
print(B)
```
### System of Linear Equations
Now suppose we have the following equations:
x + y + z = 6
8x + 3y - z = 8
2x - 3y + z = 12
```
# Import libraries
import numpy as np
from scipy import linalg
# Define the matrices
A = np.array([[1, 1, 1],[8, 3, -1],[2, -3, 1]])
B = np.array([[6],[8],[12]])
# Inverse matrix
Ainversa = linalg.inv(A)
# Solution vector (A inverse times B)
C = Ainversa.dot(B)
# Print the computed values
print(C)
print("Value of variable x:", C[0][0])
print("Value of variable y:", C[1][0])
print("Value of variable z:", C[2][0])
```
Another way to solve the system is to use the <i>solve</i> function.
https://scipy.github.io/devdocs/reference/generated/scipy.linalg.solve.html
```
# Import libraries
import numpy as np
from scipy import linalg
# Define the matrices
A = np.array([[1, 1, 1],[8, 3, -1],[2, -3, 1]])
B = np.array([[6],[8],[12]])
# Solve for the variables of the system
C = np.linalg.solve(A, B)
print(C)
print("Value of variable x:", C[0][0])
print("Value of variable y:", C[1][0])
print("Value of variable z:", C[2][0])
```
Note that there is a small difference between the values due to rounding. If necessary, we can use the <i>round</i> function to round decimal values to the desired number of places.
Syntax: <i>round(value_to_round, number_of_decimal_places)</i>
https://docs.python.org/3/library/functions.html#round
```
# Import libraries
import numpy as np
from scipy import linalg
# Define the matrices
A = np.array([[1, 1, 1],[8, 3, -1],[2, -3, 1]])
B = np.array([[6],[8],[12]])
# Solve for the variables of the system
C = np.linalg.solve(A, B)
print(C)
# Rounded results
print("Value of variable x:", round(C[0][0],2))
print("Value of variable y:", round(C[1][0],2))
print("Value of variable z:", round(C[2][0],2))
```
### Determinant
Used very frequently in linear algebra. It applies to square matrices, that is, matrices with the same number of rows and columns.
```
# Import libraries
import numpy as np
from scipy import linalg
# Define the matrices
A = np.array([[8]])
B = np.array([[4,2],[3,3]])
C = np.array([[1,4,2],[1,3,3],[2,6,1]])
```
https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.det.html
```
# 1x1 matrix
print("Matrix A")
print(A)
# Compute the determinant
print("Determinant of A")
Res = round(np.linalg.det(A),2)
print(Res)
# 2x2 matrix
print("Matrix B")
print(B)
print("Determinant of B")
Res = round(np.linalg.det(B),2)
print(Res)
# 3x3 matrix
print("Matrix C")
print(C)
print("Determinant of C")
Res = round(np.linalg.det(C),2)
print(Res)
```
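For consistency with the scipy.linalg documentation linked above, the same determinants can also be computed with <i>scipy.linalg.det</i>. The cell below is an added illustration (not part of the original notebook) and assumes the matrices A, B and C defined above.
```
# Same determinants using SciPy's linalg module (assumes A, B and C defined above)
from scipy import linalg
print("det(A) =", round(linalg.det(A), 2))
print("det(B) =", round(linalg.det(B), 2))
print("det(C) =", round(linalg.det(C), 2))
```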
# Object Detection
Object detection is an important computer vision task used to detect instances of visual objects of certain classes (for example, humans, animals, cars, or buildings) in digital images such as photos or video frames. The goal of object detection is to develop computational models that provide the most fundamental information needed by computer vision applications: "What objects are where?"
Object detection is one of the fundamental problems of computer vision. It forms the basis of many other downstream computer vision tasks, for example, instance segmentation, image captioning, object tracking, and more. Specific object detection applications include pedestrian detection, people counting, face detection, text detection, pose detection, or number-plate recognition.
### Milestones in state-of-the-art Object Detection
The field of object detection is not as new as it may seem. In fact, object detection has evolved over the past 20 years. The progress of object detection is usually separated into two separate historical periods (before and after the introduction of Deep Learning):
Before 2014 – Traditional Object Detection period
+ Viola-Jones Detector (2001), the pioneering work that started the development of traditional object detection methods
+ HOG Detector (2006), a popular feature descriptor for object detection in computer vision and image processing
+ DPM (2008) with the first introduction of bounding box regression
After 2014 – Deep Learning Detection period
Most important two-stage object detection algorithms
+ RCNN and SPPNet (2014)
+ Fast RCNN and Faster RCNN (2015)
+ Mask R-CNN (2017)
+ Pyramid Networks/FPN (2017)
+ G-RCNN (2021)
Most important one-stage object detection algorithms
+ YOLO (2016)
+ SSD (2016)
+ RetinaNet (2017)
+ YOLOv3 (2018)
+ YOLOv4 (2020)
+ YOLOR (2021)
To understand which algorithm is the best for a given use case, it is important to understand the main characteristics. First, we will look into the key differences of the relevant image recognition algorithms for object detection before discussing the individual algorithms.
### One-stage vs. two-stage deep learning object detectors
As you can see in the list above, the state-of-the-art object detection methods can be categorized into two main types: One-stage vs. two-stage object detectors.
In general, deep learning based object detectors extract features from the input image or video frame. An object detector solves two subsequent tasks:
+ Task #1: Find an arbitrary number of objects (possibly even zero), and
+ Task #2: Classify every single object and estimate its size with a bounding box.
To simplify the process, you can separate those tasks into two stages. Other methods combine both tasks into one step (single-stage detectors) to achieve higher inference speed at the cost of some accuracy.
**Two-stage detectors:** In two-stage object detectors, the approximate object regions are proposed using deep features before these features are used for the classification as well as bounding box regression for the object candidate.
The two-stage architecture involves (1) object region proposal with conventional Computer Vision methods or deep networks, followed by (2) object classification based on features extracted from the proposed region with bounding-box regression.
Two-stage methods achieve the highest detection accuracy but are typically slower. Because of the many inference steps per image, the performance (frames per second) is not as good as one-stage detectors.
Various two-stage detectors include region convolutional neural network (RCNN), with evolutions Faster R-CNN or Mask R-CNN. The latest evolution is the granulated RCNN (G-RCNN).
Two-stage object detectors first find a region of interest and use this cropped region for classification. However, such multi-stage detectors are usually not end-to-end trainable because cropping is a non-differentiable operation.
**One-stage detectors:** One-stage detectors predict bounding boxes over the images without the region proposal step. This process consumes less time and can therefore be used in real-time applications.
One-stage object detectors prioritize inference speed and are super fast but not as good at recognizing irregularly shaped objects or a group of small objects.
The most popular one-stage detectors include YOLO, SSD, and RetinaNet. The latest real-time detectors are Scaled-YOLOv4 (2020) and YOLOR (2021). The main advantage of single-stage detectors is that they are generally faster than multi-stage detectors and structurally simpler.
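To make the two-stage pipeline concrete, the sketch below runs a pretrained Faster R-CNN from torchvision on a single image. This is an illustrative addition rather than part of the discussion above: it assumes PyTorch, torchvision and Pillow are installed, that a local file named `image.jpg` exists, and that your torchvision version still accepts `pretrained=True` (newer releases use a `weights=` argument instead).
```
# Minimal two-stage detector inference sketch (assumptions noted above)
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Faster R-CNN pretrained on COCO: region proposals followed by classification + box regression
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

img = to_tensor(Image.open("image.jpg").convert("RGB"))  # 3xHxW tensor with values in [0, 1]
with torch.no_grad():
    predictions = model([img])  # one dict per input image

# Each dict contains 'boxes' (N x 4), 'labels' (N,) and 'scores' (N,); keep confident detections
keep = predictions[0]["scores"] > 0.5
print(predictions[0]["boxes"][keep])
print(predictions[0]["labels"][keep])
```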
## 1. Short Answer
1. **False**: Mean-variance optimization uses the covariance matrix and the mean of the returns to calculate the optimal weights; it does not take the Sharpe ratio as an input.
2. **True**: An LETF may be held for a long time due to the short period over which its variation is defined.
3. I suggest estimating the regression with an intercept. In short samples the mean returns may be estimated inaccurately, so we may want to include an intercept and focus on explaining the variation.
4. HDG is effective at tracking HFRI in-sample, but out of sample HDG may not be effective because the strategies used by hedge funds may not be captured and the estimated betas may no longer hold.
5. The "high alpha" claimed by the hedge fund may come from a regression with only MKT as a factor. When the hedge fund returns are regressed on additional factors, the alpha may turn negative because those factors absorb the original alpha. That is why there is a discrepancy.
## 2. Allocation
```
import pandas as pd
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt
import seaborn as sns
rets = pd.read_excel('proshares_analysis_data.xlsx', index_col = 0, sheet_name = 'merrill_factors')
retsx = rets.subtract(rets['USGG3M Index'], axis = 0)
retsx = retsx.drop(columns = ['USGG3M Index'])
```
### 2.1.
```
def tangency_weights(returns,dropna=True,scale_cov=1):
if dropna:
returns = returns.dropna()
covmat_full = returns.cov()
covmat_diag = np.diag(np.diag(covmat_full))
covmat = scale_cov * covmat_full + (1-scale_cov) * covmat_diag
weights = np.linalg.solve(covmat,returns.mean())
weights = weights / weights.sum()
return pd.DataFrame(weights, index=returns.columns)
wts = pd.DataFrame(index=retsx.columns)
wts['tangency'] = tangency_weights(retsx)
display(wts)
```
### 2.2.
**Solution**: Yes, the optimal portfolio shorts the risk-free asset (i.e., borrows) by about 15.76% of the portfolio.
```
target_mean = .02
mu_tan = retsx.mean() @ wts['tangency'] # 1 * 1
delta = target_mean / mu_tan # 1 * 1
wts['optimal'] = wts['tangency'] * delta
display(wts)
wts_rf = 1 - wts['optimal'].sum()
print('The weights of investment in the risk free asset is: ' + str(round(wts_rf,4)))
```
### 2.3.
```
def performanceMetrics(returns,annualization=1, quantile=.05):
metrics = pd.DataFrame(index=returns.columns)
metrics['Mean'] = returns.mean() * annualization
metrics['Vol'] = returns.std() * np.sqrt(annualization)
metrics['Sharpe'] = (returns.mean() / returns.std()) * np.sqrt(annualization)
return metrics
res_optimal = retsx @ wts['optimal']
ans3 = performanceMetrics(res_optimal.to_frame(), 12)
ans3.index = ['optimized portfolio']
display(ans3)
```
### 2.4.
```
retsx_IS = retsx.loc[:'2018']
retsx_OOS = retsx.loc['2019':]
wts_IS = tangency_weights(retsx.loc[:'2018'])
wts_IS.columns = ['tangency']
target_mean = .02
mu_tan = retsx_IS.mean() @ wts['tangency'] # 1 * 1
delta = target_mean / mu_tan # 1 * 1
wts_IS['optimal'] = wts_IS['tangency'] * delta
display(wts_IS)
res_optimal_OOS = retsx_OOS @ wts_IS['optimal']
ans4 = performanceMetrics(res_optimal_OOS.to_frame(), 12)
ans4.index = ['optimized portfolio_OOS']
display(ans4)
```
### 2.5.
**Solution**: I think the out-of-sample fragility problem would be worse. Commodity futures are all a single asset type, so their covariance matrix can vary substantially over time. When moving out of sample, the covariance may therefore change a lot, and the fragility would be greater than for the risky assets we analyzed above.
## 3. Hedging & Replication
```
y = retsx['EEM US Equity']
X = retsx['SPY US Equity']
static_model = sm.OLS(y,X).fit()
```
### 3.1.
**Solution**: The optimal hedge ratio over the full sample is 0.9257. That is, for every dollar held in EEM, about 0.93 dollars of SPY would be shorted as the hedge.
```
beta = static_model.params
beta
```
### 3.2.
**Solution**: Because the hedged position has a negative mean and Sharpe ratio, we could not apply that hedge throughout the full sample.
```
eem_new = y - beta[0] * X
ans32 = performanceMetrics(eem_new.to_frame(), 12)
ans32.index = ['EEM_new']
display(ans32)
```
### 3.3.
**Solution**: They do not have the same mean. Because the hedge does not include an intercept, the hedge tries to explain the total return (including the mean), so the two means are not the same.
```
eem_new_mean = eem_new.mean()
eem_mean = y.mean()
print('EEM mean is:' + str(round(eem_mean,4)))
print('EEM_new mean is:' + str(round(eem_new_mean,4)))
```
### 3.4.
**Solution**: Replicating EEM with SPY and IWM will be difficult, because the regression R-squared is only 0.527.
```
y_ = retsx['EEM US Equity']
X_ = retsx.loc[:,['SPY US Equity', 'IWM US Equity']]
static_model_ = sm.OLS(y_,X_).fit()
static_model_.summary()
```
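As a rough follow-up (an addition, not part of the graded answer), one way to judge how well SPY and IWM replicate EEM is to look at the fitted values and residual of the regression above; the sketch assumes `static_model_`, `y_` and `X_` from the previous cell.
```
# Tracking quality of the SPY + IWM replication (assumes static_model_, y_ and X_ from above)
replication = static_model_.fittedvalues
residual = y_ - replication
print('Annualized tracking error:', residual.std() * (12 ** 0.5))
print('Correlation with EEM:', replication.corr(y_))
```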
## 4. Modeling Risk
### 4.1.
<span style="color:#00008B"> $$ p(h) = Pr\left[R^{EFA}_{t,t+h} < R^{SPY}_{t,t+h}\right] $$ </span>
<span style="color:#00008B"> $$ = Pr\left[\text{r}^{EFA}_{t,t+h} < \text{r}^{SPY}_{t,t+h}\right] $$ </span>
<span style="color:#00008B"> $$ = Pr\left[ \sum_{i=1}^h \text{r}^{EFA}_{t+i} < \sum_{i=1}^h \text{r}^{SPY}_{t+i} \right] $$ </span>
<span style="color:#00008B"> $$ = Pr\left[ \overline{\text{r}}^{EFA}_{t,t+h} < \overline{\text{r}}^{SPY}_{t,t+h} \right] $$ </span>
<span style="color:#00008B"> $$ = Pr\left[ \overline{\text{r}}^{EFA}_{t,t+h} - \overline{\text{r}}^{SPY}_{t,t+h} < 0 \right] $$ </span>
**Solution**: Over the next 10 years, the estimated probability that SPY will outperform EFA is 83.45%.
```
import scipy.stats  # needed by the p() function below
ret_sub = rets['EFA US Equity'] - rets['SPY US Equity']
tilde_mu = ret_sub.mean()
tilde_sigma = ret_sub.std()
table4 = pd.DataFrame(columns=['h', 'tilde_mu_hat'])
table4['h'] = [5, 10, 15, 20, 25, 30]
table4 = table4.set_index('h')
def p(h, tilde_mu=0.525, tilde_sigma=0.150):
x = - np.sqrt(h) * tilde_mu / tilde_sigma
val = scipy.stats.norm.cdf(x)
return val
table4['tilde_mu_hat'] = p(table4.index, tilde_mu=tilde_mu, tilde_sigma=tilde_sigma)
table4.T.style.set_caption('Solution Table 4.1: Shortfall probability estimates ')
```
### 4.2.
**Solution**: The estimated VaR is 0.035.
```
def rms(x):
    return ((x**2).sum()/len(x))**(0.5)
sigma_roll = rets['EFA US Equity'].shift(1).dropna().rolling(60).apply(rms)
sigma_roll
import scipy.stats
var = sigma_roll[-1] * scipy.stats.norm.cdf(0.99)
var
```
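For reference only (an addition, not the answer above): another common parametric convention estimates the 99% VaR from the rolling volatility using the normal quantile (inverse CDF) rather than the CDF. The sketch assumes `sigma_roll` from the previous cell.
```
# Alternative parametric convention: 99% VaR via the normal quantile (assumes sigma_roll from above)
import scipy.stats
var_quantile = scipy.stats.norm.ppf(0.01) * sigma_roll.iloc[-1]
var_quantile
```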
# Google Play Store Data EDA
## Dataset Introduction
# Google Play Store Apps
The data we used was collected from the Google Play Store and is publicly available on Kaggle.
https://www.kaggle.com/lava18/google-play-store-apps
This dataset has:
- 13 feature columns in total
- 9,660 unique values
- 10,842 rows in total
- CSV file
- 12 of the 13 feature columns are of object type (all but one)
# Data Preprocessing
1. Check for missing values
Missing values exist in the Rating and Version columns
- Rating: 1,473 rows contain NaN values
=> All filled with 0 (judged to be services whose users withheld ratings, based on the Installs information)
* Number of Installs (mean where Rating is 0 / overall mean / overall median): (4,095 / 15.46 million / 100 thousand)
- Version: 8 rows missing Current Ver / 2 rows missing Android Ver
=> All filled with 0
2. Remove outliers
Removed 2 outlier rows
- Total number of rows: 10,841 -> 10,839
3. Change the data type of each column
Size, Reviews, Installs, Price, Rating: converted to numeric
4. Add columns
- Created log-transformed columns: Installs_log, Reviews_log
- Reason: narrowing the numeric range while keeping the data's trend makes the analysis easier
# Column-by-Column Data Analysis
### 1. Category
### Top 10 App Categories
```
plt.figure(figsize=(15,6))
sns.barplot(x=category.index[:10], y ='Count',data = category[:10],palette='hls')
plt.title('Top 10 App categories')
plt.xticks(rotation=90)
plt.show()
```
### Finding
1) Largest categories: Family (18%), Game (11%)
2) Smallest categories: Beauty (1%), Comics (1%)
### 2. Rating
```
# Count of apps per Rating value
plt.subplots(figsize=(10,10))
plt.xticks(rotation=90)
ax = sns.countplot(x="Rating", data=df, palette="Set3")
```
### Finding
1) Share of apps rated 4 or higher: 67.97%
### 3. Reviews
```
#histogram
plt.figure(figsize=(10,5))
sns.distplot(df['Reviews_log'],color='g')
```
### Findings
1) Review counts range up to about 100 million
2) The app with the most reviews is Facebook (about 150 million)
### 4. Installs
```
print(df['Installs_log'].describe())
plt.figure(figsize=(9, 8))
sns.distplot(df['Installs_log'], color='g', bins=10, hist_kws={'alpha': 0.4});
```
### Finding
1) Share of apps with at least 1 million installs: 14.57%; share with at least 10 million installs: 11.55%
2) Average number of installs: about 15 million
3) Maximum number of installs: 1 billion
4) Minimum number of installs: 0
### 5. Price: analysis excluding free apps (790 paid apps)
```
plt.figure(figsize=(8,6))
plt.title('Distribution of Paid App Prices')
sns.distplot(paid_apps['Price'],bins=50)
plt.show()
paid_apps[paid_apps['Price'] >= 350]
```
### Finding
1) Share of paid apps priced at $10 or less: 89%
2) Apps priced at $350 or more: 16 apps are priced at $350 or more, and at least 8 of them recorded more than 10,000 downloads
# Correlation Analysis Between Columns
1) Reviews_log - Installs_log
2) Reviews_log - Rating
```
df.corr()['Reviews_log']
```
## 1. Analysis of Reviews_log and Installs_log
1) Correlation analysis based on the log-transformed data
2) Analysis against all features, including the target Installs column
```
# joint scatter plot
sns.jointplot(x="Installs_log", y="Reviews_log", data=df, kind='reg')
df.corr()['Reviews_log']
```
### Finding
Features correlated with the log-transformed Reviews count: Installs_log and Rating
## 2. Linear Analysis of Rating and Reviews_log
1) Correlation analysis based on the log-transformed data
2) Further exploration of the features that show a correlation with Reviews
```
j = sns.jointplot(x="Reviews_log", y="Rating", data=df, kind='reg')
j.annotate(stats.pearsonr)
plt.show()
```
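To complement the joint plot, the Pearson correlation can also be computed directly. The cell below is an added illustration (not part of the original notebook) and assumes the `df` with the `Reviews_log` and `Rating` columns used above.
```
# Direct Pearson correlation between Reviews_log and Rating (assumes df from above)
from scipy import stats
r, p_value = stats.pearsonr(df['Reviews_log'], df['Rating'])
print('Pearson r:', round(r, 3), 'p-value:', p_value)
```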
# Review Data Analysis
1) Confirmed that Reviews is a feature with a meaningful correlation with Installs and Rating
2) Accordingly, carried out an additional analysis using the separate review user data
3) Analyzed the review data of the apps with the most installs
### Top 10 rows after sorting by Installs_log and Reviews_log
```
df1 = df1.sort_values(by=['Installs_log', 'Reviews_log'], ascending=False)
df1.head(10)
```
### Top 10 apps by Installs and Reviews
```
df_or.iloc[[2544, 3943, 336, 381, 3904, 2604, 2545, 2611, 3909, 382]].App
```
### Review data analysis for Facebook, the app with the most installs and reviews
- Analysis focused on sentiment
```
df_facebook = df_r.loc[df_r["App"] == 'Facebook']
df_facebook_f = df_facebook.sort_values(by='Sentiment_Polarity', ascending=False)
df_facebook_f.head(10)
```
### Sentiment distribution of Facebook reviews
```
df_facebook["Sentiment"].value_counts().plot.pie(label='Sentiment', autopct='%1.0f%%', figsize=(2, 2))
```
### Reviews for WhatsApp and Instagram could not be analyzed because the review data is missing
- Reason: missing data (no review data for apps whose names come after 'I')
# Conclusion
1) Found that a large number of reviews is important for achieving high Installs and Rating
2) Price: the importance of Price turned out to be relatively low (no feature has an absolute correlation coefficient above 0.1 with it)
3) Category: the catalog is concentrated in a few specific categories (contrary to expectations, Social has a small share)
4) Rating: nearly 90% of apps have ratings around 4 points, which raises doubts about how objective the ratings are
```
import pandas as pd
import numpy as np
root = "data/"
ratings_list = [i.strip().split("::") for i in open(root+'ml-1m/ratings.dat', 'r').readlines()]
users_list = [i.strip().split("::") for i in open(root+'ml-1m/users.dat', 'r').readlines()]
movies_list = [i.strip().split("::") for i in open(root+'ml-1m/movies.dat', 'r').readlines()]
ratings_df = pd.DataFrame(ratings_list, columns = ['UserID', 'MovieID', 'Rating', 'Timestamp'], dtype = int)
movies_df = pd.DataFrame(movies_list, columns = ['MovieID', 'Title', 'Genres'])
movies_df['MovieID'] = movies_df['MovieID'].apply(pd.to_numeric)
movies_df.head()
ratings_df.head()
df = ratings_df.astype(int)
df.head()
R_df = df.pivot(index="UserID",columns="MovieID",values='Rating').fillna(0)
R_df.head()
R = R_df.to_numpy()
R = R.astype(int)  # use the built-in int (np.int is deprecated in recent NumPy versions)
user_ratings_mean = np.mean(R, axis = 1)
R_demeaned = R - user_ratings_mean.reshape(-1, 1)
from scipy.sparse.linalg import svds
U, sigma, Vt = svds(R_demeaned, k = 50)
sigma = np.diag(sigma)
all_user_predicted_ratings = np.dot(np.dot(U, sigma), Vt) + user_ratings_mean.reshape(-1, 1)
preds_df = pd.DataFrame(all_user_predicted_ratings, columns = R_df.columns)
preds_df.head()
def recommend_movies(predictions_df, userID, movies_df, original_ratings_df, num_recommendations=5):
    # Get and sort the user's predictions (UserID starts at 1, row positions at 0)
    user_row_number = userID - 1
    sorted_user_predictions = predictions_df.iloc[user_row_number].sort_values(ascending=False)
    # Get the user's data and merge in the movie information.
    user_data = original_ratings_df[original_ratings_df.UserID == userID]
    user_full = (user_data.merge(movies_df, how = 'left', left_on = 'MovieID', right_on = 'MovieID').
                     sort_values(['Rating'], ascending=False)
                 )
    print('The user with UserID ' + str(userID) + ' has already rated ' + str(user_full.shape[0]) + ' movies.')
    print('Recommending the ' + str(num_recommendations) + ' movies with the highest predicted ratings that the user has not already rated.')
    # Recommend the highest predicted rating movies that the user hasn't seen yet.
    recommendations = (movies_df[~movies_df['MovieID'].isin(user_full['MovieID'])].
        merge(pd.DataFrame(sorted_user_predictions).reset_index(), how = 'left',
              left_on = 'MovieID',
              right_on = 'MovieID').
        rename(columns = {user_row_number: 'Predictions'}).
        sort_values('Predictions', ascending = False).
        iloc[:num_recommendations, :-1]
        )
    return user_full, recommendations
already_rated, predictions = recommend_movies(preds_df, 24, movies_df, df, 10)
print("\nAlready rated")
display(already_rated.head(10))
print("\nRecommended movies")
display(predictions)
```
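As a quick sanity check (an addition, not part of the original notebook), the SVD reconstruction can be compared against the ratings that are actually observed; the sketch assumes `R` and `all_user_predicted_ratings` from the cells above.
```
# RMSE of the SVD reconstruction, evaluated only on the observed (non-zero) ratings
mask = R > 0
rmse = np.sqrt(np.mean((all_user_predicted_ratings[mask] - R[mask]) ** 2))
print('Reconstruction RMSE on rated entries:', round(rmse, 4))
```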
```
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [10, 5]
```
# Few helpful definitions
- **Prior** probability is a distribution over the parameters of data distribution $\mathbb{P}(\theta)$
- **Likelihood** is the probability model of data we are considering $\mathbb{P}(X | \theta)$
- **Posterior** probability is a distribution over the parameter of a distribution given data provided
$\mathbb{P}(\theta | X) $
**Inference** is done using a simple Bayes rule:
$$
\mathbb{P}(\theta | X) = \frac{\mathbb{P}(X | \theta) \mathbb{P}(\theta)}{
\int_{\Theta} \mathbb{P}(X|\vartheta) \mathbb{P}(\vartheta) d\vartheta
}
$$
```
# In the meantime I'll define thin wrappers around the probability distributions
class Bernoulli:
def __init__(self, p):
self.p = p
def sample(self, size=1):
return stats.bernoulli.rvs(p=self.p, size=size)
class Uniform:
def __init__(self, start, end):
self.start = start
self.end = end
def sample(self, size=1):
return stats.uniform.rvs(loc=self.start, scale=self.end-self.start, size=size)
def pdf(self, x):
return stats.uniform.pdf(x, loc=self.start, scale=self.end-self.start)
def mean(self):
return stats.uniform.mean(loc=self.start, scale=self.end-self.start)
class Beta:
def __init__(self, alpha, beta):
self.alpha = alpha
self.beta = beta
def pdf(self, X):
return stats.beta.pdf(X, a=self.alpha, b=self.beta)
def mean(self):
return stats.beta.mean(a=self.alpha, b=self.beta)
class Normal:
def __init__(self, mu, sigma):
self.mu = mu
self.sigma = sigma
def pdf(self, X):
return stats.norm.pdf(X, loc=self.mu, scale=self.sigma)
def sample(self, size=1):
return stats.norm.rvs(loc=self.mu, scale=self.sigma, size=size)
def mean(self):
return self.mu
```
# Concrete example - discrete case
Let's consider a simple example, where:
- Prior $\mathbb{P}(\theta) \sim U(0, 1)$
- Likelihood $\mathbb{P}(X | \theta) \sim B(\theta)$
```
N = 100
Prior = Uniform(0, 1)
hidden_theta = Prior.sample()[0]
hidden_theta
Likelihood = Bernoulli(hidden_theta)
X = Likelihood.sample(size=N)
fig, axs = plt.subplots(1, 1)
axs.set_title("X histogram")
color = next(axs._get_lines.prop_cycler)["color"]
axs.hist(X, density=True, color=color, alpha=0.3)
axs.hist(X, density=True, color=color, edgecolor=color, fc="None", lw=1)
None
```
If we evaluate the posterior pdf analytically, we can see that it is a **beta** distribution; the beta family is the **conjugate prior** of the Bernoulli distribution, and the uniform prior used here is the special case $\textrm{Beta}(1, 1)$.
If we define two helper variables for this problem
- Number of successes $s = \sum_i x_i$
- Number of failures $p = \sum_i (1-x_i)$
Then the posterior pdf can be written as:
$$
\mathbb{P}(\theta | X)
=
\frac{ \prod_i \theta^{x_i} (1 - \theta)^{1 - x_i}}{
\int_\Theta
\prod_i \vartheta^{x_i} (1 - \vartheta)^{1 - x_i} \, d\vartheta
}
=
\frac{ \theta^s (1-\theta)^p}{
\int_\Theta
\vartheta^s (1-\vartheta)^p \, d\vartheta
}
=
\frac{ \theta^s (1-\theta)^p}{
B(s + 1, p + 1)
}
\sim
\textrm{Beta}(s + 1, p + 1)
$$
where $B(\cdot, \cdot)$ denotes the Beta function.
```
Posterior = Beta(X.sum() + 1, (1-X).sum() + 1)
successes = X.sum()
failures = (1-X).sum()
hidden_theta
mle = successes / (successes + failures) # In other words, mode of a distribution
mle
fig, axs = plt.subplots(1, 1)
axs.set_title("Prior vs Posterior")
support = np.linspace(0.0, 1.0, 100)
axs.plot(support, Prior.pdf(support), label="Prior")
axs.fill_between(support, 0, Prior.pdf(support), alpha=0.2)
axs.plot(support, Posterior.pdf(support), label="Posterior")
axs.fill_between(support, 0, Posterior.pdf(support), alpha=0.2)
axs.axvline(hidden_theta, color='red', linestyle='--', label="True paramter value")
axs.axvline(mle, color='blue', linestyle='--', label="Maximum likelihood estimate")
axs.legend()
None
```
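As a numerical cross-check (an addition to the notebook), the same posterior can be computed directly from the Bayes rule on a grid of $\theta$ values and compared with the analytic Beta posterior; the grid size and trapezoidal normalization below are assumptions of this sketch.
```
# Grid approximation of the posterior via Bayes' rule (assumes X, Prior and Posterior from above)
theta_grid = np.linspace(1e-4, 1 - 1e-4, 500)
likelihood = theta_grid ** X.sum() * (1 - theta_grid) ** (1 - X).sum()
unnormalized = likelihood * Prior.pdf(theta_grid)
posterior_grid = unnormalized / np.trapz(unnormalized, theta_grid)
# Largest pointwise difference from the analytic Beta posterior; should be small
print(np.max(np.abs(posterior_grid - Posterior.pdf(theta_grid))))
```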
# Second example - continuous case
- Prior $\mathbb{P}(\theta) \sim N(0, 1)$
- Likelihood $\mathbb{P}(X | \theta) \sim N(\theta, 1)$
```
N = 100
Prior = Normal(0, 1)
hidden_theta = Prior.sample()[0]
hidden_theta
Likelihood = Normal(hidden_theta, 1)
X = Likelihood.sample(N)
fig, axs = plt.subplots(1, 1)
axs.set_title("X histogram")
color = next(axs._get_lines.prop_cycler)["color"]
axs.hist(X, density=True, color=color, alpha=0.3)
axs.hist(X, density=True, color=color, edgecolor=color, fc="None", lw=1)
None
```
After doing some algebra, we can find that the posterior distribution is a normal distribution with parameters:
- $\mu = \frac{\sum_i x_i}{n+1}$
- $\sigma = \frac{1}{\sqrt{n+1}}$
```
Posterior = Normal(X.sum() / (X.size + 1), 1.0 / np.sqrt(X.size + 1))
hidden_theta
mle = Posterior.mean()
mle
```
Here the code labels the posterior mean as the MLE; for a normal posterior the mean and mode (MAP estimate) coincide, and since it equals the sample mean scaled by $n/(n+1)$ it approaches the maximum likelihood estimate as $n$ grows.
```
fig, axs = plt.subplots(1, 1)
axs.set_title("Prior vs Posterior")
support = np.linspace(-4, 4, 1000)
axs.plot(support, Prior.pdf(support), label="Prior")
axs.fill_between(support, 0, Prior.pdf(support), alpha=0.2)
axs.plot(support, np.minimum(Posterior.pdf(support), 2.0), label="Posterior")
axs.fill_between(support, 0, np.minimum(Posterior.pdf(support), 2.0), alpha=0.2)
axs.axvline(hidden_theta, color='red', linestyle='--', label="True paramter value")
axs.axvline(mle, color='blue', linestyle='--', label="Maximum likelihood estimate")
axs.legend()
```
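As a small numeric aside (an addition to the notebook): with this prior the posterior mean is the sample mean shrunk by a factor $n/(n+1)$, so it converges to the maximum likelihood estimate as $n$ grows. Assuming `X` and `Posterior` from above:
```
# Compare the sample mean (maximum likelihood estimate) with the posterior mean
sample_mean = X.mean()
print('Sample mean (MLE):', sample_mean)
print('Posterior mean:   ', Posterior.mean())
print('Shrinkage factor: ', X.size / (X.size + 1))
```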
# Introduction to Deep Learning with PyTorch
In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
## Neural Networks
Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.
<img src="assets/simple_neuron.png" width=400px>
Mathematically this looks like:
$$
\begin{align}
y &= f(w_1 x_1 + w_2 x_2 + b) \\
y &= f\left(\sum_i w_i x_i +b \right)
\end{align}
$$
With vectors this is the dot/inner product of two vectors:
$$
h = \begin{bmatrix}
x_1 \, x_2 \cdots x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_1 \\
w_2 \\
\vdots \\
w_n
\end{bmatrix}
$$
## Tensors
It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.
<img src="assets/tensor_examples.svg" width=600px>
With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
```
# First, import PyTorch
import torch
def activation(x):
""" Sigmoid activation function
Arguments
---------
x: torch.Tensor
"""
return 1/(1+torch.exp(-x))
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 5 random normal variables
features = torch.randn((1, 5))
# True weights for our data, random normal variables again
weights = torch.randn_like(features)
# and a true bias term
bias = torch.randn((1, 1))
```
Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:
`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.
Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.
PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
```
### Solution
# Now, make our labels from our data and true weights
y = activation(torch.sum(features * weights) + bias)
y = activation((features * weights).sum() + bias)
```
You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.
Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error
```python
>> torch.mm(features, weights)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-13-15d592eb5279> in <module>()
----> 1 torch.mm(features, weights)
RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033
```
As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.
**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.
There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).
* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory.
* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.
* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.
I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.
> **Exercise**: Calculate the output of our little network using matrix multiplication.
```
## Solution
y = activation(torch.mm(features, weights.view(5,1)) + bias)
```
### Stack them up!
That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.
<img src='assets/multilayer_diagram_weights.png' width=450px>
The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated
$$
\vec{h} = [h_1 \, h_2] =
\begin{bmatrix}
x_1 \, x_2 \cdots \, x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_{11} & w_{12} \\
w_{21} &w_{22} \\
\vdots &\vdots \\
w_{n1} &w_{n2}
\end{bmatrix}
$$
The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply
$$
y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)
$$
```
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 3 random normal variables
features = torch.randn((1, 3))
# Define the size of each layer in our network
n_input = features.shape[1] # Number of input units, must match number of input features
n_hidden = 2 # Number of hidden units
n_output = 1 # Number of output units
# Weights for inputs to hidden layer
W1 = torch.randn(n_input, n_hidden)
# Weights for hidden layer to output layer
W2 = torch.randn(n_hidden, n_output)
# and bias terms for hidden and output layers
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))
```
> **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.
```
### Solution
h = activation(torch.mm(features, W1) + B1)
output = activation(torch.mm(h, W2) + B2)
print(output)
```
If you did this correctly, you should see the output `tensor([[ 0.3171]])`.
The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions.
## Numpy to Torch and back
Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
```
import numpy as np
a = np.random.rand(4,3)
a
b = torch.from_numpy(a)
b
b.numpy()
```
The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
```
# Multiply PyTorch Tensor by 2, in place
b.mul_(2)
# Numpy array matches new values from Tensor
a
```
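If sharing memory is not what you want, you can make an explicit copy on either side. This is a short added sketch, not part of the original notebook:
```
# Explicit copies break the memory sharing shown above
c = torch.tensor(a)        # copies the Numpy data into a new, independent tensor
d = b.clone().numpy()      # copies the tensor data into a new Numpy array
c.mul_(10)                 # modifies only the copy
a                          # `a` is unchanged by the operation on `c`
```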
```
import json
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib import rc
from matplotlib.ticker import LogLocator
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
FOR_PRINT = True
if FOR_PRINT:
LINE_WIDTH = 1
MARKER_SIZE = 3
FONT_SIZE = 8
AXES_WIDTH = 0.65 * LINE_WIDTH
plt.rcParams['grid.linewidth']=AXES_WIDTH
plt.rcParams['axes.linewidth']=AXES_WIDTH
plt.rcParams['axes.labelpad']=3.0
plt.rcParams['xtick.major.pad']=0
plt.rcParams['xtick.major.size']=2.0
plt.rcParams['xtick.major.width']=AXES_WIDTH
plt.rcParams['xtick.minor.size']=1.0
plt.rcParams['xtick.minor.width']=0.75 * AXES_WIDTH
plt.rcParams['ytick.major.pad']=-1.5
plt.rcParams['ytick.major.size']=2.0
plt.rcParams['ytick.major.width']=AXES_WIDTH
plt.rcParams['ytick.minor.size']=1.0
plt.rcParams['ytick.minor.width']=0.75 * AXES_WIDTH
else:
LINE_WIDTH = 6
MARKER_SIZE = 14
FONT_SIZE = 45
%matplotlib inline
#plt.rcParams['figure.figsize'] = [15, 15]
plt.rcParams['lines.linewidth'] = LINE_WIDTH
plt.rcParams['lines.markeredgewidth'] = 0.75 * LINE_WIDTH
plt.rcParams['lines.markersize'] = MARKER_SIZE
plt.rcParams['font.size'] = FONT_SIZE
rc('text', usetex=True)
data = list()
with open("hex_results.json") as results_json_file:
data = json.load(results_json_file)
def draw_convergence_triangle(fig, ax, origin, width_inches, slope, inverted=False, color=None, polygon_kwargs=None, label=True, labelcolor=None, label_kwargs=None, zorder=None):
"""
This function draws slopes or "convergence triangles" into loglog plots.
@param fig: The figure
@param ax: The axes object to draw to
@param origin: The 2D origin (usually lower-left corner) coordinate of the triangle
@param width_inches: The width in inches of the triangle
@param slope: The slope of the triangle, i.e. order of convergence
@param inverted: Whether to mirror the triangle around the origin, i.e. whether
it indicates the slope towards the lower left instead of upper right (defaults to false)
    @param color: The color of the triangle edges (defaults to the default color)
@param polygon_kwargs: Additional kwargs to the Polygon draw call that creates the slope
@param label: Whether to enable labeling the slope (defaults to true)
@param labelcolor: The color of the slope labels (defaults to the edge color)
@param label_kwargs: Additional kwargs to the Annotation draw call that creates the labels
@param zorder: The z-order value of the triangle and labels, defaults to a high value
"""
if polygon_kwargs is None:
polygon_kwargs = {}
if label_kwargs is None:
label_kwargs = {}
if color is not None:
polygon_kwargs["color"] = color
if "linewidth" not in polygon_kwargs:
polygon_kwargs["linewidth"] = 0.75 * mpl.rcParams["lines.linewidth"]
if labelcolor is not None:
label_kwargs["color"] = labelcolor
if "color" not in label_kwargs:
label_kwargs["color"] = polygon_kwargs["color"]
if "fontsize" not in label_kwargs:
label_kwargs["fontsize"] = 0.75 * mpl.rcParams["font.size"]
if inverted:
width_inches = -width_inches
if zorder is None:
zorder = 10
# For more information on coordinate transformations in Matplotlib see
# https://matplotlib.org/3.1.1/tutorials/advanced/transforms_tutorial.html
# Convert the origin into figure coordinates in inches
origin_disp = ax.transData.transform(origin)
origin_dpi = fig.dpi_scale_trans.inverted().transform(origin_disp)
# Obtain the bottom-right corner in data coordinates
corner_dpi = origin_dpi + width_inches * np.array([1.0, 0.0])
corner_disp = fig.dpi_scale_trans.transform(corner_dpi)
corner = ax.transData.inverted().transform(corner_disp)
(x1, y1) = (origin[0], origin[1])
x2 = corner[0]
# The width of the triangle in data coordinates
width = x2 - x1
# Compute offset of the slope
log_offset = y1 / (x1 ** slope)
y2 = log_offset * ((x1 + width) ** slope)
height = y2 - y1
# The vertices of the slope
a = origin
b = corner
c = [x2, y2]
# Draw the slope triangle
X = np.array([a, b, c])
triangle = plt.Polygon(X[:3,:], fill=False, zorder=zorder, **polygon_kwargs)
ax.add_patch(triangle)
# Convert vertices into display space
a_disp = ax.transData.transform(a)
b_disp = ax.transData.transform(b)
c_disp = ax.transData.transform(c)
# Figure out the center of the triangle sides in display space
bottom_center_disp = a_disp + 0.5 * (b_disp - a_disp)
bottom_center = ax.transData.inverted().transform(bottom_center_disp)
right_center_disp = b_disp + 0.5 * (c_disp - b_disp)
right_center = ax.transData.inverted().transform(right_center_disp)
# Label alignment depending on inversion parameter
va_xlabel = "top" if not inverted else "bottom"
ha_ylabel = "left" if not inverted else "right"
# Label offset depending on inversion parameter
offset_xlabel = [0.0, -0.33 * label_kwargs["fontsize"]] if not inverted else [0.0, 0.33 * label_kwargs["fontsize"]]
offset_ylabel = [0.33 * label_kwargs["fontsize"], 0.0] if not inverted else [-0.33 * label_kwargs["fontsize"], 0.0]
# Draw the slope labels
ax.annotate("$1$", bottom_center, xytext=offset_xlabel, textcoords='offset points', ha="center", va=va_xlabel, zorder=zorder, **label_kwargs)
ax.annotate(f"${slope}$", right_center, xytext=offset_ylabel, textcoords='offset points', ha=ha_ylabel, va="center", zorder=zorder, **label_kwargs)
FIG_WIDTH = 3.6 if FOR_PRINT else 20
FIG_HEIGHT = 2.25 if FOR_PRINT else 12
SLOPE_WIDTH = 0.125 * FIG_WIDTH
fig = plt.figure(figsize=(FIG_WIDTH, FIG_HEIGHT))
def method_sort_order_key(method_name):
method_mapping = {
"fem_hex20": 1,
"fcm_hex20": 3,
"fem_hex8": 0,
"fcm_hex8": 2
}
return method_mapping[method_name]
methods = data['methods']
method_names = sorted([method for method in methods], key=method_sort_order_key)
def get_label(method):
method_mapping = {
"fem_hex20": "FEM Hex20",
"fcm_hex20": "FCM Hex20",
"fem_hex8": "FEM Hex8",
"fcm_hex8": "FCM Hex8"
}
return method_mapping[method]
for method in method_names:
method_data = methods[method]
resolutions = [entry['resolution'] for entry in method_data]
l2_errors = [entry['l2_error'] for entry in method_data]
mesh_sizes = [entry['mesh_size'] for entry in method_data]
plt.plot(mesh_sizes, l2_errors, '-o', label=get_label(method))
plt.legend(prop={'size': 0.75 * FONT_SIZE}, loc = 'lower right')
plt.grid()
plt.xlabel('Cell width $h$', fontsize=FONT_SIZE)
plt.ylabel('$L^2$ error', fontsize=FONT_SIZE)
plt.loglog()
plt.xlim(2e-2, 1.5e0)
plt.ylim(1e-6, 1e-1)
plt.gca().yaxis.set_major_locator(LogLocator(10.0, subs=(1.0,)))
plt.tick_params(axis='both', which='major', labelsize=FONT_SIZE)
plt.grid(which='major', linestyle='-', linewidth=0.75 * LINE_WIDTH)
#plt.grid(axis='x', which='minor', linestyle='--', linewidth=0.25 * LINE_WIDTH, color="lightgray")
plt.grid(which='minor', linestyle='--', linewidth=0.25 * LINE_WIDTH, color="lightgray")
# Whether to use a single color for all slope indicator triangles
use_single_color_slopes = True
single_color_slopes = "dimgrey"
color_per_slope = {3: "tab:red", 2: "tab:green", 1: "tab:blue"}
if use_single_color_slopes:
color_per_slope = {s: single_color_slopes for s,c in color_per_slope.items()}
ax = plt.gca()
draw_convergence_triangle(fig, ax, [2.00e-1, 2.5e-4], SLOPE_WIDTH, 3, color=color_per_slope[3])
draw_convergence_triangle(fig, ax, [7.00e-2, 5.25e-4], SLOPE_WIDTH, 2, color=color_per_slope[2])
draw_convergence_triangle(fig, ax, [2.60e-1, 2.0e-2], SLOPE_WIDTH, 1, color=color_per_slope[1], inverted=True)
plt.tight_layout()
#plt.savefig('convergence_rate.pdf', format='pdf', bbox_inches='tight')
plt.savefig('convergence_rate.pgf', format='pgf', bbox_inches='tight')
plt.show()
```
<a href="https://colab.research.google.com/github/YeonKang/Python-for-Machine-Learning/blob/main/Lec3_7_Cross_validation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from sklearn import datasets
boston = datasets.load_boston()
X = boston.data
y = boston.target
from sklearn.model_selection import KFold
kf = KFold(n_splits=10, shuffle=True)
for train_index, test_index in kf.split(X):
print("TRAIN - ", len(train_index))
print("TEST - ", len(test_index))
from sklearn.linear_model import Lasso, Ridge
from sklearn.metrics import mean_squared_error
kf = KFold(n_splits=10)
lasso_regressor = Lasso()
ridge_regressor = Ridge()
lasso_mse = []
ridge_mse = []
for train_index, test_index in kf.split(X):
lasso_regressor.fit(X[train_index], y[train_index])
ridge_regressor.fit(X[train_index], y[train_index])
lasso_mse.append(mean_squared_error(y[test_index], lasso_regressor.predict(X[test_index])))
ridge_mse.append(mean_squared_error(y[test_index], ridge_regressor.predict(X[test_index])))
sum(lasso_mse) / 10, sum(ridge_mse) / 10
from sklearn.model_selection import cross_val_score
import numpy as np
lasso_regressor = Lasso(warm_start=False)
ridge_regressor = Ridge()
lasso_scores = cross_val_score(lasso_regressor, X, y, cv=10, scoring='neg_mean_squared_error')
ridge_scores= cross_val_score(ridge_regressor, X, y, cv=10, scoring='neg_mean_squared_error')
np.mean(lasso_scores), np.mean(ridge_scores)
from sklearn.model_selection import cross_validate
import numpy as np
lasso_regressor = Lasso(warm_start=False)
ridge_regressor = Ridge()
scoring = ['neg_mean_squared_error', 'r2']
lasso_scores = cross_validate(lasso_regressor, X, y, cv=10, scoring=scoring)
ridge_scores= cross_validate(ridge_regressor, X, y, cv=10, scoring='neg_mean_squared_error')
lasso_scores
from sklearn.model_selection import cross_val_score
import numpy as np
lasso_regressor = Lasso(warm_start=False)
ridge_regressor = Ridge()
kf = KFold(n_splits=10, shuffle=True)
lasso_scores = cross_val_score(lasso_regressor, X, y, cv=kf, scoring='neg_mean_squared_error')
ridge_scores= cross_val_score(ridge_regressor, X, y, cv=kf, scoring='neg_mean_squared_error')
np.mean(lasso_scores), np.mean(ridge_scores)
from sklearn.model_selection import LeaveOneOut
test = [1, 2, 3, 4]
loo = LeaveOneOut()
for train, test in loo.split(test):
print("%s %s" % (train, test))
loo = LeaveOneOut()
lasso_scores = cross_val_score(lasso_regressor, X, y, cv=loo, scoring='neg_mean_squared_error')
ridge_scores= cross_val_score(ridge_regressor, X, y, cv=loo, scoring='neg_mean_squared_error')
np.mean(lasso_scores), np.mean(ridge_scores)
lasso_scores = cross_val_score(
lasso_regressor, X, y, cv=kf, scoring='neg_mean_squared_error')
ridge_scores= cross_val_score(
ridge_regressor, X, y, cv=kf, scoring='neg_mean_squared_error')
import matplotlib.pyplot as plt
labels=["LASSO", "RIDGE"]
plt.boxplot((lasso_scores, ridge_scores), labels=labels)
plt.grid(linestyle="--")
plt.show()
def rmse(predictions, targets):
return np.sqrt(((predictions - targets) ** 2).mean())
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler
std = StandardScaler()
std.fit(X)
X_scaled = std.transform(X)
eta0 = 0.01
max_iter = 100
from sklearn.model_selection import train_test_split
X_train_dataset, X_test, y_train_dataset, y_test = train_test_split(
X_scaled,y, test_size=0.2, random_state=42)
sgd_regressor = SGDRegressor(
eta0=eta0, max_iter=max_iter, warm_start=True, learning_rate="constant")
rmse_val_score = []
rmse_train_score = []
model_list = []
X_train, X_val, y_train, y_val = train_test_split(
X_train_dataset,y_train_dataset, test_size=0.2, random_state=42)
sgd_regressor.fit(X_train,y_train)
for i in range(300):
y_pred = sgd_regressor.predict(X_train)
y_true = y_train
rmse_train_score.append(rmse(y_pred, y_true))
y_pred = sgd_regressor.predict(X_val)
y_true = y_val
rmse_val_score.append(rmse(y_pred, y_true))
model_list.append(sgd_regressor)
coef = sgd_regressor.coef_.copy()
intercept = sgd_regressor.intercept_.copy()
sgd_regressor = SGDRegressor(
eta0=eta0, max_iter=max_iter, warm_start=True, learning_rate="constant")
sgd_regressor.fit(X_train,y_train, coef_init=coef, intercept_init=intercept)
plt.plot(range(len(rmse_val_score)), rmse_val_score, c="G", label="VAL")
plt.plot(range(len(rmse_train_score)), rmse_train_score, c="r", label="TRAINING")
plt.scatter(99, rmse(y_test,sgd_regressor.predict(X_test)), s=1, label="TEST")
plt.legend()
plt.show()
np.argsort(rmse_val_score)
rmse(y_test,sgd_regressor.predict(X_test))
rmse(y_test,model_list[217].predict(X_test))
model_list[0].coef_
```
# Description
See description in notebook `10_00-spectral_clustering...`.
# Environment variables
```
from IPython.display import display
import conf
N_JOBS = conf.GENERAL["N_JOBS"]
display(N_JOBS)
%env MKL_NUM_THREADS=$N_JOBS
%env OPEN_BLAS_NUM_THREADS=$N_JOBS
%env NUMEXPR_NUM_THREADS=$N_JOBS
%env OMP_NUM_THREADS=$N_JOBS
```
# Modules loading
```
%load_ext autoreload
%autoreload 2
from pathlib import Path
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from utils import generate_result_set_name
```
# Settings
```
INITIAL_RANDOM_STATE = 100000
CLUSTERING_METHOD_NAME = "DeltaSpectralClustering"
# output dir for this notebook
CONSENSUS_CLUSTERING_DIR = Path(
conf.RESULTS["CLUSTERING_DIR"], "consensus_clustering"
).resolve()
display(CONSENSUS_CLUSTERING_DIR)
```
# Load data
```
INPUT_SUBSET = "umap"
INPUT_STEM = "z_score_std-projection-smultixcan-efo_partial-mashr-zscores"
DR_OPTIONS = {
"n_components": 50,
"metric": "euclidean",
"n_neighbors": 15,
"random_state": 0,
}
input_filepath = Path(
conf.RESULTS["DATA_TRANSFORMATIONS_DIR"],
INPUT_SUBSET,
generate_result_set_name(
DR_OPTIONS, prefix=f"{INPUT_SUBSET}-{INPUT_STEM}-", suffix=".pkl"
),
).resolve()
display(input_filepath)
assert input_filepath.exists(), "Input file does not exist"
input_filepath_stem = input_filepath.stem
display(input_filepath_stem)
data = pd.read_pickle(input_filepath)
data.shape
data.head()
traits = data.index.tolist()
len(traits)
```
# Load coassociation matrix (ensemble)
```
input_file = Path(CONSENSUS_CLUSTERING_DIR, "ensemble_coassoc_matrix.npy").resolve()
display(input_file)
coassoc_matrix = np.load(input_file)
coassoc_matrix = pd.DataFrame(
data=coassoc_matrix,
index=traits,
columns=traits,
)
coassoc_matrix.shape
coassoc_matrix.head()
dist_matrix = coassoc_matrix
```
# Clustering
```
from sklearn.metrics import (
    silhouette_score,
    calinski_harabasz_score,
    davies_bouldin_score,
)
```
## More exhaustive test
Here I run some tests across several `k` and `delta` values; then I check how the results perform under different clustering quality measures.
```
CLUSTERING_OPTIONS = {}
CLUSTERING_OPTIONS["K_RANGE"] = [
2,
4,
6,
8,
10,
12,
14,
16,
18,
20,
25,
30,
35,
40,
50,
60,
]
CLUSTERING_OPTIONS["N_REPS_PER_K"] = 5
CLUSTERING_OPTIONS["KMEANS_N_INIT"] = 10
CLUSTERING_OPTIONS["DELTAS"] = [
5.00,
2.00,
1.00,
0.90,
0.75,
0.50,
0.30,
0.25,
0.20,
]
display(CLUSTERING_OPTIONS)
```
### Generate ensemble
```
import tempfile
ensemble_folder = Path(
tempfile.gettempdir(),
"pre_cluster_analysis",
CLUSTERING_METHOD_NAME,
).resolve()
ensemble_folder.mkdir(parents=True, exist_ok=True)
ensemble_file = Path(
ensemble_folder,
generate_result_set_name(CLUSTERING_OPTIONS, prefix="ensemble-", suffix=".pkl"),
)
display(ensemble_file)
assert ensemble_file.exists(), "Ensemble file does not exist"
ensemble = pd.read_pickle(ensemble_file)
ensemble.shape
ensemble.head()
```
### Add clustering quality measures
```
ensemble = ensemble.assign(
    # silhouette uses the precomputed coassociation/distance matrix
    si_score=ensemble["partition"].apply(
        lambda x: silhouette_score(dist_matrix, x, metric="precomputed")
    ),
    ch_score=ensemble["partition"].apply(lambda x: calinski_harabasz_score(data, x)),
    db_score=ensemble["partition"].apply(lambda x: davies_bouldin_score(data, x)),
)
ensemble.shape
ensemble.head()
```
# Cluster quality
```
with pd.option_context("display.max_rows", None, "display.max_columns", None):
_df = ensemble.groupby(["n_clusters", "delta"]).mean()
display(_df)
with sns.plotting_context("talk", font_scale=0.75), sns.axes_style(
"whitegrid", {"grid.linestyle": "--"}
):
fig = plt.figure(figsize=(14, 6))
ax = sns.pointplot(data=ensemble, x="n_clusters", y="si_score", hue="delta")
ax.set_ylabel("Silhouette index\n(higher is better)")
ax.set_xlabel("Number of clusters ($k$)")
ax.set_xticklabels(ax.get_xticklabels(), rotation=45)
plt.grid(True)
plt.tight_layout()
with sns.plotting_context("talk", font_scale=0.75), sns.axes_style(
"whitegrid", {"grid.linestyle": "--"}
):
fig = plt.figure(figsize=(14, 6))
ax = sns.pointplot(data=ensemble, x="n_clusters", y="ch_score", hue="delta")
ax.set_ylabel("Calinski-Harabasz index\n(higher is better)")
ax.set_xlabel("Number of clusters ($k$)")
ax.set_xticklabels(ax.get_xticklabels(), rotation=45)
plt.grid(True)
plt.tight_layout()
with sns.plotting_context("talk", font_scale=0.75), sns.axes_style(
"whitegrid", {"grid.linestyle": "--"}
):
fig = plt.figure(figsize=(14, 6))
ax = sns.pointplot(data=ensemble, x="n_clusters", y="db_score", hue="delta")
ax.set_ylabel("Davies-Bouldin index\n(lower is better)")
ax.set_xlabel("Number of clusters ($k$)")
ax.set_xticklabels(ax.get_xticklabels(), rotation=45)
plt.grid(True)
plt.tight_layout()
```
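To turn the plots above into a concrete choice of `k` and `delta`, one option is to rank the averaged quality measures directly. A minimal sketch over the `ensemble` dataframe built above (it assumes the silhouette column computed earlier; the combined-rank rule is an illustrative assumption, not part of the original analysis):
```
# Average the quality measures per (n_clusters, delta) combination
summary = ensemble.groupby(["n_clusters", "delta"])[["si_score", "ch_score", "db_score"]].mean()

# Higher is better for silhouette and Calinski-Harabasz, lower is better for Davies-Bouldin,
# so rank each column accordingly and combine the ranks (a simple heuristic).
ranks = summary.rank(ascending=False)[["si_score", "ch_score"]].join(
    summary.rank(ascending=True)[["db_score"]]
)
summary["mean_rank"] = ranks.mean(axis=1)

display(summary.sort_values("mean_rank").head(10))
```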
<a href="https://colab.research.google.com/github/partha1189/machine_learning/blob/timeSeries/moving_average.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import numpy as np
import matplotlib.pyplot as plt
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
keras = tf.keras
def plot_series(time, series, format='-', start=0, end=None, label=None):
plt.plot(time[start:end], series[start:end], format, label = label)
plt.xlabel('Time')
plt.ylabel('Value')
if label:
plt.legend(fontsize=14)
plt.grid(True)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
return np.where(season_time < 0.4,
np.cos(season_time * 2 * np.pi),
1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def white_noise(time, noise_level =1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
time = np.arange(4 * 365 + 1)
slope = 0.05
baseline = 10
amplitude = 40
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
noise_level = 5
noise = white_noise(time, noise_level, seed=42)
series += noise
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
naive_forecast = series[split_time - 1:-1]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, start=0, end=150, label="Series")
plot_series(time_valid, naive_forecast, start=1, end=151, label="Forecast")
```
Moving Average
```
def moving_average_forecast(series, window_size):
forecast = []
for time in range(len(series) - window_size):
    forecast.append(series[time : time + window_size].mean())
return np.array(forecast)
def moving_average_forecast(series, window_size):
mov = np.cumsum(series)
mov[window_size:] = mov[window_size:] - mov[:-window_size]
return mov[window_size - 1:-1] / window_size
moving_avg = moving_average_forecast(series, 30)[split_time - 30:]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, label='Series')
plot_series(time_valid, moving_avg, label='Moving average (30 days)')
keras.metrics.mean_absolute_error(x_valid, moving_avg).numpy()
diff_series = (series[365:] - series[:-365])
diff_time = time[365:]
plt.figure(figsize=(10, 6))
plot_series(diff_time, diff_series, label='Series(t) - Series(t-365)')
plt.show()
time[365:]
time[:-365]
time
1460 - 365
plt.figure(figsize=(10, 6))
plot_series(time_valid, diff_series[split_time - 365:], label="Series(t) – Series(t–365)")
plt.show()
diff_moving_avg = moving_average_forecast(diff_series, 50)[split_time - 365 - 50:]
plt.figure(figsize=(10, 6))
plot_series(time_valid, diff_series[split_time - 365:], label="Series(t) – Series(t–365)")
plot_series(time_valid, diff_moving_avg, label="Moving Average of Diff")
plt.show()
diff_moving_avg_plus_past = series[split_time - 365:-365] + diff_moving_avg
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, label="Series")
plot_series(time_valid, diff_moving_avg_plus_past, label="Forecasts")
plt.show()
keras.metrics.mean_absolute_error(x_valid, diff_moving_avg_plus_past).numpy()
diff_moving_avg_plus_smooth_past = moving_average_forecast(series[split_time - 370:-359], 11) + diff_moving_avg
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, label="Series")
plot_series(time_valid, diff_moving_avg_plus_smooth_past, label="Forecasts")
plt.show()
keras.metrics.mean_absolute_error(x_valid, diff_moving_avg_plus_smooth_past).numpy()
```
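As a quick sanity check that the vectorized `cumsum` implementation of `moving_average_forecast` matches the naive trailing-window mean, the two can be compared on the same series. A minimal sketch, assuming the definitions above are in scope (`moving_average_naive` is a helper defined only for this check):
```
def moving_average_naive(series, window_size):
    # Straightforward loop: forecast at step t is the mean of the previous window
    return np.array([
        series[t: t + window_size].mean()
        for t in range(len(series) - window_size)
    ])

window_size = 30
assert np.allclose(
    moving_average_naive(series, window_size),
    moving_average_forecast(series, window_size),
)
print("naive and cumsum moving averages agree")
```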
# Chapter 2
## 2E1
Which of the expressions below correspond to the statement: the probability of rain on Monday?
(1) $Pr(\text{rain})$
(2) $Pr(\text{rain|Monday})$
(3) $Pr(\text{Monday|rain})$
(4) $Pr(\text{rain, Monday})/ Pr(\text{Monday})$
### Ans
$$
\frac{Pr(\text{rain, Monday})}{Pr(\text{Monday})} = Pr(\text{rain} \mid \text{Monday})
$$

So both (2) and (4) express the probability of rain on Monday.
## 2E2
Which of the following statements corresponds to the expression: $Pr(\text{Monday} \mid \text{rain})$?
(1) The probability of rain on Monday.
(2) The probability of rain, given that it is Monday.
(3) The probability that it is Monday, given that it is raining.
(4) The probability that it is Monday and that it is raining.
### Ans
(3) The probability that it is Monday, given that it is raining.
## 2E3
2E3. Which of the expressions below correspond to the statement: _the probability that it is Monday, given that it is raining_?
(1) $Pr(\text{Monday} \mid \text{rain})$
(2) $Pr(\text{rain} \mid \text{Monday})$
(3) $Pr(\text{rain} \mid \text{Monday}) \cdot Pr(\text{Monday})$
(4) $Pr(\text{rain} \mid \text{Monday}) \cdot Pr(\text{Monday}) / Pr(\text{rain})$
(5) $Pr(\text{Monday} \mid \text{rain}) \cdot Pr(\text{rain}) / Pr(\text{Monday})$
### Ans
(1) $Pr(\text{Monday} \mid \text{rain})$
(4)
$$
\begin{equation}
\begin{aligned}
Pr(\text{rain} \mid \text{Monday}) \cdot Pr(\text{Monday}) / Pr(\text{rain})
&= \frac{Pr(\text{rain}, \text{Monday})}{Pr(\text{rain})} \\
&= Pr(\text{Monday} \mid \text{rain}) \\
\end{aligned}
\end{equation}
$$
## 2E4.
The Bayesian statistician Bruno de Finetti (1906–1985) began his 1973 book on probability theory with the declaration: “PROBABILITY DOES NOT EXIST.” The capitals appeared in the original, so I imagine de Finetti wanted us to shout this statement. What he meant is that probability is a device for describing uncertainty from the perspective of an observer with limited knowledge; it has no objective reality. Discuss the globe tossing example from the chapter, in light of this statement. What does it mean to say “the probability of water is 0.7”?
### Ans
In the globe tossing example, it means that there is some "true" proportion of water to land. However, "truths" like these are usually not fully known to us, for many reasons (e.g., finite sample size, measurement error, unobserved variables, missing data). We can model our uncertainty about them using probability theory, and hopefully our model represents the truth (such as the "true" proportion of water to land) well, though that is not guaranteed. When we say "the probability of water is 0.7," we mean that, given what we currently know, we *think* the proportion of water is somewhere around 70%. We did not give many significant digits; the value might not be exactly 0.7, but we think it is close. The statement describes the observer's uncertainty, not an objective property of the globe.
## 2M1.
2M1. Recall the globe tossing model from the chapter. Compute and plot the grid approximate posterior distribution for each of the following sets of observations. In each case, assume a uniform prior for p.
(1) W, W, W
(2) W, W, W, L
(3) L, W, W, L, W, W, W
### Ans
We'll instead use rejection sampling and ABC-SMC:
```
import numpy as np
import pandas as pd
from unlikely.models import Models, Model
from unlikely.priors import Beta
from unlikely.engine import abc_smc
from unlikely.misc import create_images_from_data
num_particles = 2000
obs = np.array([
1,1,1,
1,1,1,0,
0,1,1,0,1,1,1
])
epsilons_list = [[0], [3,2,1,0]]
def distance(x,y):
"""
Compare the number of ones in one vs. the other.
"""
return abs(x.sum() - y.sum())
def simulate(priors, obs):
"""
Data is binomially distributed.
"""
return np.random.binomial(n=1, p=priors['beta'], size=len(obs))
data_to_display = [
[
{
'title': f"obs: {obs[:3]}",
'data': [
pd.DataFrame({
'reference': np.random.beta(
obs[:3].sum() + 1,
len(obs[:3]) - obs[:3].sum() + 1,
num_particles
)
})
]
},
{
'title': f"after {obs[3:7]}",
'data': [
pd.DataFrame({
'reference': np.random.beta(
obs[:7].sum() + 1,
len(obs[:7]) - obs[:7].sum() + 1,
num_particles
)
})
]
},
{
'title': f"after {obs[7:]}",
'data': [
pd.DataFrame({
'reference': np.random.beta(
obs.sum() + 1,
len(obs) - obs.sum() + 1,
num_particles
)
})
]
},
{
'title': "Full batch",
'data': [
pd.DataFrame({
'reference': np.random.beta(
obs.sum() + 1,
len(obs) - obs.sum() + 1,
num_particles
)
})
]
}
]
]
for row, epsilons in enumerate(epsilons_list):
models = Models(
[
Model(
name='flat prior',
priors=[
Beta(alpha=1, beta=1, name="beta"),
],
simulate=simulate,
prior_model_proba=1,
),
]
)
# Update with 1st batch
abc_smc(
num_particles=num_particles,
epsilons=epsilons,
models=models,
obs=obs[:3],
distance=distance,
)
data_to_display[0][0]['data'].append(
pd.DataFrame(models[0].prev_accepted_proposals).rename(
columns={'beta': f'eps: {epsilons}'}
)
)
# The posterior distribution becomes the prior
models.use_distribution_from_samples()
# Update with 2nd batch
abc_smc(
num_particles=num_particles,
epsilons=epsilons,
models=models,
obs=obs[3:7],
distance=distance,
)
# The posterior distribution becomes the prior
models.use_distribution_from_samples()
data_to_display[0][1]['data'].append(
pd.DataFrame(
models[0].prev_accepted_proposals).rename(
columns={'beta': f'eps: {epsilons}'}
)
)
# Update with 3rd batch
abc_smc(
num_particles=num_particles,
epsilons=epsilons,
models=models,
obs=obs[7:],
distance=distance,
)
data_to_display[0][2]['data'].append(
pd.DataFrame(
models[0].prev_accepted_proposals).rename(
columns={'beta': f'eps: {epsilons}'}
)
)
models_full_batch = Models(
[
Model(
name='flat prior',
priors=[
Beta(alpha=1, beta=1, name="beta"),
],
simulate=simulate,
prior_model_proba=1,
),
]
)
# Update full batch
abc_smc(
num_particles=num_particles,
epsilons=epsilons,
models=models_full_batch,
obs=obs,
distance=distance,
)
data_to_display[0][3]['data'].append(
pd.DataFrame(
models_full_batch[0].prev_accepted_proposals
).rename(columns={'beta': f'eps: {epsilons}'})
)
create_images_from_data(
data={
'title': '3 batch updates',
'data': data_to_display
},
xlim=(0,1),
figsize_mult=(2,8)
)
```
Using the [unlikely](https://github.com/Edderic/unlikely) library, Bayesian updating on the full batch, and mini-batch updating with a single epsilon of 0, both track the analytic reference closely, and are more accurate than mini-batch updating with the longer epsilon schedule, which produces overinflated (too wide) posterior distributions.
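Since the exercise itself asks for a grid-approximate posterior, here is a minimal grid-approximation sketch as a cross-check against the ABC results above (standard grid approximation, independent of the `unlikely` workflow; for 2M2, swap in a prior that is zero below 0.5):
```
import numpy as np
import matplotlib.pyplot as plt

p_grid = np.linspace(0, 1, 1000)
prior = np.ones_like(p_grid)  # flat prior; for 2M2 use: np.where(p_grid < 0.5, 0.0, 1.0)

obs_sets = {
    "W W W": (3, 3),            # (number of W, number of tosses)
    "W W W L": (3, 4),
    "L W W L W W W": (5, 7),
}

for label, (n_water, n_tosses) in obs_sets.items():
    # Binomial likelihood on the grid, unnormalized posterior, then normalize
    likelihood = p_grid ** n_water * (1 - p_grid) ** (n_tosses - n_water)
    posterior = likelihood * prior
    posterior /= posterior.sum()
    plt.plot(p_grid, posterior, label=label)

plt.xlabel("proportion of water (p)")
plt.ylabel("posterior probability")
plt.legend()
plt.show()
```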
## 2M2
Now assume a prior for p that is equal to zero when p < 0.5 and is a positive constant when p ≥ 0.5. Again compute and plot the grid approximate posterior distribution for each of the sets of observations in the problem just above.
```
import numpy as np
import pandas as pd
from unlikely.models import Models, Model
from unlikely.priors import Uniform
from unlikely.engine import abc_smc
from unlikely.misc import create_images_from_data
import pdb
num_particles = 2000
obs = np.array([
1,1,1,
1,1,1,0,
0,1,1,0,1,1,1
])
epsilons_list = [[0], [3,2,1,0]]
def distance(x,y):
"""
Compare the number of ones in one vs. the other.
"""
return abs(x.sum() - y.sum())
def simulate(priors, obs):
"""
Data is binomially distributed.
"""
return np.random.binomial(n=1, p=priors['uniform'], size=len(obs))
data_to_display = [
[
{
'title': f"obs: {obs[:3]}",
'data': []
},
{
'title': f"after {obs[3:7]}",
'data': []
},
{
'title': f"after {obs[7:]}",
'data': []
},
{
'title': "Full batch",
'data': []
}
]
]
for row, epsilons in enumerate(epsilons_list):
models = Models(
[
Model(
name='Uniform over (0.5, 1)',
priors=[
Uniform(alpha=0.5, beta=1, name="uniform"),
],
simulate=simulate,
prior_model_proba=1,
),
]
)
# Update with 1st batch
abc_smc(
num_particles=num_particles,
epsilons=epsilons,
models=models,
obs=obs[:3],
distance=distance,
)
data_to_display[0][0]['data'].append(
pd.DataFrame(models[0].prev_accepted_proposals).rename(
columns={'uniform': f'eps: {epsilons}'}
)
)
# The posterior distribution becomes the prior
models.use_distribution_from_samples()
# Update with 2nd batch
abc_smc(
num_particles=num_particles,
epsilons=epsilons,
models=models,
obs=obs[3:7],
distance=distance,
)
# The posterior distribution becomes the prior
models.use_distribution_from_samples()
data_to_display[0][1]['data'].append(
pd.DataFrame(
models[0].prev_accepted_proposals).rename(
columns={'uniform': f'eps: {epsilons}'}
)
)
# Update with 3rd batch
abc_smc(
num_particles=num_particles,
epsilons=epsilons,
models=models,
obs=obs[7:],
distance=distance,
)
data_to_display[0][2]['data'].append(
pd.DataFrame(
models[0].prev_accepted_proposals).rename(
columns={'uniform': f'eps: {epsilons}'}
)
)
models_full_batch = Models(
[
Model(
name='flat prior',
priors=[
Uniform(alpha=0.5, beta=1, name="uniform"),
],
simulate=simulate,
prior_model_proba=1,
),
]
)
# Update full batch
abc_smc(
num_particles=num_particles,
epsilons=epsilons,
models=models_full_batch,
obs=obs,
distance=distance,
)
data_to_display[0][3]['data'].append(
pd.DataFrame(
models_full_batch[0].prev_accepted_proposals
).rename(columns={'uniform': f'eps: {epsilons}'})
)
create_images_from_data(
data={
'title': '3 batch updates',
'data': data_to_display
},
xlim=(0,1),
figsize_mult=(2,8)
)
```
## 2M3
Suppose there are two globes, one for Earth and one for Mars. The Earth globe is 70% covered in water. The Mars globe is 100% land. Further suppose that one of these globes—you don’t know which—was tossed in the air and produced a “land” observation. Assume that each globe was equally likely to be tossed. Show that the posterior probability that the globe was the Earth, conditional on seeing “land” (Pr(Earth|land)), is 0.23.
```
Beta(alpha=700, beta=300, name="proba_water").get_name()
```
We can construct two models of the world: one corresponding to the Earth and one corresponding to Mars. We then run the ABC-SMC algorithm. Afterwards, we count how many accepted samples were produced by the Earth model and how many by the Mars model, and normalize each count by the total number of accepted samples to obtain the posterior model probabilities.
```
import numpy as np
import pandas as pd
from unlikely.models import Models, Model
from unlikely.priors import Beta
from unlikely.engine import abc_smc
from unlikely.misc import create_images_from_data
def simulate(priors, obs):
"""
Data is binomially distributed.
"""
return np.random.binomial(n=1, p=priors['proba_water'], size=len(obs))
def distance(x,y):
"""
Compare the number of ones in one vs. the other.
"""
return abs(x.sum() - y.sum())
obs = np.array([0]) # land
num_particles = 2000
epsilons = [0]
models = Models(
[
Model(
name='Earth produced it',
priors=[
Beta(alpha=700 + 1, beta=300 + 1, name="proba_water")
],
simulate=simulate,
prior_model_proba=0.5,
),
Model(
name='Mars produced it',
priors=[
Beta(alpha=1, beta=1001, name="proba_water")
],
simulate=simulate,
prior_model_proba=0.5,
)
]
)
abc_smc(
num_particles=num_particles,
epsilons=epsilons,
models=models,
obs=obs,
distance=distance,
)
models.get_posterior_probabilities()
```
$P(\text{Earth} \mid \text{Data}) \approx 0.23$
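The same number can be checked with Bayes' rule directly, since the likelihood of observing "land" is 0.3 for the Earth globe and 1.0 for the Mars globe. A minimal sketch of the exact calculation:
```
p_land_given_earth = 1 - 0.7
p_land_given_mars = 1.0
prior_earth = prior_mars = 0.5

posterior_earth = (p_land_given_earth * prior_earth) / (
    p_land_given_earth * prior_earth + p_land_given_mars * prior_mars
)
print(round(posterior_earth, 2))  # 0.23
```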
## 2M4
Suppose you have a deck with only three cards. Each card has two sides, and each side is either black or white. One card has two black sides. The second card has one black and one white side. The third card has two white sides. Now suppose all three cards are placed in a bag and shuffled. Someone reaches into the bag and pulls out a card and places it flat on a table. A black side is shown facing up, but you don’t know the color of the side facing down. Show that the probability that the other side is also black is 2/3. Use the counting method (Section 2 of the chapter) to approach this problem. This means counting up the ways that each card could produce the observed data (a black side facing up on the table).
```
np.random.randint(0,1)
import numpy as np
card_1 = [0, 0]
card_2 = [0, 1]
card_3 = [1, 1]
probas = [1/3, 1/3, 1/3]
cards = [card_1, card_2, card_3]
def simulate_card_selection(cards, probas):
"""
Let Black be represented by 0, and White be represented by 1.
"""
count_black_other_side = 0
num_sims = 10000
num_div = 0
for i in range(num_sims):
# choose a card randomly
chosen_card_index = np.random.choice(list(range(len(cards))), p=probas)
chosen_card = cards[chosen_card_index]
# choose a side to display
chosen_side_to_show_index = np.random.randint(0,2)
other_side_index = 1 - chosen_side_to_show_index
chosen_side_to_show = chosen_card[chosen_side_to_show_index]
if chosen_side_to_show == 0:
num_div += 1
if chosen_card[other_side_index] == 0:
count_black_other_side += 1
return count_black_other_side / num_div
print(f"Probability that the other side is black: {round(simulate_card_selection(cards, probas), 2)}")
```
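The counting argument itself is small enough to enumerate exactly: list every equally likely (card, face shown) pair, keep the ones showing black, and count how many of those have black on the hidden side. A minimal sketch:
```
cards = [(0, 0), (0, 1), (1, 1)]  # 0 = black, 1 = white

# Every equally likely way of drawing a card and showing one of its faces
ways = [
    (card[i], card[1 - i])  # (face shown, face hidden)
    for card in cards
    for i in range(2)
]

black_up = [hidden for shown, hidden in ways if shown == 0]
print(len([h for h in black_up if h == 0]) / len(black_up))  # 2/3
```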
## 2M5
2M5. Now suppose there are four cards: B/B, B/W, W/W, and another B/B. Again suppose a card is drawn from the bag and a black side appears face up. Again calculate the probability that the other side is black.
```
cards = [(0,0), (0,1), (1,1), (0,0)]
probas = [1/4] * 4  # all four cards are equally likely to be drawn
print(f"Probability that the other side is black: {round(simulate_card_selection(cards, probas), 2)}")
```
## 2M6
Imagine that black ink is heavy, and so cards with black sides are heavier than cards with white sides. As a result, it’s less likely that a card with black sides is pulled from the bag. So again assume there are three cards: B/B, B/W, and W/W. After experimenting a number of times, you conclude that for every way to pull the B/B card from the bag, there are 2 ways to pull the B/W card and 3 ways to pull the W/W card. Again suppose that a card is pulled and a black side appears face up. Show that the probability the other side is black is now 0.5. Use the counting method, as before.
```
cards = [(0,0), (0,1), (1,1)]
probas = np.array([1, 2, 3])
probas = probas / probas.sum() # Normalize
print(f"Probability that the other side is black: {round(simulate_card_selection(cards, probas), 2)}")
```
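The 0.5 can also be reproduced with the same enumeration as in 2M4, weighting each card's ways by how many ways there are to pull it from the bag (1, 2, and 3). A small sketch:
```
cards = [(0, 0), (0, 1), (1, 1)]  # 0 = black, 1 = white
pull_ways = [1, 2, 3]             # ways to pull B/B, B/W, W/W

black_up_total = 0
black_hidden_total = 0
for card, weight in zip(cards, pull_ways):
    for i in range(2):  # each face of the card can be the one shown
        shown, hidden = card[i], card[1 - i]
        if shown == 0:
            black_up_total += weight
            if hidden == 0:
                black_hidden_total += weight

print(black_hidden_total / black_up_total)  # 0.5
```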
## 2M7
Assume again the original card problem, with a single card showing a black side face up. Before looking at the other side, we draw another card from the bag and lay it face up on the table. The face that is shown on the new card is white. Show that the probability that the first card, the one showing a black side, has black on its other side is now 0.75. Use the counting method, if you can. Hint: Treat this like the sequence of globe tosses, counting all the ways to see each observation, for each possible first card.
```
def simulate_card_selection_2M7(cards, probas):
"""
Let Black be represented by 0, and White be represented by 1.
"""
count_black_other_side = 0
num_sims = 10000
num_div = 0
for i in range(num_sims):
# choose a card randomly
chosen_card_index = np.random.choice(
list(range(len(cards))),
p=probas
)
chosen_card = cards[chosen_card_index]
# choose a side to display
chosen_side_to_show_index = np.random.randint(0,2)
other_side_index = 1 - chosen_side_to_show_index
chosen_side_to_show = chosen_card[chosen_side_to_show_index]
possible_other_chosen_card_indices = list(
set(list(range(len(cards)))) - set({chosen_card_index})
)
other_card_index = np.random.choice(possible_other_chosen_card_indices)
other_card = cards[other_card_index]
other_card_side_to_show_index = np.random.randint(0,2)
other_card_side_to_hide_index = 1 - other_card_side_to_show_index
other_card_side_to_show = other_card[other_card_side_to_show_index]
other_card_side_to_hide = other_card[other_card_side_to_hide_index]
if chosen_side_to_show == 0 and other_card_side_to_show == 1:
num_div += 1
if chosen_card[other_side_index] == 0:
count_black_other_side += 1
return count_black_other_side / num_div
cards = [(0,0), (0,1), (1,1)]
probas = np.array([1, 1, 1])
probas = probas / probas.sum()
print(f"Probability that the other side is black: {round(simulate_card_selection_2M7(cards, probas), 2)}")
```
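The 0.75 can likewise be obtained by exact enumeration: for every ordered pair of distinct cards and every choice of shown faces, keep the outcomes with black up on the first card and white up on the second, then count how often the first card's hidden face is black. A minimal sketch:
```
from itertools import permutations

cards = [(0, 0), (0, 1), (1, 1)]  # 0 = black, 1 = white

consistent = []
for first, second in permutations(cards, 2):
    for i in range(2):          # face shown on the first card
        for j in range(2):      # face shown on the second card
            if first[i] == 0 and second[j] == 1:
                consistent.append(first[1 - i])  # hidden face of the first card

print(consistent.count(0) / len(consistent))  # 0.75
```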
## 2H1
Suppose there are two species of panda bear. Both are equally common in the wild and live in the same places. They look exactly alike and eat the same food, and there is yet no genetic assay capable of telling them apart. They differ however in their family sizes. Species A gives birth to twins 10% of the time, otherwise birthing a single infant. Species B births twins 20% of the time, otherwise birthing singleton infants. Assume these numbers are known with certainty, from many years of field research.
Now suppose you are managing a captive panda breeding program. You have a new female panda of unknown species, and she has just given birth to twins. What is the probability that her next birth will also be twins?
```
import numpy as np
import pandas as pd
from unlikely.models import Models, Model
from unlikely.priors import Beta
from unlikely.engine import abc_smc
from unlikely.misc import create_images_from_data
def simulate(priors, obs):
"""
Data is binomially distributed.
"""
return np.random.binomial(n=1, p=priors['proba_twin'], size=len(obs))
def distance(x,y):
"""
Compare the number of ones in one vs. the other.
"""
return abs(x.sum() - y.sum())
obs = np.array([1]) # observe a twin birth
num_particles = 5000
epsilons = [0]
models = Models(
[
Model(
name='Species A',
priors=[
# Species A gives birth to twins 10% of the time
Beta(alpha=1000 + 1, beta=9000 + 1, name="proba_twin")
],
simulate=simulate,
prior_model_proba=0.5,
),
Model(
name='Species B',
priors=[
# Species B gives birth to twins 20% of the time
Beta(alpha=2000 + 1, beta=8000 + 1, name="proba_twin")
],
simulate=simulate,
prior_model_proba=0.5,
)
]
)
abc_smc(
num_particles=num_particles,
epsilons=epsilons,
models=models,
obs=obs,
distance=distance,
)
models.get_posterior_probabilities()
```
There's a 1/3 chance that the species of the mother is A, and a 2/3 chance that it is B. The data generating process, expressed as a DAG, is:
```
import graphviz
from graphviz import Digraph
dag = Digraph('species-birthing-dag')
dag.edge('Species', 'Twin(t=0) = 1')
dag.edge('Species', 'Twin(t=1)')
dag
```
$$
\begin{equation}
\begin{aligned}
P(T_{t=1} \mid T_{t=0} = 1) &= \sum_s P(T_{t=1}, S=s \mid T_{t=0} = 1) \\
&= \sum_s P(T_{t=1} \mid S=s, T_{t=0} = 1) \cdot P(S=s \mid T_{t=0} = 1) \\
&= \sum_s P(T_{t=1} \mid S=s) \cdot P(S=s \mid T_{t=0} = 1) & \text{Once you know the Species, a previous birth doesn't tell you anything about the next birth}\\
\end{aligned}
\end{equation}
$$
From running the above code, we have $P(S=a \mid T_{t=0}=1) \approx 0.334$ and $P(S=b \mid T_{t=0}=1) \approx 0.666$. We are also given $P(T_{t=1} \mid S=s)$: for $S=a$, $P(T_{t=1} \mid S=a) = 0.10$, and for $S=b$ it is double that, $P(T_{t=1} \mid S=b) = 0.20$.
Thus, the above equation reduces to:
$$
\begin{equation}
\begin{aligned}
\sum_s P(T_{t=1} \mid S=s) \cdot P(S=s \mid T_{t=0} = 1) &=
P(T_{t=1} \mid S=a) \cdot P(S=a \mid T_{t=0} = 1) \\
&\quad + P(T_{t=1} \mid S=b) \cdot P(S=b \mid T_{t=0} = 1) \\
&= 0.1 \cdot 0.334 + 0.2 \cdot 0.666 \\
&= 0.1666 \\
&\approx 0.17
\end{aligned}
\end{equation}
$$
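The same arithmetic can be checked exactly with Bayes' rule, without ABC. This is a minimal sketch using only the numbers stated in the problem:
```
# Exact check of the calculation above.
prior_a, prior_b = 0.5, 0.5          # both species equally common
p_twin_a, p_twin_b = 0.10, 0.20      # twin-birth rates for species A and B

# Posterior over species after observing one twin birth.
evidence = prior_a * p_twin_a + prior_b * p_twin_b
post_a = prior_a * p_twin_a / evidence   # 1/3
post_b = prior_b * p_twin_b / evidence   # 2/3

# Probability that the next birth is also twins.
print(post_a * p_twin_a + post_b * p_twin_b)  # 0.1666...
```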
We could find the same answer by using our updated model of the world ($P(S=s \mid T_{t=0} = 1)$) to produce imaginary data for the next birth:
```
simulate(
models[0].prev_accepted_proposals,
models[0].prev_accepted_proposals
).mean() * models.get_posterior_probabilities()['Species A'] + simulate(
models[1].prev_accepted_proposals,
models[1].prev_accepted_proposals
).mean() * models.get_posterior_probabilities()['Species B']
```
## 2H2
Recall all the facts from the problem above. Now compute the probability that the panda we have is from species A, assuming we have observed only the first birth and that it was twins.
```
import numpy as np
import pandas as pd
from unlikely.models import Models, Model
from unlikely.priors import Beta
from unlikely.engine import abc_smc
from unlikely.misc import create_images_from_data
def simulate(priors, obs):
"""
Data is binomially distributed.
"""
return np.random.binomial(n=1, p=priors['proba_twin'], size=len(obs))
def distance(x,y):
"""
Compare the number of ones in one vs. the other.
"""
return abs(x.sum() - y.sum())
obs = np.array([1]) # observe a twin birth
num_particles = 5000
epsilons = [0]
models = Models(
[
Model(
name='Species A',
priors=[
# Species A gives birth to twins 10% of the time
Beta(alpha=1000 + 1, beta=9000 + 1, name="proba_twin")
],
simulate=simulate,
prior_model_proba=0.5,
),
Model(
name='Species B',
priors=[
# Species B gives birth to twins 20% of the time
Beta(alpha=2000 + 1, beta=8000 + 1, name="proba_twin")
],
simulate=simulate,
prior_model_proba=0.5,
)
]
)
abc_smc(
num_particles=num_particles,
epsilons=epsilons,
models=models,
obs=obs,
distance=distance,
)
print(
"The probabilities of the species of the mother, given we observed twins: "\
+ f"{models.get_posterior_probabilities()}"
)
```
## 2H3
Continuing on from the previous problem, suppose the same panda mother has a second birth and that it is not twins, but a singleton infant. Compute the posterior probability that this panda is species A.
```
models.use_distribution_from_samples() # set posterior distribution samples as new prior.
abc_smc(
num_particles=num_particles,
epsilons=epsilons,
models=models,
obs=np.array([0]),
distance=distance,
)
print(
"The probabilities of the species of the mother, given we observed 1 twin and 1 singleton: "\
+ f"{models.get_posterior_probabilities()}"
)
```
Using the posterior from 2H2 as the prior and combining it with the observation of a singleton infant, Species A became more likely, but Species B remains the more probable species.
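As a sanity check on the ABC result, the same update can be done exactly (a small sketch; the 1/3 and 2/3 are the exact posteriors after the first twin birth, and the singleton rates follow from the problem statement):
```
post_a_after_twins, post_b_after_twins = 1/3, 2/3
p_single_a, p_single_b = 1 - 0.10, 1 - 0.20   # singleton-birth rates

evidence = post_a_after_twins * p_single_a + post_b_after_twins * p_single_b
post_a = post_a_after_twins * p_single_a / evidence
post_b = post_b_after_twins * p_single_b / evidence
print(post_a, post_b)  # 0.36, 0.64
```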
## 2H4
A common boast of Bayesian statisticians is that Bayesian inference makes it easy to use all of the data, even if the data are of different types. So suppose now that a veterinarian comes along who has a new genetic test that she claims can identify the species of our mother panda. But the test, like all tests, is imperfect. This is the information you have about the test:
• The probability it correctly identifies a species A panda is 0.8.
• The probability it correctly identifies a species B panda is 0.65.
The vet administers the test to your panda and tells you that the test is positive for species A. First ignore your previous information from the births and compute the posterior probability that your panda is species A. Then redo your calculation, now using the birth data as well.
### Using vet genetic test only:
```
import numpy as np
import pandas as pd
from unlikely.models import Models, Model
from unlikely.priors import Beta
from unlikely.engine import abc_smc
from unlikely.misc import create_images_from_data
def simulate(priors, obs):
return np.random.binomial(n=1, p=priors['doc_proba'], size=len(obs))
def distance(x,y):
"""
Compare the number of ones in one vs. the other.
"""
return abs(x.sum() - y.sum())
obs = np.array([1]) # Doctor claims a species A
num_particles = 5000
epsilons = [0]
models = Models(
[
Model(
name='Species A',
priors=[
Beta(alpha=8000 + 1, beta=2000 + 1, name="doc_proba")
],
simulate=simulate,
prior_model_proba=0.5,
),
Model(
name='Species B',
priors=[
Beta(alpha=3500 + 1, beta=6500 + 1, name="doc_proba")
],
simulate=simulate,
prior_model_proba=0.5,
)
]
)
abc_smc(
num_particles=num_particles,
epsilons=epsilons,
models=models,
obs=obs,
distance=distance,
)
print(
"The probabilities of the species of the mother, given the doctor's claim: "\
+ f"{models.get_posterior_probabilities()}"
)
# Scratch check (from a separate cell): two NumPy arrays can be concatenated as
# Python lists, which the combined distance function below relies on.
list(np.array([1])) + list(np.array([1,2]))
import numpy as np
import pandas as pd
from unlikely.models import Models, Model
from unlikely.priors import Beta
from unlikely.engine import abc_smc
from unlikely.misc import create_images_from_data
def simulate(priors, obs):
genetic_test = np.random.binomial(n=1, p=priors['genetic_test'], size=1)
twin_birth = np.random.binomial(n=1, p=priors['twin_birth'], size=len(obs['twin_birth']))
return {
'genetic_test': genetic_test,
'twin_birth': twin_birth
}
def distance(x,y):
    """
    Number of mismatches between simulated and observed data, across both the
    genetic-test result and the twin-birth observations.
    """
    x_list = list(x['genetic_test']) + list(x['twin_birth'])
    y_list = list(y['genetic_test']) + list(y['twin_birth'])
    return sum(abs(np.array(x_list) - np.array(y_list)))
obs = {
'genetic_test': [1], # Species A
'twin_birth': [1, 0]
}
# (The doctor's claim of species A is already encoded above as obs['genetic_test'] = [1].)
num_particles = 5000
epsilons = [0]
models = Models(
[
Model(
name='Species A',
priors=[
Beta(alpha=8000 + 1, beta=2000 + 1, name="genetic_test"),
Beta(alpha=1000 + 1, beta=9000 + 1, name="twin_birth")
],
simulate=simulate,
prior_model_proba=0.5,
),
Model(
name='Species B',
priors=[
Beta(alpha=3500 + 1, beta=6500 + 1, name="genetic_test"),
Beta(alpha=2000 + 1, beta=8000 + 1, name="twin_birth")
],
simulate=simulate,
prior_model_proba=0.5,
)
]
)
abc_smc(
num_particles=num_particles,
epsilons=epsilons,
models=models,
obs=obs,
distance=distance,
)
print(
"The probabilities of the species of the mother, given the doctor's claim and births: "\
+ f"{models.get_posterior_probabilities()}"
)
```
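For comparison, here are the exact Bayes-rule answers for both parts of 2H4 (a minimal sketch; since the test identifies species B correctly with probability 0.65, it reports "A" for a true species B panda with probability 0.35, and the 0.36 prior in the second part is the exact 2H3 posterior):
```
p_test_a_given_a, p_test_a_given_b = 0.8, 1 - 0.65

# Part 1: genetic test only, flat prior over species.
prior_a, prior_b = 0.5, 0.5
evidence = prior_a * p_test_a_given_a + prior_b * p_test_a_given_b
print(prior_a * p_test_a_given_a / evidence)   # ~0.696

# Part 2: use the birth data (1 twin, then 1 singleton) as the prior.
prior_a, prior_b = 0.36, 0.64
evidence = prior_a * p_test_a_given_a + prior_b * p_test_a_given_b
print(prior_a * p_test_a_given_a / evidence)   # 0.5625
```
Both numbers agree with the model posterior probabilities printed by the ABC runs above.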
```
import boto3
import datetime
import json
import pandas as pd
import os
from pathlib import Path
import json
from sklearn.metrics import confusion_matrix
from sklearn.metrics import ConfusionMatrixDisplay
import matplotlib.pyplot as plt
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_curve, auc
from itertools import cycle
from sklearn.metrics import accuracy_score
import numpy as np
from numpy import interp
from pycm import *
from sklearn.metrics import log_loss
from sklearn.metrics import balanced_accuracy_score
def time_converter(o):
if isinstance(o, datetime.datetime):
return o.__str__()
rootpath = os.path.join(r'replace your path here\mturk-task-helper')
answers_path = os.path.join(rootpath,"batch100_HITs","answers", "1selwyn_answers_classification_task_layout2_3_batch100.csv")
results_path = os.path.join(rootpath,"batch100_HITs","all")
output_results = os.path.join(rootpath,"batch100_HITs","all","results")
if not os.path.exists(results_path):
os.mkdir(results_path)
if not os.path.exists(output_results):
os.mkdir(output_results)
assert (os.path.exists(rootpath) and os.path.exists(answers_path)
        and os.path.exists(results_path) and os.path.exists(output_results)), \
    "One of the above paths does not exist or is incorrect. Please check before proceeding to the next cell."
region_name = 'us-east-1'
endpoint_url = 'https://mturk-requester-sandbox.us-east-1.amazonaws.com'
prod_url = "https://mturk-requester.us-east-1.amazonaws.com"
client = boto3.client(
'mturk',
endpoint_url=prod_url,
region_name=region_name,
)
# get_account_balance() returns $10,000.00 in the MTurk Developer Sandbox; against the production endpoint it returns the real account balance
print(client.get_account_balance()['AvailableBalance'])
files = os.listdir(os.path.join(results_path))
files
def save_file(final_results: list, filename_with_ext):
    """Save a list of dicts (or any DataFrame-convertible object) to CSV."""
result = pd.DataFrame.from_dict(final_results)
result.to_csv(os.path.join(os.getcwd(),filename_with_ext), index=False)
filename = 'all_batch_results.csv'
df = pd.read_csv(os.path.join(results_path, filename))
```
#### The following cells show how the batch results are structured and what data is needed to score the tasks
```
df.columns
submitted_answers = df[['HITId','Answer.taskAnswers', 'WorkerId', 'WorkTimeInSeconds', 'LifetimeApprovalRate','Approve', 'Reject']]
submitted_answers['WorkTimeInSeconds'].max()
submitted_answers.columns
workers = list(submitted_answers.groupby(['WorkerId']).groups.keys())
print("Number of unique workers:",len(workers))
workerid = workers[0]
submitted_answers.loc[submitted_answers['WorkerId'] == workerid]['WorkTimeInSeconds']
answers = pd.read_csv(answers_path)
answers.columns
```
### Uncomment the following to see the answers format and values
```
#answers.head()
hitids = list(submitted_answers.groupby(['HITId']).groups.keys())
print("Number of unique HITs: ",len(hitids))
hit_wise_scores = dict()
for id in hitids:
hit_wise_scores[id] = dict()
#print(json.loads(submitted_answers))
all_count = len(submitted_answers)
count = 0
answer_count = 0
scores_dict = []
worker_total_scores = 0
image_wise_scores = []
worker_wise_scores = []
hit_wise_scores = []
total_hit_time = 0
total_hit_score = 0
scores = 0
task_count = 0
total_worker_time = 0
feedback = ""
```
## Scoring approach
### For each worker, get the submitted answers, compare them against the answer key, and score
Each HIT contains 25 vehicle images, so score = (number of correct answers) / 25. An answer counts as correct when the submitted class label equals the answer label, including the case where the answer label is None and the worker also answered None.
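For reference, here is a minimal sketch of that rule; the function names are illustrative only, and the loop below applies the same logic directly to the batch DataFrame:
```
def score_image(submitted: str, answer: str) -> int:
    """1 if the submitted class label equals the answer label (including a matching 'None'), else 0."""
    return 1 if submitted == answer else 0

def hit_percent_score(image_scores) -> float:
    """Percentage score for a HIT of 25 images."""
    return sum(image_scores) * 100 / 25

# e.g. 19 correct answers out of 25 -> 76.0
# hit_percent_score([1] * 19 + [0] * 6)
```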
```
for worker in workers:
worker_total_scores = 0
total_worker_time = 0
worker_answers = submitted_answers.loc[submitted_answers['WorkerId'] == worker]
#print(len(worker_answers.index))
for index in worker_answers.index:
feedback = ""
ans = worker_answers['Answer.taskAnswers'][index]
#print(ans.replace("[","").replace("]",""))
ajson = json.loads(ans.replace("[","").replace("]",""))
for k,v in ajson.items():
if k != "feedback":
#print(answers['image_url'] , k)
#print(len(k.split("/")))
#_,_,_,folder,filename = k.split("/")
#print(folder+"/"+filename)
                # Check whether this vehicle image URL exists in the answer key
exists = k in answers['image_url'].tolist()
# get results for this vehicle image url
values = answers.loc[answers['image_url'] == k]
t = type(v)
#print( count,k)
#check received labels
if t == dict:
label_value = v['label'] if v['label'] else "None"
answered = label_value.replace(": ","").strip()
else:
answered = v.replace(": ","").strip()
#check if label is correct answer
if answered == values.iloc[0]['class']:
image_wise_scores.append({"image_url":k, "submitted":answered, "answer": values.iloc[0]['class'], "score":1,"WorkerId":worker,"HITId": worker_answers['HITId'][index],})
scores += 1
count += 1
                #we do not want to reject workers, since we only aim to identify gaps in training and to analyze the results and accuracy from this batch.
                #this will NOT hold if you plan to run a true evaluation in the future, once "reasonable" training has been provided to the workers.
elif answered != values.iloc[0]['class'] and (answered == 'None' or answered == 'Not relevant vehicles'):
image_wise_scores.append({"image_url":k,"submitted":answered, "answer": values.iloc[0]['class'], "score":0, "WorkerId":worker,"HITId": worker_answers['HITId'][index],})
count += 1
else:
image_wise_scores.append({"image_url":k,"submitted":answered, "answer": values.iloc[0]['class'],"score":0,"WorkerId":worker,"HITId": worker_answers['HITId'][index],})
count += 1
else:
feedback = v
print(v)
#overall score for this HIT Id and for this worker
percent_score = scores*100/25
approve = ""
reject = ""
print("*************decision rule to approve or reject: we approve for correct answers greater than 70% and approve otherwise reject************")
#we approve for correct answers greater than 70% and approve otherwise reject
if percent_score > 70:
approve = "x"
else:
reject = "Number of incorrect class labels are more. Sorry. Maybe you can spend a bit more time and utilize duration of task and utlizie looking at examples."
scores_dict.append({"WorkerId":worker, "HITId":worker_answers['HITId'][index],"total_worker_score":scores,"task_percentage_score":percent_score,
"approve":approve, "reject":reject,"WorkTimeInSeconds":worker_answers['WorkTimeInSeconds'][index], "LifetimeApprovalRate":worker_answers['LifetimeApprovalRate'][index]})
image_wise_scores.append({"WorkerId":worker,"HITId": worker_answers['HITId'][index],"total_score":scores,"approve":approve, "reject":reject, "WorkTimeInSeconds":worker_answers['WorkTimeInSeconds'][index], "LifetimeApprovalRate":worker_answers['LifetimeApprovalRate'][index]})
total_worker_time += worker_answers['WorkTimeInSeconds'][index]
if count == 25:
worker_total_scores += scores
answer_count += 1
count = 0
scores = 0
#print score for this worker and append this worker's scores, results
print(worker_total_scores,25*len(worker_answers.index),len(worker_answers.index) , worker_total_scores*100/(25*len(worker_answers.index)))
worker_percent_score = worker_total_scores*100/(25*len(worker_answers.index))
worker_wise_scores.append({"WorkerId":worker,"total_worker_score":worker_total_scores,"worker_percent_score":worker_percent_score,
"TotalWorkTimeInSeconds":total_worker_time, "total_tasks": len(worker_answers.index),"LifetimeApprovalRate":worker_answers['LifetimeApprovalRate'][index], "feedback":feedback})
all_count,answer_count
len(scores_dict), len(image_wise_scores)
raw_results = pd.DataFrame.from_dict(scores_dict)
raw_results.to_csv(os.path.join(output_results,"selwyn_raw_classification_scores_all.csv"), index=False)
image_wise_results = pd.DataFrame.from_dict(image_wise_scores)
image_wise_results.to_csv(os.path.join(output_results,"selwyn_image_wise_worker_classification_scores_all.csv"), index=False)
result = pd.DataFrame.from_dict(worker_wise_scores)
result.to_csv(os.path.join(output_results,"selwyn_worker_wise_classification_scores_all.csv"), index=False)
print(len(worker_wise_scores))
best_workers = []
best_worker_wise_scores = []
for worker_score in worker_wise_scores:
print(worker_score['worker_percent_score'],float(worker_score['worker_percent_score']) > 70.0)
if float(worker_score['worker_percent_score']) > 70.0:
best_workers.append(worker_score['WorkerId'])
best_worker_wise_scores.append(worker_score)
print( "Number of best worker", len(best_workers))
print("Number of workers ", len(worker_wise_scores))
len(best_worker_wise_scores)
result = pd.DataFrame.from_dict(best_worker_wise_scores)
result.to_csv(os.path.join(output_results,"new","selwyn_best_worker_wise_classification_scores_all.csv"), index=False)
labels = answers['class'].unique().tolist()
labels
print("Buses are not there in this batch of images.")
```
### The following approach calculates a confusion matrix and analyzes the true-positive rate for each worker's answers
```
labels = ['Trucks', 'Small trailers', 'Specialized vehicles',
'Large trailers', 'Small vehicles', 'Vans and RVs','Buses']
def plot_results(answers_labels, worker_hit_labels, labels, roc_filename_path):
    """Compute per-class accuracy and one-vs-rest ROC points for a single worker's HIT
    and save them as a CSV named after roc_filename_path."""
    classification_report = []
    acc = {}
y_ans = label_binarize(answers_labels, classes=labels)
y_sub = label_binarize(worker_hit_labels, classes=labels)
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(len(labels)):
fpr[i], tpr[i], _ = roc_curve(y_ans[:, i], y_sub[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
acc[labels[i]] = {}
acc[labels[i]] = accuracy_score(y_ans[:, i], y_sub[:, i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_ans.ravel(), y_sub.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(len(labels))]))
    # Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(len(labels)):
mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute macro TPR
mean_tpr /= len(labels)
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
worker, hitid,_ = os.path.basename(roc_filename_path).split(".")[0].split("_")
colors = cycle(['red', 'green','blue','violet', 'deepskyblue', 'darkorange', 'cyan','lime' ])
for i, color in zip(range(len(labels)), colors):
classification_report.append({"label":labels[i],"accuracy":acc[labels[i]],"tpr": tpr[i].tolist(), "fpr": fpr[i].tolist(),"worker": worker, "hitid":hitid})
save_file(classification_report, os.path.join(output_results,roc_filename_path.replace(".png",".csv")))
```
### Cohen's kappa scores measure inter-rater reliability. However, they do not necessarily account well for imbalanced class distributions.
```
def calculate_cohen_kappa(worker1_labels, worker2_hit_labels, labels, worker_hit_id, hitid):
    from sklearn.metrics import cohen_kappa_score
    from sklearn.preprocessing import label_binarize
    class_cohen_kappa_score = {}
    class_cohen_kappa_score['worker_hit_id'] = worker_hit_id
    class_cohen_kappa_score['HITId'] = hitid
    # One-vs-rest binarization of both label sets, then a per-class kappa.
    y_ans = label_binarize(worker1_labels, classes=labels)
    y_sub = label_binarize(worker2_hit_labels, classes=labels)
    for i in range(len(labels)):
        class_cohen_kappa_score[labels[i]] = cohen_kappa_score(y_ans[:, i], y_sub[:, i])
    return class_cohen_kappa_score
```
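A quick usage sketch of `calculate_cohen_kappa` on made-up labels (not batch data), just to show the shape of the returned dict of per-class (one-vs-rest) kappa values:
```
toy_labels = ['Trucks', 'Buses', 'Small vehicles']
rater_a = ['Trucks', 'Buses', 'Trucks', 'Small vehicles']
rater_b = ['Trucks', 'Trucks', 'Trucks', 'Small vehicles']
calculate_cohen_kappa(rater_a, rater_b, toy_labels,
                      "workerA vs workerB for hit : demo", "demo")
# -> {'worker_hit_id': 'workerA vs workerB for hit : demo', 'HITId': 'demo',
#     'Trucks': 0.5, 'Buses': 0.0, 'Small vehicles': 1.0}
```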
### The following converts confusion-matrix results to a DataFrame so we can save them to CSV and analyze them further with join queries
```
def cm2df(cm, labels, worker_hitid):
    """Convert a confusion-matrix array into a labeled DataFrame (rows = actual, columns = predicted)."""
    rows = {}
    for i, row_label in enumerate(labels):
        rows[row_label] = {col_label: cm[i, j] for j, col_label in enumerate(labels)}
    df = pd.DataFrame.from_dict(rows, orient='index')
    #df.style.set_table_attributes("style='display:inline'").set_caption(worker_hitid)
    return df[labels]
worker_hit_labels = []
answers_labels = []
cm = []
urls = []
classification_report = []
total_answers_labels = []
total_worker_hit_labels = []
lw =2
all_cm = []
worker_count = 0
cohen_kappa_scores = []
answer_worker_labels = {}
```
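And a small sanity check of `cm2df` on a hand-built 2×2 matrix (toy labels, not from the batch):
```
toy_cm = np.array([[8, 2],
                   [1, 4]])
cm2df(toy_cm, ['Trucks', 'Buses'], "workerX_hitY")
#         Trucks  Buses
# Trucks       8      2
# Buses        1      4
```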
### The following cell calculates a confusion matrix, the worker-wise true-positive rate, and other statistics derived from the confusion matrix.
### Since this differs from the scoring mechanism above, we recompute it from scratch from the batch results for each worker.
```
for worker in workers:
worker_answers = submitted_answers.loc[submitted_answers['WorkerId'] == worker]
for index in worker_answers.index:
feedback = ""
ans = worker_answers['Answer.taskAnswers'][index]
ajson = json.loads(ans.replace("[","").replace("]",""))
urls = []
worker_hit_labels = []
answers_labels = []
for k,v in ajson.items():
if k != "feedback":
exists = k in answers['image_url'].tolist()
values = answers.loc[answers['image_url'] == k]
t = type(v)
if exists:
urls.append(k)
if t == dict:
label_value = v['label'] if v['label'] else "None"
answered = label_value.replace(": ","").strip()
else:
answered = v.replace(": ","").strip()
worker_hit_labels.append(answered);
answers_labels.append(values.iloc[0]['class'])
else:
#worker_hit_labels.append(answered);
#answers_labels.append(values.iloc[0]['class'])
feedback = v
plot_results(answers_labels, worker_hit_labels, labels, os.path.join(output_results,worker+"_"+worker_answers['HITId'][index]+"_ROC"+".png"))
cohen_kappa_scores.append(calculate_cohen_kappa(answers_labels, worker_hit_labels, labels,worker+"_"+worker_answers['HITId'][index],worker_answers['HITId'][index]))
answer_worker_labels[worker+"_"+worker_answers['HITId'][index]] ={"HITId":worker_answers['HITId'][index], "labels":worker_hit_labels}
res = confusion_matrix(answers_labels, worker_hit_labels, labels=labels)
#print(res, labels)
cm_display = ConfusionMatrixDisplay(res,display_labels=labels)
cm_display.plot(xticks_rotation=80)
plt.tight_layout(pad=1)
plt.savefig(os.path.join(output_results,worker+"_"+worker_answers['HITId'][index]+".png"), pad_inches=0.2)
plt.tight_layout()
all_cm.append(cm2df(res, labels,worker+"_"+worker_answers['HITId'][index]))
cm.append({"worker_hit":worker+"_"+worker_answers['HITId'][index],"answers":answers_labels,"submitted":worker_hit_labels,"ursl":urls,"cm":res.tolist() })
save_file(cm,os.path.join(output_results,"confusion_matrix_results_all.csv"))
len(cohen_kappa_scores)
all_cohen_kappa_scores = []
for worker_hit_id1 in answer_worker_labels.keys():
for worker_hit_id2 in answer_worker_labels.keys():
worker1,hit1 = worker_hit_id1.split("_")
worker2,hit2 = worker_hit_id2.split("_")
if worker1 != worker2 and hit1 == hit2 :
worker1_labels = answer_worker_labels[worker_hit_id1]["labels"]
worker2_labels = answer_worker_labels[worker_hit_id2]["labels"]
all_cohen_kappa_scores.append(calculate_cohen_kappa(worker1_labels, worker2_labels, labels,worker1 + " vs "+ worker2+ " for hit : "+hit1, hit1))
len(all_cohen_kappa_scores)
save_file(all_cohen_kappa_scores,os.path.join(output_results,"all_cohen_kappa_scores_layout123.csv"))
save_file(cohen_kappa_scores,os.path.join(output_results,"cohen_kappa_scores_layout123.csv"))
len(all_cm)
```
### Combine all confusion matrices together into one
```
final = pd.concat(all_cm,axis=1, keys=workers)
save_file(final,os.path.join(output_results,"all_confusion_matrix_results_all.csv"))
save_file(classification_report, os.path.join(output_results,"classification_reports.csv"))
len(answers_labels)
```
### Example of one worker's multi-class confusion matrix using the PyCM library
```
cmr = ConfusionMatrix(actual_vector=answers_labels, predict_vector=worker_hit_labels) # Create CM From Data
cmr.classes
cmr.table
```
### Confusion-matrix statistics for a single worker
```
print(cmr)
```
### Plot a heatmap of one worker's confusion matrix
```
plt.rcParams["figure.figsize"] = (12,10)
cmr.plot(cmap=plt.cm.Greens,number_label=True,plot_lib="matplotlib")
plt.savefig(worker+"_"+worker_answers['HITId'][index]+".png", pad_inches=0.2)
#plt.tight_layout()
plt.show()
len(answers_labels)
def plot_results1(answers_labels, worker_hit_labels, labels, roc_filename_path):
    """Same as plot_results, but for the labels combined across all workers
    (no worker/HIT id is parsed from the filename)."""
    classification_report = []
    acc = {}
y_ans = label_binarize(answers_labels, classes=labels)
y_sub = label_binarize(worker_hit_labels, classes=labels)
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(len(labels)):
fpr[i], tpr[i], _ = roc_curve(y_ans[:, i], y_sub[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
acc[labels[i]] = {}
acc[labels[i]] = accuracy_score(y_ans[:, i], y_sub[:, i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_ans.ravel(), y_sub.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(len(labels))]))
    # Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(len(labels)):
mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= len(labels)
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
colors = cycle(['red', 'green','blue','violet', 'deepskyblue', 'darkorange', 'cyan','lime' ])
for i, color in zip(range(len(labels)), colors):
classification_report.append({"label":labels[i],"accuracy":acc[labels[i]],"tpr": tpr[i].tolist(), "fpr": fpr[i].tolist()})
save_file(classification_report, os.path.join(output_results,roc_filename_path.replace(".png",".csv")))
worker_hit_labels = []
answers_labels = []
cm1 = []
urls = []
classification_report = []
total_answers_labels = []
total_worker_hit_labels = []
normalized_cm = None
```
### We want to compare the PyCM confusion-matrix results above with a single confusion matrix built by combining every worker's answers
```
for worker in workers:
worker_answers = submitted_answers.loc[submitted_answers['WorkerId'] == worker]
for index in worker_answers.index:
feedback = ""
ans = worker_answers['Answer.taskAnswers'][index]
ajson = json.loads(ans.replace("[","").replace("]",""))
for k,v in ajson.items():
if k != "feedback":
exists = k in answers['image_url'].tolist()
values = answers.loc[answers['image_url'] == k]
t = type(v)
if exists:
urls.append(k)
if t == dict:
label_value = v['label'] if v['label'] else "None"
answered = label_value.replace(": ","").strip()
else:
answered = v.replace(": ","").strip()
total_worker_hit_labels.append(answered);
total_answers_labels.append(values.iloc[0]['class'])
            else:
                # k == "feedback": record the feedback only; appending labels here would
                # just duplicate the previous image's answer and ground-truth class.
                feedback = v
plot_results1(total_answers_labels, total_worker_hit_labels, labels, os.path.join(output_results,"new","roc_4421963.png"))
res1 = confusion_matrix(total_answers_labels, total_worker_hit_labels, labels=labels)
print(res1, labels)
cm_display = ConfusionMatrixDisplay(res1,display_labels=labels)
cm_display.plot(xticks_rotation=80)
plt.tight_layout(pad=1)
plt.savefig(os.path.join(output_results,"cm_all"+".png"), pad_inches=0.2)
plt.tight_layout()
normalized_cm = res1.astype('float') / res1.sum(axis=1)[:, np.newaxis]
normalized_cm_sk =confusion_matrix(total_answers_labels, total_worker_hit_labels, labels=labels, normalize="true")
cm.append({"batch":"all","answers":total_answers_labels,"submitted":total_worker_hit_labels,"ursl":urls,"cm":res1 })
#save_file(cm, os.path.join(output_results,"overall_cm_reports_4408618.csv"))
res1.tolist()
normalized_cm_sk
df = pd.DataFrame(res1.tolist(), index =labels,columns =labels)
df
df.to_csv(os.path.join(output_results,"overall_best_of_worker_accuracy_cm_reports.csv"), index_label=[df.index.name, df.columns.name])
normalized_cm.tolist()
normalized_cm_sk.tolist()
```
### Normalized overall confusion matrix to account for the imbalanced class distribution
```
df1 = pd.DataFrame(normalized_cm.tolist(), index =labels,columns =labels)
df1
df1.to_csv(os.path.join(output_results,"overall_best_of_worker_accuracy_normalized_cm_reports.csv"), index_label=[df.index.name, df.columns.name])
```
### Apply PyCM to analyze the combined confusion matrix
```
cmr = ConfusionMatrix(actual_vector=total_answers_labels, predict_vector=total_worker_hit_labels) # Create CM From Data
print(cmr)
plt.rcParams["figure.figsize"] = (12,10)
cmr.plot(cmap=plt.cm.Greens,number_label=True,plot_lib="matplotlib")
```
### Normalize Overall Confusion Matrix (combined)
```
cmr.to_array(normalized=True).tolist()
```
### Plot heatmap for Overall Confusion Matrix combined together
```
cmr.plot(cmap=plt.cm.Reds,normalized=True,number_label=True,plot_lib="seaborn")
```
|
github_jupyter
|
import boto3
import datetime
import json
import pandas as pd
import os
from pathlib import Path
import json
from sklearn.metrics import confusion_matrix
from sklearn.metrics import ConfusionMatrixDisplay
import matplotlib.pyplot as plt
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_curve, auc
from itertools import cycle
from sklearn.metrics import accuracy_score
import numpy as np
from numpy import interp
from pycm import *
from sklearn.metrics import log_loss
from sklearn.metrics import balanced_accuracy_score
def time_converter(o):
if isinstance(o, datetime.datetime):
return o.__str__()
rootpath = os.path.join(r'replace your path here\mturk-task-helper')
answers_path = os.path.join(rootpath,"batch100_HITs","answers", "1selwyn_answers_classification_task_layout2_3_batch100.csv")
results_path = os.path.join(rootpath,"batch100_HITs","all")
output_results = os.path.join(rootpath,"batch100_HITs","all","results")
if not os.path.exists(results_path):
os.mkdir(results_path)
if not os.path.exists(output_results):
os.mkdir(output_results)
assert os.path.exists(rootpath) and os.path.exists(answers_path) and os.path.exists(results_path) and os.path.exists(output_results),
"One of the aforementioned paths do not exist or incorrect. Please check before proceeding to next cell"
region_name = 'us-east-1'
endpoint_url = 'https://mturk-requester-sandbox.us-east-1.amazonaws.com'
prod_url = "https://mturk-requester.us-east-1.amazonaws.com"
client = boto3.client(
'mturk',
endpoint_url=prod_url,
region_name=region_name,
)
# This will return $10,000.00 in the MTurk Developer Sandbox
print(client.get_account_balance()['AvailableBalance'])
files = os.listdir(os.path.join(results_path))
files
def save_file(final_results:[dict],filename_with_ext):
result = pd.DataFrame.from_dict(final_results)
result.to_csv(os.path.join(os.getcwd(),filename_with_ext), index=False)
filename = 'all_batch_results.csv'
df = pd.read_csv(os.path.join(results_path, filename))
df.columns
submitted_answers = df[['HITId','Answer.taskAnswers', 'WorkerId', 'WorkTimeInSeconds', 'LifetimeApprovalRate','Approve', 'Reject']]
submitted_answers['WorkTimeInSeconds'].max()
submitted_answers.columns
workers = list(submitted_answers.groupby(['WorkerId']).groups.keys())
print("Number of unique workers:",len(workers))
workerid = workers[0]
submitted_answers.loc[submitted_answers['WorkerId'] == workerid]['WorkTimeInSeconds']
answers = pd.read_csv(answers_path)
answers.columns
#answers.head()
hitids = list(submitted_answers.groupby(['HITId']).groups.keys())
print("Number of unique HITs: ",len(hitids))
hit_wise_scores = dict()
for id in hitids:
hit_wise_scores[id] = dict()
#print(json.loads(submitted_answers))
all_count = len(submitted_answers)
count = 0
answer_count = 0
scores_dict = []
worker_total_scores = 0
image_wise_scores = []
worker_wise_scores = []
hit_wise_scores = []
total_hit_time = 0
total_hit_score = 0
scores = 0
task_count = 0
total_worker_time = 0
feedback = ""
for worker in workers:
worker_total_scores = 0
total_worker_time = 0
worker_answers = submitted_answers.loc[submitted_answers['WorkerId'] == worker]
#print(len(worker_answers.index))
for index in worker_answers.index:
feedback = ""
ans = worker_answers['Answer.taskAnswers'][index]
#print(ans.replace("[","").replace("]",""))
ajson = json.loads(ans.replace("[","").replace("]",""))
for k,v in ajson.items():
if k != "feedback":
#print(answers['image_url'] , k)
#print(len(k.split("/")))
#_,_,_,folder,filename = k.split("/")
#print(folder+"/"+filename)
# Check if this vehicle image url
exists = k in answers['image_url'].tolist()
# get results for this vehicle image url
values = answers.loc[answers['image_url'] == k]
t = type(v)
#print( count,k)
#check received labels
if t == dict:
label_value = v['label'] if v['label'] else "None"
answered = label_value.replace(": ","").strip()
else:
answered = v.replace(": ","").strip()
#check if label is correct answer
if answered == values.iloc[0]['class']:
image_wise_scores.append({"image_url":k, "submitted":answered, "answer": values.iloc[0]['class'], "score":1,"WorkerId":worker,"HITId": worker_answers['HITId'][index],})
scores += 1
count += 1
#we do not want to reject workers since we only aim to identify the gaps in training and analyzing the results and accuracy from this batch.
#this definitely will NOT hold true if you do plan to provide true evaluation in future once a "reasonable" training is provided to worker.
elif answered != values.iloc[0]['class'] and (answered == 'None' or answered == 'Not relevant vehicles'):
image_wise_scores.append({"image_url":k,"submitted":answered, "answer": values.iloc[0]['class'], "score":0, "WorkerId":worker,"HITId": worker_answers['HITId'][index],})
count += 1
else:
image_wise_scores.append({"image_url":k,"submitted":answered, "answer": values.iloc[0]['class'],"score":0,"WorkerId":worker,"HITId": worker_answers['HITId'][index],})
count += 1
else:
feedback = v
print(v)
#overall score for this HIT Id and for this worker
percent_score = scores*100/25
approve = ""
reject = ""
print("*************decision rule to approve or reject: we approve for correct answers greater than 70% and approve otherwise reject************")
#we approve for correct answers greater than 70% and approve otherwise reject
if percent_score > 70:
approve = "x"
else:
reject = "Number of incorrect class labels are more. Sorry. Maybe you can spend a bit more time and utilize duration of task and utlizie looking at examples."
scores_dict.append({"WorkerId":worker, "HITId":worker_answers['HITId'][index],"total_worker_score":scores,"task_percentage_score":percent_score,
"approve":approve, "reject":reject,"WorkTimeInSeconds":worker_answers['WorkTimeInSeconds'][index], "LifetimeApprovalRate":worker_answers['LifetimeApprovalRate'][index]})
image_wise_scores.append({"WorkerId":worker,"HITId": worker_answers['HITId'][index],"total_score":scores,"approve":approve, "reject":reject, "WorkTimeInSeconds":worker_answers['WorkTimeInSeconds'][index], "LifetimeApprovalRate":worker_answers['LifetimeApprovalRate'][index]})
total_worker_time += worker_answers['WorkTimeInSeconds'][index]
if count == 25:
worker_total_scores += scores
answer_count += 1
count = 0
scores = 0
#print score for this worker and append this worker's scores, results
print(worker_total_scores,25*len(worker_answers.index),len(worker_answers.index) , worker_total_scores*100/(25*len(worker_answers.index)))
worker_percent_score = worker_total_scores*100/(25*len(worker_answers.index))
worker_wise_scores.append({"WorkerId":worker,"total_worker_score":worker_total_scores,"worker_percent_score":worker_percent_score,
"TotalWorkTimeInSeconds":total_worker_time, "total_tasks": len(worker_answers.index),"LifetimeApprovalRate":worker_answers['LifetimeApprovalRate'][index], "feedback":feedback})
all_count,answer_count
len(scores_dict), len(image_wise_scores)
raw_results = pd.DataFrame.from_dict(scores_dict)
raw_results.to_csv(os.path.join(output_results,"selwyn_raw_classification_scores_all.csv"), index=False)
image_wise_results = pd.DataFrame.from_dict(image_wise_scores)
image_wise_results.to_csv(os.path.join(output_results,"selwyn_image_wise_worker_classification_scores_all.csv"), index=False)
result = pd.DataFrame.from_dict(worker_wise_scores)
result.to_csv(os.path.join(output_results,"selwyn_worker_wise_classification_scores_all.csv"), index=False)
print(len(worker_wise_scores))
best_workers = []
best_worker_wise_scores = []
for worker_score in worker_wise_scores:
print(worker_score['worker_percent_score'],float(worker_score['worker_percent_score']) > 70.0)
if float(worker_score['worker_percent_score']) > 70.0:
best_workers.append(worker_score['WorkerId'])
best_worker_wise_scores.append(worker_score)
print( "Number of best worker", len(best_workers))
print("Number of workers ", len(worker_wise_scores))
len(best_worker_wise_scores)
result = pd.DataFrame.from_dict(best_worker_wise_scores)
result.to_csv(os.path.join(output_results,"new","selwyn_best_worker_wise_classification_scores_all.csv"), index=False)
labels = answers['class'].unique().tolist()
labels
print("Buses are not there in this batch of images.")
labels = ['Trucks', 'Small trailers', 'Specialized vehicles',
'Large trailers', 'Small vehicles', 'Vans and RVs','Buses']
def plot_results(answers_labels, worker_hit_labels, labels, roc_filename_path):
classification_report = []
acc = {}
y_ans = label_binarize(answers_labels, classes=labels)
y_sub = label_binarize(worker_hit_labels, classes=labels)
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(len(labels)):
fpr[i], tpr[i], _ = roc_curve(y_ans[:, i], y_sub[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
acc[labels[i]] = {}
acc[labels[i]] = accuracy_score(y_ans[:, i], y_sub[:, i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_ans.ravel(), y_sub.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(len(labels))]))
# Then interpolate all ROC curves at this points
mean_tpr = np.zeros_like(all_fpr)
for i in range(len(labels)):
mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute macro TPR
mean_tpr /= len(labels)
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
worker, hitid,_ = os.path.basename(roc_filename_path).split(".")[0].split("_")
colors = cycle(['red', 'green','blue','violet', 'deepskyblue', 'darkorange', 'cyan','lime' ])
for i, color in zip(range(len(labels)), colors):
classification_report.append({"label":labels[i],"accuracy":acc[labels[i]],"tpr": tpr[i].tolist(), "fpr": fpr[i].tolist(),"worker": worker, "hitid":hitid})
save_file(classification_report, os.path.join(output_results,roc_filename_path.replace(".png",".csv")))
def calculate_cohen_kappa(worker1_labels, worker2_hit_labels, labels, worker_hit_id, hitid):
from sklearn.metrics import cohen_kappa_score
from sklearn.preprocessing import label_binarize
from itertools import cycle
class_cohen_kappa_score = {}
class_cohen_kappa_score['worker_hit_id'] = worker_hit_id
class_cohen_kappa_score['HITId'] = hitid
y_ans = label_binarize(worker1_labels, classes=labels)
y_sub = label_binarize(worker_hit_labels, classes=labels)
scores = {}
for i in range(len(labels)):
class_cohen_kappa_score[labels[i]] = cohen_kappa_score(y_ans[:, i], y_sub[:, i])
return class_cohen_kappa_score
def cm2df(cm, labels, worker_hitid):
df = pd.DataFrame()
# rows
for i, row_label in enumerate(labels):
rowdata={}
# columns
for j, col_label in enumerate(labels):
rowdata[col_label]=cm[i,j]
df = df.append(pd.DataFrame.from_dict({row_label:rowdata}, orient='index'))
#df.style.set_table_attributes("style='display:inline'").set_caption(worker_hitid)
return df[labels]
worker_hit_labels = []
answers_labels = []
cm = []
urls = []
classification_report = []
total_answers_labels = []
total_worker_hit_labels = []
lw =2
all_cm = []
worker_count = 0
cohen_kappa_scores = []
answer_worker_labels = {}
for worker in workers:
worker_answers = submitted_answers.loc[submitted_answers['WorkerId'] == worker]
for index in worker_answers.index:
feedback = ""
ans = worker_answers['Answer.taskAnswers'][index]
ajson = json.loads(ans.replace("[","").replace("]",""))
urls = []
worker_hit_labels = []
answers_labels = []
for k,v in ajson.items():
if k != "feedback":
exists = k in answers['image_url'].tolist()
values = answers.loc[answers['image_url'] == k]
t = type(v)
if exists:
urls.append(k)
if t == dict:
label_value = v['label'] if v['label'] else "None"
answered = label_value.replace(": ","").strip()
else:
answered = v.replace(": ","").strip()
worker_hit_labels.append(answered);
answers_labels.append(values.iloc[0]['class'])
else:
#worker_hit_labels.append(answered);
#answers_labels.append(values.iloc[0]['class'])
feedback = v
plot_results(answers_labels, worker_hit_labels, labels, os.path.join(output_results,worker+"_"+worker_answers['HITId'][index]+"_ROC"+".png"))
cohen_kappa_scores.append(calculate_cohen_kappa(answers_labels, worker_hit_labels, labels,worker+"_"+worker_answers['HITId'][index],worker_answers['HITId'][index]))
answer_worker_labels[worker+"_"+worker_answers['HITId'][index]] ={"HITId":worker_answers['HITId'][index], "labels":worker_hit_labels}
res = confusion_matrix(answers_labels, worker_hit_labels, labels=labels)
#print(res, labels)
cm_display = ConfusionMatrixDisplay(res,display_labels=labels)
cm_display.plot(xticks_rotation=80)
plt.tight_layout(pad=1)
plt.savefig(os.path.join(output_results,worker+"_"+worker_answers['HITId'][index]+".png"), pad_inches=0.2)
plt.tight_layout()
all_cm.append(cm2df(res, labels,worker+"_"+worker_answers['HITId'][index]))
cm.append({"worker_hit":worker+"_"+worker_answers['HITId'][index],"answers":answers_labels,"submitted":worker_hit_labels,"ursl":urls,"cm":res.tolist() })
save_file(cm,os.path.join(output_results,"confusion_matrix_results_all.csv"))
len(cohen_kappa_scores)
all_cohen_kappa_scores = []
for worker_hit_id1 in answer_worker_labels.keys():
for worker_hit_id2 in answer_worker_labels.keys():
worker1,hit1 = worker_hit_id1.split("_")
worker2,hit2 = worker_hit_id2.split("_")
if worker1 != worker2 and hit1 == hit2 :
worker1_labels = answer_worker_labels[worker_hit_id1]["labels"]
worker2_labels = answer_worker_labels[worker_hit_id2]["labels"]
all_cohen_kappa_scores.append(calculate_cohen_kappa(worker1_labels, worker2_labels, labels,worker1 + " vs "+ worker2+ " for hit : "+hit1, hit1))
len(all_cohen_kappa_scores)
save_file(all_cohen_kappa_scores,os.path.join(output_results,"all_cohen_kappa_scores_layout123.csv"))
save_file(cohen_kappa_scores,os.path.join(output_results,"cohen_kappa_scores_layout123.csv"))
len(all_cm)
final = pd.concat(all_cm,axis=1, keys=workers)
save_file(final,os.path.join(output_results,"all_confusion_matrix_results_all.csv"))
save_file(classification_report, os.path.join(output_results,"classification_reports.csv"))
len(answers_labels)
cmr = ConfusionMatrix(actual_vector=answers_labels, predict_vector=worker_hit_labels) # Create CM From Data
cmr.classes
cmr.table
print(cmr)
plt.rcParams["figure.figsize"] = (12,10)
cmr.plot(cmap=plt.cm.Greens,number_label=True,plot_lib="matplotlib")
plt.savefig(worker+"_"+worker_answers['HITId'][index]+".png", pad_inches=0.2)
#plt.tight_layout()
plt.show()
len(answers_labels)
def plot_results1(answers_labels, worker_hit_labels, labels, roc_filename_path):
classification_report = []
acc = {}
y_ans = label_binarize(answers_labels, classes=labels)
y_sub = label_binarize(worker_hit_labels, classes=labels)
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(len(labels)):
fpr[i], tpr[i], _ = roc_curve(y_ans[:, i], y_sub[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
acc[labels[i]] = {}
acc[labels[i]] = accuracy_score(y_ans[:, i], y_sub[:, i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_ans.ravel(), y_sub.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(len(labels))]))
# Then interpolate all ROC curves at this points
mean_tpr = np.zeros_like(all_fpr)
for i in range(len(labels)):
mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= len(labels)
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
colors = cycle(['red', 'green','blue','violet', 'deepskyblue', 'darkorange', 'cyan','lime' ])
for i, color in zip(range(len(labels)), colors):
classification_report.append({"label":labels[i],"accuracy":acc[labels[i]],"tpr": tpr[i].tolist(), "fpr": fpr[i].tolist()})
save_file(classification_report, os.path.join(output_results,roc_filename_path.replace(".png",".csv")))
worker_hit_labels = []
answers_labels = []
cm1 = []
urls = []
classification_report = []
total_answers_labels = []
total_worker_hit_labels = []
normalized_cm = None
for worker in workers:
worker_answers = submitted_answers.loc[submitted_answers['WorkerId'] == worker]
for index in worker_answers.index:
feedback = ""
ans = worker_answers['Answer.taskAnswers'][index]
ajson = json.loads(ans.replace("[","").replace("]",""))
for k,v in ajson.items():
if k != "feedback":
exists = k in answers['image_url'].tolist()
values = answers.loc[answers['image_url'] == k]
t = type(v)
if exists:
urls.append(k)
if t == dict:
label_value = v['label'] if v['label'] else "None"
answered = label_value.replace(": ","").strip()
else:
answered = v.replace(": ","").strip()
total_worker_hit_labels.append(answered);
total_answers_labels.append(values.iloc[0]['class'])
            else:  # the "feedback" key holds free text, not an image label
                #total_worker_hit_labels.append(answered);
                #total_answers_labels.append(values.iloc[0]['class'])
                feedback = v
plot_results1(total_answers_labels, total_worker_hit_labels, labels, os.path.join(output_results,"new","roc_4421963.png"))
res1 = confusion_matrix(total_answers_labels, total_worker_hit_labels, labels=labels)
print(res1, labels)
cm_display = ConfusionMatrixDisplay(res1,display_labels=labels)
cm_display.plot(xticks_rotation=80)
plt.tight_layout(pad=1)
plt.savefig(os.path.join(output_results,"cm_all"+".png"), pad_inches=0.2)
plt.tight_layout()
normalized_cm = res1.astype('float') / res1.sum(axis=1)[:, np.newaxis]
normalized_cm_sk =confusion_matrix(total_answers_labels, total_worker_hit_labels, labels=labels, normalize="true")
cm.append({"batch":"all","answers":total_answers_labels,"submitted":total_worker_hit_labels,"urls":urls,"cm":res1 })
#save_file(cm, os.path.join(output_results,"overall_cm_reports_4408618.csv"))
res1.tolist()
normalized_cm_sk
df = pd.DataFrame(res1.tolist(), index =labels,columns =labels)
df
df.to_csv(os.path.join(output_results,"overall_best_of_worker_accuracy_cm_reports.csv"), index_label=[df.index.name, df.columns.name])
normalized_cm.tolist()
normalized_cm_sk.tolist()
df1 = pd.DataFrame(normalized_cm.tolist(), index =labels,columns =labels)
df1
df1.to_csv(os.path.join(output_results,"overall_best_of_worker_accuracy_normalized_cm_reports.csv"), index_label=[df.index.name, df.columns.name])
cmr = ConfusionMatrix(actual_vector=total_answers_labels, predict_vector=total_worker_hit_labels) # Create CM From Data
print(cmr)
plt.rcParams["figure.figsize"] = (12,10)
cmr.plot(cmap=plt.cm.Greens,number_label=True,plot_lib="matplotlib")
cmr.to_array(normalized=True).tolist()
cmr.plot(cmap=plt.cm.Reds,normalized=True,number_label=True,plot_lib="seaborn")
# Day 12: Population
I've already plotted population for an earlier day (Day 10), but perhaps this time I can visualize it in a different way. A few weeks ago, I saw a map with bar plots overlaid on it, and I wanted to see if I could recreate that concept.
## Configuration
```
import os
from mpl_toolkits.basemap import Basemap
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
import geopandas as gpd
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.patches as mpatches
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
# Desired styling for matplotlib
from matplotlib import cycler
colors = cycler('color',["44aa98","ab4498","332389","86ccec","ddcc76","cd6477","882255", "117732"])
plt.rcParams['figure.figsize'] = [6,4]
plt.rcParams['axes.spines.top'] = False
plt.rcParams['axes.spines.right'] = False
plt.rcParams['text.color'] = '212121'
plt.rcParams['xtick.color'] = '212121'
plt.rcParams['ytick.color'] = '212121'
plt.rcParams['font.family'] = 'sans serif'
plt.rcParams['axes.facecolor'] = 'None'
plt.rcParams['axes.edgecolor'] = 'dimgray'
plt.rcParams['axes.grid'] = False
plt.rcParams['grid.color'] = 'lightgray'
plt.rcParams['grid.linestyle'] = 'dashed'
plt.rcParams['xtick.labelsize'] = 'x-small'
plt.rcParams['ytick.labelsize'] = 'x-small'
plt.rcParams['legend.frameon'] = True
plt.rcParams['legend.framealpha'] = 0.8
plt.rcParams['legend.facecolor'] = 'white'
plt.rcParams['legend.edgecolor'] = 'None'
plt.rcParams['legend.fontsize'] = 'medium'
plt.rcParams['axes.labelsize'] = 'small'
plt.rcParams['savefig.facecolor'] = 'None'
plt.rcParams['savefig.edgecolor'] = 'None'
plt.rc('axes', prop_cycle=colors)
```
# geoBoundaries
Administrative boundary shapefiles are courtesy of geoBoundaries 4.0: Comprehensive Global Administrative Zones (CGAZ). These boundaries are simplified to 10% of the original data (the full-resolution single-country products); both versions are available from [https://www.geoboundaries.org/](https://www.geoboundaries.org/)
*Runfola, Daniel, Community Contributors, and [v4.0: Lindsey Rogers, Joshua Habib, Sidonie Horn, Sean Murphy, Dorian Miller, Hadley Day, Lydia Troup, Dominic Fornatora, Natalie Spage, Kristina Pupkiewicz, Michael Roth, Carolina Rivera, Charlie Altman, Isabel Schruer, Tara McLaughlin, Russ Biddle, Renee Ritchey, Emily Topness, James Turner, Sam Updike, Helena Buckman, Neel Simpson, Jason Lin], [v2.0: Austin Anderson, Heather Baier, Matt Crittenden, Elizabeth Dowker, Sydney Fuhrig, Seth Goodman, Grace Grimsley, Rachel Layko, Graham Melville, Maddy Mulder, Rachel Oberman, Joshua Panganiban, Andrew Peck, Leigh Seitz, Sylvia Shea, Hannah Slevin, Rebecca Yougerman, Lauren Hobbs]. "geoBoundaries: A global database of political administrative boundaries." Plos one 15, no. 4 (2020): e0231866.*
```
# Map to path
data_folder = os.path.join("..", "data")
adm_folder = os.path.join(data_folder, "admin")
adm_file = "geoBoundariesCGAZ_ADM0.shp"
adm_path = os.path.join(adm_folder, adm_file)
# Read in data as a GeoDataFrame
adm = gpd.read_file(adm_path, encoding='utf-8')
# Preview
adm.head(5)
```
# United Nations
Population estimates (including the breakdown between the urban and rural population) are from the 2018 revision of the [World Urbanization Prospects](https://population.un.org/wup/).
I've modified the dataset to include the ISO3 country codes. To map between the numeric country codes (on the UN data) and the alpha3 country codes (on geoBoundaries), I used [World countries](https://stefangabos.github.io/world_countries/) data.
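For reference, the mapping step is just a merge on the numeric code. A toy sketch (made-up rows and hypothetical column names, not the actual files) looks like this:
```
import pandas as pd
# Toy illustration only: join numeric ISO codes to alpha-3 codes, then uppercase them
un = pd.DataFrame({"Country code": [76, 858], "Total": [209469, 3449]})
world = pd.DataFrame({"id": [76, 858], "alpha3": ["bra", "ury"]})
un = un.merge(world, left_on="Country code", right_on="id", how="left")
un["ISO3"] = un["alpha3"].str.upper()
print(un[["Country code", "ISO3", "Total"]])
```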
```
# Map to path
pop_folder = os.path.join(data_folder, "etc", "un-wup")
pop_file = "WUP2018-F01-Total_Urban_Rural.xls"
pop_path = os.path.join(pop_folder, pop_file)
# Read file into pandas dataframe
pop = pd.read_excel(pop_path, sheet_name='Data', skiprows=16)
# Reduce to national data
pop = pop[pop.ISO3.notna()].copy()
# Convert population data to ones (original is in thousands)
pop_cols = ['Urban', 'Rural', 'Total']
pop[pop_cols] = pop[pop_cols].mul(1e3)
# Preview
pop.head(5)
# Merge with geoBoundaries data
adm_poly = pd.merge(adm, pop, left_on='ISO_CODE', right_on='ISO3', how='left')
# Preview
adm_poly.head(5)
# Get points
adm_points = adm_poly.copy()
adm_points["rep"] = adm_points.geometry.representative_point()
adm_points.set_geometry("rep", inplace=True)
# Preview
adm_points.head(5)
```
# Construct map
Using `Basemap` to create the plot
```
# Choose region of interest - South America
llcrnrlat = -50
llcrnrlon = -110
urcrnrlat = 12
urcrnrlon = -26
# Determine center point for projection
mid_lon = (urcrnrlon+llcrnrlon)/2.0
mid_lat = (urcrnrlat+llcrnrlat)/2.0
# Limit points to region of interest
adm_points_region = adm_points.cx[llcrnrlon:urcrnrlon, llcrnrlat:urcrnrlat]
# Preview the country codes of the points found in the region
print(f"Found in region: {adm_points_region.ISO3.tolist()}")
# Limit polys to region of interest
adm_poly_region = adm_poly.cx[llcrnrlon:urcrnrlon, llcrnrlat:urcrnrlat]
# Preview the country codes of the polygons found in the region
print(f"Found in region: {adm_poly_region.ISO3.tolist()}")
# Utility function - adjusted from
# https://stackoverflow.com/questions/55854988/subplots-onto-a-basemap/55890475#55890475
def build_bar(mx, my, ax, xvals, yvals, width, fcolors):
# Construct inset axes
ax_h = inset_axes(ax,
width=width,
height=width,
loc='center',
bbox_to_anchor=(mx, my),
bbox_transform=ax.transData,
borderpad=0,
axes_kwargs={'alpha': 0.35, 'visible': True})
# Plot bars
for x,y,c in zip(xvals, yvals, fcolors):
ax_h.bar(x, y, label=str(x), fc=c)
# Turn off axis
ax_h.axis('off')
return ax_h
# Construct plot
fig, ax = plt.subplots(figsize=(10, 10))
# Create basemap
m = Basemap(llcrnrlat= llcrnrlat,
llcrnrlon= llcrnrlon,
urcrnrlat= urcrnrlat,
urcrnrlon= urcrnrlon,
ax = ax,
resolution='i',
projection='tmerc',
lon_0=mid_lon,
lat_0=mid_lat)
# Style continent and coastlines
m.fillcontinents(color='#eeeeee', lake_color="w", zorder=0)
m.drawcountries(color="silver", linewidth=0.5, zorder=1)
m.drawcoastlines(color='silver', linewidth=0.5, zorder=1)
# Bar axes styling
axes_width = 0.25
n_data = len(pop_cols)
bar_colors = [f'C{i}' for i in range(n_data)]
# Find max population data in region
max_pop = adm_points_region['Total'].max()
print(f"Max population in region: {max_pop:,.0f}")
# Plot population data
for i, row in adm_points_region.iterrows():
    if row.ISO3 == row.ISO3: # NaN != NaN, so this skips countries with no matching population data
bar_data = row[pop_cols].values
x, y = row.rep.x, row.rep.y
mx, my = m(x, y)
bax = build_bar(
mx,
my,
ax,
list(range(n_data)),
bar_data,
axes_width,
bar_colors,
)
bax.set_title(f"{row.ISO3}: {row.Total/1e6:.1f}M", fontsize="x-small")
bax.set(ylim=(0,max_pop))
# Create legend
patches = [None for _ in range(n_data)]
for i in range(n_data):
patches[i] = mpatches.Patch(color=bar_colors[i], label=pop_cols[i])
ax.legend(handles=patches, loc='best', title="Population, 2018")
# Turn off axis
ax.axis('off')
# Credit sources
ax.annotate("Data source: United Nation's World Urbanization Prospects rev.2018",
xy=(1,0), xycoords='axes fraction',
fontsize="x-small", ha="right", va="bottom",
)
# Save plot
out_file = "12_Population.png"
out_path = os.path.join("..", "contributions", out_file)
fig.savefig(out_path, dpi=300, facecolor="w", bbox_inches="tight")
# Preview
plt.show()
```
```
import numpy as np
a = np.arange(15).reshape(3,5)
print(a)
a.shape
a.size
a.dtype.itemsize
a.dtype
```
a.itemsize
###### ndarray.itemsize
the size in bytes of each element of the array. For example, an array of elements of type float64 has itemsize 8 (=64/8), while one of type complex32 has itemsize 4 (=32/8). It is equivalent to ndarray.dtype.itemsize.
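For instance:
```
import numpy as np
# itemsize is the byte width of a single element of the dtype
print(np.ones(3, dtype=np.float64).itemsize) # 8
print(np.ones(3, dtype=np.int32).itemsize) # 4
print(np.ones(3, dtype=np.complex128).itemsize) # 16, the same value as a.dtype.itemsize
```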
```
a.ndim
a.data
```
###### ndarray.data
the buffer containing the actual elements of the array. Normally, we won’t need to use this attribute because we will access the elements in an array using indexing facilities.
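For example (rarely needed in practice):
```
import numpy as np
# a.data is a memoryview over the array's buffer; indexing is the normal access path
a = np.arange(3, dtype=np.int64)
print(bytes(a.data)) # raw bytes backing the array
print(a[1]) # 1 -- the usual way to read an element
```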
```
type(a)
```
#### array creation
```
import numpy as np
s = np.array([2,3,4,3,22,34,56])
print(s)
type(s)
st = np.array((1,2,3,5,66,75,44))
st
type(st)
st.dtype
ss = np.arange(20, dtype=np.float32)
ss
ss.dtype #by default the numpy float is float64
ss.reshape(2,2,5)
ss.dtype
d = np.array([[3.4,44.5],[55.66,7.7]], dtype = complex)
d
d.imag
d.real
type(d)
d.dtype # by default the numpy complex is complex 128
d.shape
d.itemsize
d.data
d
d.T
d.shape
d.T.shape
t = np.array(((2,3,4,5),(44,56,77,88)), dtype = complex)
t
tt = np.array(((2,3,4,5),(44,56,77,88)), dtype = float)
tt
tt.dtype
import numpy as np
np.zeros((3,4), dtype = int)
np.eye(5,5,dtype=int)
np.ones((3,3),dtype=float)
np.empty((3,3), dtype = int)
np.arange(20)
f= np.arange(30,40,.2, dtype=float).reshape((10,5))
f.size
f
np.linspace(2,10,25, dtype= float).reshape((5,5))
import numpy as np
import matplotlib.pyplot as plt
a = np.linspace(0,20,200)
b = np.sin(a)
bb = np.exp(a)
plt.title("sine and exponential plot")
plt.plot(b,bb)
np.random.rand(3,3)
np.random.random((3,4))
np.random.randn(5,3)
np.random.randint(44,54)
np.random.randint((44,54))
np.random.randint(44)
f = np.random.normal()
f
np.random.normal(22)
np.random.normal((22,30))
np.random.normal(22,30)
type(f)
np.arange(2999)
import sys
np.set_printoptions(threshold=sys.maxsize)
```
#### Basic operations
```
import numpy as np
a = np.arange(4)
b= np.array([33,44,55,66])
c= b-a
c
b**3
10*np.sin(b)
a<33
a = np.array( [[1,1],
[0,1]] )
b = np.array( [[2,0],
[3,4]] )
a*b
a**b
a.dot(b)
a@b
a.dtype.name
ddd = np.random.rand(3,3)
ddd
ddd.dtype
ddd.dtype.name
ddd.sum()
ddd.min()
ddd.max()
ddd.mean()
ddd.std()
ddd.var()
cs = ddd.cumsum()
cs
plt.plot(cs,ddd.ravel(),c="r")
plt.title('Cumsum and original flatten data plot')
plt.xlabel("Cumulative sum")
plt.ylabel("Flattened array")
ml = np.array([[[2,22,33,43,3],[44,54,5,6,77]],
[[4,33,22,11,123],[6,77,56,4,37]]
])
ml
ml.ndim
ml.shape
type(ml)
ml.dtype
ml.sum(axis=0)
ml.sum(axis=2)
ml.sum(axis=1)
ml.min(axis=2)
ml.min(axis=1)
ml.max(axis=2)
ml.max(axis=1)
ml.cumsum(axis=2)
ml.cumsum(axis=1)
ml.mean(axis=2)
ml.mean(axis=1)
a= np.arange(3)
a
np.exp(a)
np.sqrt(a)
np.add(a,np.exp(a))
np.subtract(a,np.sqrt(a))
np.multiply(a,np.sum(a))
np.divide(a,np.exp(a))
w = np.arange(10)*2
w
w[:5]
w[::2]
w[:7:2]=-100
w
w
w[::-1]
for i in w:
print(i*(2/3), end ="\n")
def f(x,y):
return 10*x+y
b = np.fromfunction(f,(5,5),dtype=int) # np.int is deprecated; use the builtin int
b
b[2,4]
b[:3]
b[3:4]
b[:5,2]
b[:,2]
b[-1]
b[3]
b
for i in b.flat:
print(i)
```
`column_stack` is equivalent to `hstack` only for 2-D arrays.
On the other hand, the function `row_stack` is equivalent to `vstack` for any input arrays. In fact, `row_stack` is an alias for `vstack`:
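A small illustration of the 1-D case, where the two differ:
```
import numpy as np
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
print(np.column_stack((a, b)).shape) # (3, 2) -- 1-D inputs become columns
print(np.hstack((a, b)).shape) # (6,) -- 1-D inputs are simply concatenated
print(np.vstack((a, b)).shape) # (2, 3) -- same result as np.row_stack
```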
```
np.column_stack is np.hstack
np.row_stack is np.vstack
import numpy as np
import matplotlib.pyplot as plt
# Build a vector of 2000 normal deviates with variance 0.5^2 and mean 2
mu, sigma = 2, 0.5
v = np.random.normal(mu,sigma,2000)
#print(v)
# Plot a histogram with 50 bins (set density=1 to normalize it)
plt.hist(v, bins=50, density=0) # matplotlib version (plot)
plt.show()
np.r_[1:4,0,4]
id(a)
b = np.random.random((2,3))
a *= 3
print(b)
a
b += a
b
a
b
a += b # raises an error: b is not automatically converted (downcast) to integer type
d=[]
for i in b:
for j in i:
d.append(j)
d
dd=[]
for i in d:
dd.append(np.floor(i))
dd
a+=dd
a
p = np.exp(a*1j)
p
p.dtype.name
def f(x,y):
return 10*x+y
b = np.fromfunction(f,(5,4),dtype=int)
b
b[:,3]
b[-1]
for i in range(10):
for j in range(i+1):
print("*", end='')
print()
for k in range(10,0,-1):
for jj in range(k):
print("&",end="")
print()
a= np.arange(10)
d=a.copy()
d.flags.owndata
d.base is a
d is a
d.shape =2,5
d.shape
d
a.shape
d[:1]=-100
d
a
a = np.arange(1e8)
d = a[:100].copy()
print(d)
del a
```
If b = a[:100] is used instead, b is a view that references a, so the full array persists in memory even after del a is executed.
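A quick sketch of that view case:
```
import numpy as np
a = np.arange(int(1e6))
b = a[:100] # a view, not a copy
print(b.base is a) # True -- b keeps the whole buffer alive
del a # only the name is removed; the memory is freed once b is gone too
```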
```
a = np.arange(12)**2 # the first 12 square numbers
i = np.array( [ 1,1,3,8,5 ] ) # an array of indices
a[i] # the elements of a at the positions i
j = np.array( [ [ 3, 4], [ 9, 7 ] ] ) # a bidimensional array of indices
a[j]
palette = np.array( [ [0,0,0], # black
[255,0,0], # red
[0,255,0], # green
[0,0,255], # blue
[255,255,255] ] ) # white
image = np.array( [ [ 0, 1, 2, 0 ], # each value corresponds to a color in the palette
[ 0, 3, 4, 0 ] ] )
palette[image] # the (2,4,3) color image
a = np.arange(12).reshape(3,4)
print(a)
i = np.array( [ [0,1],[1,2] ] ) # indices for the first dim of a
j = np.array( [ [2,1],[3,3] ] ) # indices for the second dim
a[i,j] # i and j must have equal shape
a[i,2]
a[:,j] # i.e., a[ : , j] ### Very important
```
<h1> 2c. Refactoring to add batching and feature-creation </h1>
In this notebook, we continue reading the same small dataset, but refactor our ML pipeline in two small, but significant, ways:
<ol>
<li> Refactor the input to read data in batches.
<li> Refactor the feature creation so that it is not one-to-one with inputs.
</ol>
The Pandas input function in the previous notebook also batched, but only after it had read the whole dataset into memory -- on a large dataset, this won't be an option.
```
import datalab.bigquery as bq
import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)
```
<h2> 1. Refactor the input </h2>
Read data created in Lab1a, but this time make it more general and performant. Instead of using Pandas, we will use TensorFlow's Dataset API.
```
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size = 512):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
return features, label
# Create list of files that match pattern
file_list = tf.gfile.Glob(filename)
# Create dataset from file list
dataset = tf.data.TextLineDataset(file_list).map(decode_csv)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset.make_one_shot_iterator().get_next()
return _input_fn
def get_train():
return read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN)
def get_valid():
return read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL)
def get_test():
return read_dataset('./taxi-test.csv', mode = tf.estimator.ModeKeys.EVAL)
```
<h2> 2. Refactor the way features are created. </h2>
For now, pass these through (same as previous lab). However, refactoring this way will enable us to break the one-to-one relationship between inputs and features.
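As an illustration of what this refactoring enables (not something used in this lab), a hypothetical `add_more_features_example` could later return a derived feature that is no longer one-to-one with the inputs; the bucket boundaries below are made-up values:
```
import numpy as np
import tensorflow as tf
# Hypothetical sketch only -- the boundaries are arbitrary example values
def add_more_features_example(feats):
    b_plat = tf.feature_column.bucketized_column(
        tf.feature_column.numeric_column('pickuplat'),
        boundaries = np.arange(38.0, 42.0, 0.5).tolist())
    return list(feats) + [b_plat]
```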
```
INPUT_COLUMNS = [
tf.feature_column.numeric_column('pickuplon'),
tf.feature_column.numeric_column('pickuplat'),
tf.feature_column.numeric_column('dropofflat'),
tf.feature_column.numeric_column('dropofflon'),
tf.feature_column.numeric_column('passengers'),
]
def add_more_features(feats):
# Nothing to add (yet!)
return feats
feature_cols = add_more_features(INPUT_COLUMNS)
```
<h2> Create and train the model </h2>
Note that we train for num_steps * batch_size examples; with the values used below (steps = 100 and the default batch_size = 512 in `read_dataset`), that is 100 * 512 = 51,200 training examples.
```
tf.logging.set_verbosity(tf.logging.INFO)
OUTDIR = 'taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.LinearRegressor(
feature_columns = feature_cols, model_dir = OUTDIR)
model.train(input_fn = get_train(), steps = 100); # TODO: change the name of input_fn as needed
```
<h3> Evaluate model </h3>
As before, evaluate on the validation data. We'll do the third refactoring (to move the evaluation into the training loop) in the next lab.
```
def print_rmse(model, name, input_fn):
metrics = model.evaluate(input_fn = input_fn, steps = 1)
print('RMSE on {} dataset = {}'.format(name, np.sqrt(metrics['average_loss'])))
print_rmse(model, 'validation', get_valid())
```
Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
# Line Plot Demo
```
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
%matplotlib inline
```
## Create some fake data
*I'm using numpy to create a bunch of random curves. Let's imagine that these are spectra or seismological data. Each curve has a similar format, but they were taken at different times. We want to show all the data together so that the curves can be compared.*
```
nval = 500
x = np.linspace(0, nval-1, nval)
def gauss(x,s,m,a):
return a*np.exp(-0.5*((x - m)/s)**2.)
nline = 10
y = np.empty((nline, nval))
for i in range(nline):
#add some random noise
y[i,:] = np.random.random(size=nval) - 0.5
for i in range(nline):
#add a few random Gaussians
for foo in range(10):
mean = np.random.random()*nval
stdev = np.random.random() + 2
amp = -10*np.random.random()
y[i,:] += gauss(x,stdev,mean,amp)
#define the times
times = np.linspace(1, nline-1, nline) + np.random.random(size=nline)-0.5
print(times)
```
## Let's plot these in one plot
```
#define the subplots and figure size
f, ax = plt.subplots()
for i in range(nline):
ax.plot(x,y[i,:])
```
##### How can we make this better?
* Right now it is a jumble of lines. We want to separate them out somehow.
* We also want to label the lines, but it would be cumbersome to have so many items in a legend.
* We want to choose our own colors.
* We want to be sure to label the axes.
*One option could be to spread the lines out with a fixed offset, and perhaps to color-code them using a sequential colormap. This might work, for instance, if the measurements were taken at reasonably consistent intervals of time (e.g., a spectrum taken every few seconds).*
```
f, ax = plt.subplots(figsize=(10, 10))
#define some offset
offset = 10.
#choose some colormap
cmap = matplotlib.cm.get_cmap('viridis_r')
for i in range(nline):
ax.plot(x,y[i,:] + offset*i, c=cmap(times[i]/nline))
#add the colorbar on right side of the plot
cax = f.add_axes([0.92, 0.125, 0.03, 0.755])
sm = plt.cm.ScalarMappable(cmap=cmap, norm=plt.Normalize(vmin=0, vmax=nline))
f.colorbar(sm, cax=cax, orientation='vertical')
cax.set_ylabel('time', fontsize=14)
ax.set_xlabel('frequency', fontsize=20)
ax.set_ylabel('intensity + offset', fontsize=20)
f.savefig('line1.pdf',format='pdf', bbox_inches = 'tight')
```
## Now let's imagine that one of these lines is different from the others
*How can we highlight and identify it visually?*
```
#let's make one line a little different
jdiff = 3
for foo in range(10):
mean = np.random.random()*nval
stdev = np.random.random()*4 + 6
amp = -5*np.random.random()
y[jdiff,:] += gauss(x,stdev,mean,amp)
```
*For this plot, let's note the times as text and identify the line of interest with color. Using text to label the lines, instead of a colormap, would also help if the time intervals were wildly different (e.g., if some measurements were taken within seconds of each other while others were taken years apart).*
```
f, ax = plt.subplots(figsize=(10, 10))
#define some offset
offset = 10.
for i in range(nline):
c = 'gray'
w = 1
if (i == jdiff):
c = 'firebrick'
w = 2
ax.plot(x,y[i,:] + offset*i, c=c, linewidth=w)
ax.text(nval+10, offset*i-1, "{:5.3f}".format(times[i]), fontsize=14,c=c)
ax.set_xlabel('frequency', fontsize=20)
ax.set_ylabel('intensity + offset', fontsize=20)
ax.text(nval+2, offset*nline-5, r'time $(s)$', fontsize=14)
ax.text(nval+2, offset*nline-7, '---------------')
ax.set_xlim(-10,570)
ax.set_ylim(-10,100)
f.savefig('line2.pdf',format='pdf', bbox_inches = 'tight')
```
<img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/>
<img width="20%" alt="Naas" src="https://logos-world.net/wp-content/uploads/2020/04/Linkedin-Logo.png">
# LinkedIn - Get profile
<a href="https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/LinkedIn/LinkedIn_get_profile.ipynb" target="_parent">
<img src="https://img.shields.io/badge/-Open%20in%20Naas-success?labelColor=000000&logo=data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz4KPHN2ZyB3aWR0aD0iMTAyNHB4IiBoZWlnaHQ9IjEwMjRweCIgdmlld0JveD0iMCAwIDEwMjQgMTAyNCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB4bWxuczp4bGluaz0iaHR0cDovL3d3dy53My5vcmcvMTk5OS94bGluayIgdmVyc2lvbj0iMS4xIj4KIDwhLS0gR2VuZXJhdGVkIGJ5IFBpeGVsbWF0b3IgUHJvIDIuMC41IC0tPgogPGRlZnM+CiAgPHRleHQgaWQ9InN0cmluZyIgdHJhbnNmb3JtPSJtYXRyaXgoMS4wIDAuMCAwLjAgMS4wIDIyOC4wIDU0LjUpIiBmb250LWZhbWlseT0iQ29tZm9ydGFhLVJlZ3VsYXIsIENvbWZvcnRhYSIgZm9udC1zaXplPSI4MDAiIHRleHQtZGVjb3JhdGlvbj0ibm9uZSIgZmlsbD0iI2ZmZmZmZiIgeD0iMS4xOTk5OTk5OTk5OTk5ODg2IiB5PSI3MDUuMCI+bjwvdGV4dD4KIDwvZGVmcz4KIDx1c2UgaWQ9Im4iIHhsaW5rOmhyZWY9IiNzdHJpbmciLz4KPC9zdmc+Cg=="/>
</a>
## Get your cookies
To take actions on your behalf on a social network, Naas needs to connect as you. To do this, it needs your session cookie(s), which will give Naas access to your social network account.
- [Open your profile](https://www.linkedin.com/in/)
- Right-click anywhere on the page and open "Inspect"
- Locate the "Application" tab, then select "Cookies" and find the cookies named "li_at" and "JSESSIONID":
<img width="80%" alt="Naas" src="https://public.naas.ai/ZmxvcmVudC0yRXJhdmVuZWwtNDBjYXNoc3RvcnktMkVjb20=/asset/4ccb0452e5717de02f550f4b87026dd0ee6ddcb4e14b898e59d31fce64b3">
## Input
```
from naas_drivers import linkedin
LI_AT = 'YOUR_COOKIE_LI_AT' # EXAMPLE AQFAzQN_PLPR4wAAAXc-FCKmgiMit5FLdY1af3-2
JSESSIONID = 'YOUR_COOKIE_JSESSIONID' # EXAMPLE ajax:8379907400220387585
```
## Model & Output
Pass the profile ID or the complete profile URL; the result is returned as a dataframe.
<img width="80%" alt="Naas" src="https://public.naas.ai/ZmxvcmVudC0yRXJhdmVuZWwtNDBjYXNoc3RvcnktMkVjb20=/asset/c05b3bf416d771e70ca2f1f0b96855fe1a9e9cae53899e5412d0232075ff">
## Get the profile
Get the information returned as a dataframe.<br><br>
**Available columns :**
- FIRSTNAME : First name
- LASTNAME : Last name
- BIRTHDATE_DAY : Day of birth in format DD
- BIRTHDATE_MONTH : Month of birth in format MM
- BIRTHDATE_YEAR : Year of birth in format YYYY
- BIRTHDATE : Birthdate in format DD, MM - YYYY
- COUNTRY : Country name
- ADDRESS : Address
- LK_HEADLINE : Job description (headline)
- LK_SECTOR : Work industry
- LK_FOLLOWERS : Number of followers
- LK_PHONE : Phone number
- LK_EMAIL : Email
- LK_TWITER : Twitter account
```
# Enter the linkedin id or linkedin url
url = "LINKEDIN_ID or LINKEDIN_URL"
# Get dataframe as result
df = linkedin.connect(LI_AT, JSESSIONID).get_profil(url)
df
```
```
%%bash
mkdir -p /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks
mkdir -p /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/counts
mkdir -p /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/logs
mkdir -p /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/plots
```
First, quantify reads in targeted regions
```
%%bash
module load bedtools2
# Create union peakset for FLAG-p300 samples:
cat /data/reddylab/Alex/collab/20190701_Matt/processing/chip_seq/Matt_5756_190620B1-se-with-control/mmLiver_p300.flag.{targeted,scram,PBS}.rep*.masked.dedup.sorted_peaks.narrowPeak \
| /bin/grep "^chr" \
| sort -k1,1 -k2,2n \
| bedtools merge -nonamecheck -i stdin \
| sort -k1,1 -k2,2n \
| bedtools closest \
-nonamecheck \
-a stdin \
-b <(sort -k1,1 -k2,2n /data/reddylab/Reference_Data/Gencode/vM19/gencode.vM19.basic.annotation.no_gm.bed) \
> /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_p300.flag.union_peakset.bed
%%bash
source /data/reddylab/software/miniconda2/bin/activate alex
python /data/reddylab/Alex/reddylab_utils/scripts/bed_to_saf.py \
-beds /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_p300.flag.union_peakset.bed \
-safs /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_p300.flag.union_peakset.saf
%%bash
/data/reddylab/software/subread-1.4.6-p4-Linux-x86_64/bin/featureCounts \
-F SAF \
-a /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_p300.flag.union_peakset.saf \
-o /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/counts/mmLiver_p300.flag.union_peakset.featureCounts.txt \
/data/reddylab/Alex/collab/20190701_Matt/processing/chip_seq/Matt_5756_190620B1-se-with-control/mmLiver_p300.flag.{targeted,scram,PBS}.rep*masked.dedup.sorted.bam \
> /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/logs/mmLiver_p300.flag.union_peakset.featureCounts.out \
2>&1
```
Same, but discarding peaks that are also found in input controls (artifacts!)
```
%%bash
module load bedtools2
# Create union peakset for FLAG-p300 samples:
cat /data/reddylab/Alex/collab/20190701_Matt/processing/chip_seq/Matt_5756_190620B1-se-with-control/mmLiver_p300.flag.{targeted,scram,PBS}.rep*.masked.dedup.sorted_peaks.narrowPeak \
| /bin/grep "^chr" \
| sort -k1,1 -k2,2n \
| bedtools merge -nonamecheck -i stdin \
| sort -k1,1 -k2,2n \
| bedtools intersect -wa -v -a stdin -b /data/reddylab/Alex/collab/20190701_Matt/processing/chip_seq/Matt_5756_190620B1-se-with-control/mmLiver_p300.[Ii]nput.{targeted,scram,PBS}.rep*.masked.dedup.sorted_peaks.narrowPeak \
| bedtools closest \
-nonamecheck \
-a stdin \
-b <(sort -k1,1 -k2,2n /data/reddylab/Reference_Data/Gencode/vM19/gencode.vM19.basic.annotation.no_gm.bed) \
> /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_p300.flag.union_peakset_no_input.bed
%%bash
source /data/reddylab/software/miniconda2/bin/activate alex
python /data/reddylab/Alex/reddylab_utils/scripts/bed_to_saf.py \
-beds /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_p300.flag.union_peakset_no_input.bed \
-safs /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_p300.flag.union_peakset_no_input.saf
%%bash
/data/reddylab/software/subread-1.4.6-p4-Linux-x86_64/bin/featureCounts \
-F SAF \
-a /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_p300.flag.union_peakset_no_input.saf \
-o /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/counts/mmLiver_p300.flag.union_peakset_no_input.featureCounts.txt \
/data/reddylab/Alex/collab/20190701_Matt/processing/chip_seq/Matt_5756_190620B1-se-with-control/mmLiver_p300.flag.{targeted,scram,PBS}.rep*masked.dedup.sorted.bam \
> /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/logs/mmLiver_p300.flag.union_peakset_no_input.featureCounts.out \
2>&1
```
Quantify K27ac (and K9me3) in fixed windows (2 kb, 1 kb, 500 bp) centered on the targeted loci
```
mid_point = (106463479+106465480)/2
print mid_point - 500, mid_point + 500
%%writefile /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver.flag.2kb.saf
chr4_106463479_106465480_Pcsk9 chr4 106463479 106465480 +
chr5_147268985_147270986_Pdx1 chr5 147268985 147270986 +
chr14_76877399_76877806_scrampeak chr14 76876602 76878602 +
```
Add the single peak from the scram non-targeting guide that appears to be an off-target, to answer the question: does it have K9me3/p300 signal?
```
midpoint_scrampeak = int((76877399+76877806)/2.)
win_size=250
print 'chr14', midpoint_scrampeak-win_size, midpoint_scrampeak+win_size
%%writefile /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver.flag.1kb.saf
chr4_106463479_106465480_Pcsk9 chr4 106463979 106464980 +
chr5_147268985_147270986_Pdx1 chr5 147269485 147270486 +
chr14_76877399_76877806_scrampeak chr14 76877102 76878102 +
%%writefile /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver.flag.500bp.saf
chr4_106463479_106465480_Pcsk9 chr4 106464229 106464730 +
chr5_147268985_147270986_Pdx1 chr5 147269735 147270236 +
chr14_76877399_76877806_scrampeak chr14 76877352 76877852 +
%%bash
WINDOWS=(2kb 1kb 500bp)
sbatch -pnew,all \
--array=0-2 \
-o /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/logs/mmLiver.flag.2kb_no_input.featureCounts.%a.out \
--cpus-per-task 4 \
--mem 8G \
<<'EOF'
#!/bin/bash
WINDOWS=(2kb 1kb 500bp)
WINDOW=${WINDOWS[${SLURM_ARRAY_TASK_ID}]}
/data/reddylab/software/subread-1.4.6-p4-Linux-x86_64/bin/featureCounts \
-T 4 \
-F SAF \
-a /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver.flag.${WINDOW}.saf \
-o /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/counts/mmLiver.flag.${WINDOW}_no_input.featureCounts.txt \
/data/reddylab/Alex/collab/20190701_Matt/processing/chip_seq/Matt_5756_190620B1-se-with-control/mmLiver_{p300.K27ac,KRAB.K9me3}.{targeted,scram,PBS}.rep*masked.dedup.sorted.bam
EOF
%matplotlib inline
from scipy.stats import ttest_ind, f_oneway
import pandas as pd
import seaborn as sns
sns.set_style('whitegrid')
sns.set_context('paper')
import numpy as np
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
sns.set_context("paper")
sns.set_style("whitegrid")
sns.set_style("ticks", {"xtick.major.size": 8, "ytick.major.size": 8})
plt.rcParams['pdf.fonttype'] = 42
plt.rcParams['lines.markersize'] = 5
def simpleaxis(ax):
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
def get_stats(a, b, method = 'anova'):
if method == 'anova':
return f_oneway(a, b)
elif method == 'ttest_ind':
return ttest_ind(a, b)
else:
return "%s not implemented" % method
for window in ['1kb']:#'2kb', '1kb', '500bp'
df = pd.read_csv('/data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/counts/mmLiver.flag.%s_no_input.featureCounts.txt' % window, sep="\t", comment="#")
lib_sizes = []
for bam in df.columns.values[6:-1]:
tt = np.loadtxt(bam.replace('masked.dedup.sorted.bam', 'bowtie.log.read_count.mapped'))
lib_sizes.append(tt[1])
df.loc[:, df.columns.values[6:-1]] = df.loc[:, df.columns.values[6:-1]]/lib_sizes*1e6
df.index = df.iloc[:, 0]
# p300.K27ac.
# KRAB.K9me3.
df.columns = df.columns\
.str.replace('/data/reddylab/Alex/collab/20190701_Matt/processing/chip_seq/Matt_5756_190620B1-se-with-control/mmLiver_','')\
.str.replace('/data/reddylab/Alex/collab/20190701_Matt/processing/chip_seq/Matt_5756_190620B1-se-with-control/mmLiver_','')\
.str.replace('.masked.dedup.sorted.bam','')
df = df.loc[:, df.columns.values[6:-1]]
df.columns = pd.MultiIndex.from_arrays([
['.'.join(c.split('.')[:2]) for c in df.columns],
[c.split('.')[2] for c in df.columns],
df.columns
])
factors = ['p300.K27ac', 'KRAB.K9me3', 'KRAB.K9me3'][::-1]
peaks = ['chr5_147268985_147270986_Pdx1', 'chr4_106463479_106465480_Pcsk9', 'chr14_76877399_76877806_scrampeak'][::-1]
print "---===", window, "===---"
for f_ix, factor in enumerate(factors[:1]):
figg = plt.figure(figsize=[5,3])
df_tmp = df.T.loc[df.T.index.get_level_values(0)==factor,: ]
# df_tmp = df_tmp.loc[df_tmp.index.get_level_values(2) != 'p300.K27ac.targeted.rep9', :]
ax = sns.barplot(data=df_tmp,
x=df_tmp.index.get_level_values(1),
y=peaks[f_ix],
n_boot=1000)
ax.set_ylabel('Normalized counts')
ax.set_yticks(np.arange(0, 3, .5))
simpleaxis(ax)
data_dir = '/data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/plots'
ax.set_title('%s Normalized Counts\n(%s window around FLAG summit)' % (factor, window))
figg.tight_layout()
figg.savefig("%s/mmLiver_%s.%s.cpms.pdf" % (data_dir, factor, window))
df_tmp.to_csv("%s/mmLiver_%s.%s.cpms.txt" % (data_dir, factor, window), sep='\t')
targeted_values = df_tmp.loc[df_tmp.index.get_level_values(1)=='targeted', peaks[f_ix]].values
scram_values = df_tmp.loc[df_tmp.index.get_level_values(1)=='scram', peaks[f_ix]].values
pbs_values = df_tmp.loc[df_tmp.index.get_level_values(1)=='PBS', peaks[f_ix]].values
plt.ylim([0, 1.5])
print "=== %s stats ===" % factor
print "--- ANOVA ---"
print "targeted vs scram\t", get_stats(targeted_values, scram_values, method = 'anova')
print "targeted vs pbs\t",get_stats(targeted_values, pbs_values, method = 'anova')
print "scram vs pbs\t",get_stats(scram_values, pbs_values, method = 'anova')
print "--- t-test ---"
print "targeted vs scram\t", get_stats(targeted_values, scram_values, method = 'ttest_ind')
print "targeted vs pbs\t",get_stats(targeted_values, pbs_values, method = 'ttest_ind')
print "scram vs pbs\t",get_stats(scram_values, pbs_values, method = 'ttest_ind')
print "---===", window, "===---"
df.head()
from scipy.stats import ttest_ind, f_oneway
import pandas as pd
import seaborn as sns
sns.set_style('whitegrid')
sns.set_context('paper')
import numpy as np
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
sns.set_context("paper")
sns.set_style("whitegrid")
sns.set_style("ticks", {"xtick.major.size": 8, "ytick.major.size": 8})
plt.rcParams['pdf.fonttype'] = 42
plt.rcParams['lines.markersize'] = 5
def simpleaxis(ax):
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
def get_stats(a, b, method = 'anova'):
if method == 'anova':
return f_oneway(a, b)
elif method == 'ttest_ind':
return ttest_ind(a, b)
else:
return "%s not implemented" % method
for window in ['1kb']:#['2kb', '1kb', '500bp']:
df = pd.read_csv('/data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/counts/mmLiver.flag.%s_no_input.featureCounts.txt' % window, sep="\t", comment="#")
lib_sizes = []
for bam in df.columns.values[6:-1]:
tt = np.loadtxt(bam.replace('masked.dedup.sorted.bam', 'bowtie.log.read_count.mapped'))
lib_sizes.append(tt[1])
df.loc[:, df.columns.values[6:-1]] = df.loc[:, df.columns.values[6:-1]]/lib_sizes*1e6
df.index = df.iloc[:, 0]
# p300.K27ac.
# KRAB.K9me3.
df.columns = df.columns\
.str.replace('/data/reddylab/Alex/collab/20190701_Matt/processing/chip_seq/Matt_5756_190620B1-se-with-control/mmLiver_','')\
.str.replace('/data/reddylab/Alex/collab/20190701_Matt/processing/chip_seq/Matt_5756_190620B1-se-with-control/mmLiver_','')\
.str.replace('.masked.dedup.sorted.bam','')
df = df.loc[:, df.columns.values[6:-1]]
df.columns = pd.MultiIndex.from_arrays([
['.'.join(c.split('.')[:2]) for c in df.columns],
[c.split('.')[2] for c in df.columns],
df.columns
])
factors = ['p300.K27ac', 'KRAB.K9me3', 'KRAB.K9me3'][::-1]
peaks = ['chr5_147268985_147270986_Pdx1', 'chr4_106463479_106465480_Pcsk9', 'chr14_76877399_76877806_scrampeak'][::-1]
print "---===", window, "===---"
for f_ix, factor in enumerate(factors[:2]):
figg = plt.figure(figsize=[5,3])
df_tmp = df.T.loc[df.T.index.get_level_values(0)==factor,: ]
# df_tmp = df_tmp.loc[df_tmp.index.get_level_values(2) != 'p300.K27ac.targeted.rep9', :]
ax = sns.swarmplot(data=df_tmp,
x=df_tmp.index.get_level_values(1),
y=peaks[f_ix])
ax.set_ylabel('Normalized counts')
# ax.set_yticks(np.arange(0, 3, .5))
simpleaxis(ax)
data_dir = '/data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/plots'
ax.set_title('%s Normalized Counts\n(%s window around FLAG summit)' % (factor, window))
figg.tight_layout()
figg.savefig("%s/mmLiver_%s.%s.cpms.points.pdf" % (data_dir, factor, window))
targeted_values = df_tmp.loc[df_tmp.index.get_level_values(1)=='targeted', peaks[f_ix]].values
scram_values = df_tmp.loc[df_tmp.index.get_level_values(1)=='scram', peaks[f_ix]].values
pbs_values = df_tmp.loc[df_tmp.index.get_level_values(1)=='PBS', peaks[f_ix]].values
print "=== %s stats ===" % factor
print "--- ANOVA ---"
print "targeted vs scram\t", get_stats(targeted_values, scram_values, method = 'anova')
print "targeted vs pbs\t",get_stats(targeted_values, pbs_values, method = 'anova')
print "scram vs pbs\t",get_stats(scram_values, pbs_values, method = 'anova')
print "--- t-test ---"
print "targeted vs scram\t", get_stats(targeted_values, scram_values, method = 'ttest_ind')
print "targeted vs pbs\t",get_stats(targeted_values, pbs_values, method = 'ttest_ind')
print "scram vs pbs\t",get_stats(scram_values, pbs_values, method = 'ttest_ind')
print "---===", window, "===---"
ax = sns.barplot(data=df.T.loc[df.T.index.get_level_values(0)=='p300.K27ac',: ],
x=df.T.loc[df.T.index.get_level_values(0)=='p300.K27ac',: ].index.get_level_values(1),
y='chr5_147268985_147270986_Pdx1')
ax.set_ylabel('p300.K27ac')
simpleaxis(ax)
data_dir = '/data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/plots'
plt.tight_layout()
plt.title('FLAG CPMs')
plt.savefig("%s/mmLiver_p300.K27ac.2kb.cpms.pdf" % (data_dir))
import pandas as pd
import seaborn as sns
sns.set_style('whitegrid')
sns.set_context('paper')
import numpy as np
from matplotlib import pyplot as plt
df = pd.read_csv('/data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/counts/mmLiver_p300.flag.union_peakset_no_input.featureCounts.txt', sep="\t", comment="#")
df_anno = pd.read_csv('/data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_p300.flag.union_peakset.bed', sep='\t',
header=None)
df_anno = df_anno.drop(columns=range(3,9) + [10], axis=1)
df_anno.columns = ['Chr', 'Start', 'End', 'GeneSymbol']
df = df.merge(df_anno)
lib_sizes = []
for bam in df.columns.values[6:-1]:
tt = np.loadtxt(bam.replace('masked.dedup.sorted.bam', 'bowtie.log.read_count.mapped'))
lib_sizes.append(tt[1])
df.loc[:, df.columns.values[6:-1]] = df.loc[:, df.columns.values[6:-1]]/lib_sizes*1e6
df.index = df.Geneid + "_" + df.GeneSymbol
df = df[~df.index.str.contains('chrM')]
# Remove Mitochondrial peaks
df = df[~df.index.str.contains('chrM')]
df.columns = df.columns.str\
.replace('/data/reddylab/Alex/collab/20190701_Matt/processing/chip_seq/Matt_5756_190620B1-se-with-control/mmLiver_p300.flag.','')\
.str.replace('.masked.dedup.sorted.bam','')
df = df.loc[:, df.columns.values[6:-1]]
df.columns = pd.MultiIndex.from_arrays( [[c.split('.')[0] for c in df.columns], df.columns])
df.loc[df.var(axis=1).sort_values(ascending=False).index, :]
gene_of_interest = 'Pdx1'
sns.barplot(data=df.loc[df.index.str.contains(gene_of_interest),: ].T,
x=df.loc[df.index.str.contains(gene_of_interest),: ].T.index.get_level_values(0),
y='chr5_147269830_147270140_Pdx1')
foo
import pandas as pd
import seaborn as sns
sns.set_style('whitegrid')
sns.set_context('paper')
import numpy as np
from matplotlib import pyplot as plt
df_krab = pd.read_csv('/data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/counts/mmLiver_KRAB.flag.union_peakset_no_input.featureCounts.txt', sep="\t", comment="#")
df_krab_anno = pd.read_csv('/data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_KRAB.flag.union_peakset.bed', sep='\t',
header=None)
df_krab_anno = df_krab_anno.drop(columns=range(3,9) + [10], axis=1)
df_krab_anno.columns = ['Chr', 'Start', 'End', 'GeneSymbol']
df_krab = df_krab.merge(df_krab_anno)
lib_sizes = []
for bam in df_krab.columns.values[6:-1]:
tt = np.loadtxt(bam.replace('masked.dedup.sorted.bam', 'bowtie.log.read_count.mapped'))
lib_sizes.append(tt[1])
df_krab.loc[:, df_krab.columns.values[6:-1]] = df_krab.loc[:, df_krab.columns.values[6:-1]]/lib_sizes*1e6
df_krab.index = df_krab.Geneid + "_" + df_krab.GeneSymbol
df_krab = df_krab[~df_krab.index.str.contains('chrM')]
# Remove Mitochondrial peaks
df_krab = df_krab[~df_krab.index.str.contains('chrM')]
df_krab.columns = df_krab.columns.str\
.replace('/data/reddylab/Alex/collab/20190701_Matt/processing/chip_seq/Matt_5756_190620B1-se-with-control/mmLiver_KRAB.flag.','')\
.str.replace('.masked.dedup.sorted.bam','')
df_krab = df_krab.loc[:, df_krab.columns.values[6:-1]]
# Drop failed library
df_krab.drop('scram.rep8', axis=1, inplace=True)
df_krab.columns = pd.MultiIndex.from_arrays( [[c.split('.')[0] for c in df_krab.columns], df_krab.columns])
foo = df.loc[df.index.str.contains('Pdx1'),: ].T
foo_krab = df_krab.loc[df_krab.index.str.contains('Pcsk9'),: ].T
foo_merged = pd.concat([foo, foo_krab])
df.loc[df.index.str.contains('Pdx1'),: ].T
foo_merged
foo_merged.loc[:, 'locus'] = 1
foo_merged.loc[:, 'cpm'] = 1
~foo_merged.chr4_106464226_106464732_Pcsk9.isna()
foo_merged.loc[~foo_merged.chr4_106464226_106464732_Pcsk9.isna(), 'locus'] = 'Pcsk9'
foo_merged.loc[~foo_merged.chr5_147269830_147270140_Pdx1.isna(), 'locus'] = 'Pdx1'
foo_merged.chr4_106464226_106464732_Pcsk9.values
foo_merged.loc[~foo_merged.chr4_106464226_106464732_Pcsk9.isna(), 'cpm'] = foo_merged.loc[~foo_merged.chr4_106464226_106464732_Pcsk9.isna(), 'chr4_106464226_106464732_Pcsk9']
foo_merged.loc[~foo_merged.chr5_147269830_147270140_Pdx1.isna(), 'cpm'] = foo_merged.loc[~foo_merged.chr5_147269830_147270140_Pdx1.isna(), 'chr5_147269830_147270140_Pdx1']
import numpy as np
import pandas as pd
import seaborn as sns
sns.set_context("paper")
sns.set_style("whitegrid")
sns.set_style("ticks", {"xtick.major.size": 8, "ytick.major.size": 8})
def simpleaxis(ax):
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
df.loc[df.index.str.contains('Pdx1'),: ].T
(df.loc[:, [c for c in df.columns if 'targeted' in c ]].values).flatten()
(df.loc[:, [c for c in df.columns if 'scram' in c ]].values).flatten()
df.shape
get_stats(
(df.loc[:, [c for c in df.columns if 'targeted' in c ]].values).flatten(),
(df.loc[:, [c for c in df.columns if 'scram' in c ]].values).flatten(),
method = 'ttest_ind')
plt.rcParams['svg.fonttype'] = 'none'
plt.rcParams['pdf.fonttype'] = 42
f, ax = plt.subplots(figsize=[3, 3])
sns.swarmplot(data=df.loc[df.index.str.contains('Pdx1'),: ].T,
x=df.loc[df.index.str.contains('Pdx1'),: ].T.index.get_level_values(0), y='chr5_147269830_147270140_Pdx1')
# sns.boxplot(data=df.loc[df.index.str.contains('Pdx1'),: ].T,
# x=df.loc[df.index.str.contains('Pdx1'),: ].T.index.get_level_values(0), y='chr5_147269830_147270140_Pdx1')
ax.set_ylabel('CPMs')
simpleaxis(ax)
data_dir = '/data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/plots'
plt.title('p300 FLAG CPMs');
plt.tight_layout()
plt.savefig("%s/mmLiver_p300.flag.union_peakset.cpms.points.pdf" % (data_dir))
df.loc[df.index.str.contains('Pdx1'),: ].T.to_csv("%s/mmLiver_p300.flag.union_peakset.cpms.txt" % (data_dir), sep='\t')
plt.rcParams['svg.fonttype'] = 'none'
plt.rcParams['pdf.fonttype'] = 42
f, ax = plt.subplots(figsize=[3, 3])
sns.swarmplot(data=df_krab.loc[df_krab.index.str.contains('Pcsk9'),: ].T,
x=df_krab.loc[df_krab.index.str.contains('Pcsk9'),: ].T.index.get_level_values(0), y='chr4_106464226_106464732_Pcsk9')
# sns.boxplot(data=df_krab.loc[df_krab.index.str.contains('Pcsk9'),: ].T,
# x=df_krab.loc[df_krab.index.str.contains('Pcsk9'),: ].T.index.get_level_values(0), y='chr5_147269830_147270140_Pcsk9')
ax.set_ylabel('CPMs')
simpleaxis(ax)
data_dir = '/data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/plots'
plt.title('KRAB FLAG CPMs');
plt.tight_layout()
plt.savefig("%s/mmLiver_KRAB.flag.union_peakset.cpms.points.pdf" % (data_dir))
df_krab.loc[df_krab.index.str.contains('Pcsk9'),: ].T.to_csv("%s/mmLiver_KRAB.flag.union_peakset.cpms.txt" % (data_dir), sep='\t')
ax = sns.barplot(data=foo_merged,
x=foo_merged.index.get_level_values(0), y='cpm', hue='locus', n_boot=50)
simpleaxis(ax)
data_dir = '/data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/plots'
plt.tight_layout()
plt.title('FLAG CPMs')
plt.savefig("%s/mmLiver_KRAB.flag.union_peakset.cpms.pdf" % (data_dir))
ax = sns.swarmplot(data=foo_merged,
x=foo_merged.index.get_level_values(0), y='cpm', hue='locus')
simpleaxis(ax)
data_dir = '/data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/plots'
plt.tight_layout()
plt.title('FLAG CPMs')
plt.savefig("%s/mmLiver_KRAB.flag.union_peakset.cpms.points.pdf" % (data_dir))
# Save plot for special case
gene_of_interest = 'Pdx1'
figg = plt.figure(figsize=[6,4])
fig = df.loc[df.index.str.contains(gene_of_interest),: ].T.groupby(level=0, axis=0)\
.boxplot(
subplots=False,
)
fig.axes.set_xticklabels(['PBS', 'SCRAM', 'Pdx1']);
fig.axes.set_title('Pdx1');
fig.set_ylabel('CPM')
figg.tight_layout()
figg.savefig('/data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/plots/mmLiver_p300.flag.union_peakset.cpms.%s.pdf' % (gene_of_interest))
# Special case: Pdx1 peak overlaps 2 genes in the annotation, therefore appears twice. Remove "Plut"
df = df[df.index != 'chr5_147269830_147270140_Plut']
from matplotlib import pyplot as plt
plt.rcParams['svg.fonttype'] = 'none'
plt.rcParams['pdf.fonttype'] = 42
ncols = 4
nrows = int(np.ceil(df.shape[0] / float(ncols)))  # float division so the grid has enough rows under Python 2
figg, axes = plt.subplots(nrows, ncols, sharey=True, figsize=[16, 24])
for ix, ii in enumerate(df.var(axis=1).sort_values(ascending=False).index):
fig = df.loc[df.index==ii,: ].T.groupby(level=0, axis=0)\
.boxplot(
subplots=False,
ax = axes.flatten()[ix]
)
if gene_of_interest in ii:
fig.axes.set_facecolor('r')
if ix>=((nrows-1)*ncols):
fig.axes.set_xticklabels(['PBS', 'SCRAM', 'Pdx1']);
else:
fig.axes.set_xticklabels([]);
fig.axes.set_title(ii);
figg.tight_layout()
figg.savefig('/data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/plots/mmLiver_p300.flag.union_peakset_no_input.cpms.gridplot.pdf')
df.to_csv('/data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/counts/mmLiver_p300.flag.union_peakset.cpms.txt',sep='\t')
```
Matt is interested in quantifying the enrichment of p300 signal in the p300-FLAG samples compared with the PBS and scram guides, also looking at the enrichment of K27ac signal.
Conversely, for the samples treated with KRAB, he would like to quantify the gain of KRAB signal in the FLAG samples versus scram and PBS, also looking at the enrichment of K9me3 signal.
- [ ] Use the peaksets of K27ac and K9me3 to quantify signal in those peaks.
```
%%bash
module load bedtools2
# Create union peakset for FLAG-p300 samples:
cat /data/reddylab/Alex/collab/20190701_Matt/processing/chip_seq/Matt_5756_190620B1-se-with-control/mmLiver_p300.K27ac.{targeted,scram,PBS}.rep*.masked.dedup.sorted_peaks.narrowPeak \
| /bin/grep "^chr" \
| sort -k1,1 -k2,2n \
| bedtools merge -nonamecheck -i stdin \
| sort -k1,1 -k2,2n \
| bedtools intersect -wa -v -a stdin -b /data/reddylab/Alex/collab/20190701_Matt/processing/chip_seq/Matt_5756_190620B1-se-with-control/mmLiver_p300.[Ii]nput.{targeted,scram,PBS}.rep*.masked.dedup.sorted_peaks.narrowPeak \
| bedtools closest \
-nonamecheck \
-a stdin \
-b <(sort -k1,1 -k2,2n /data/reddylab/Reference_Data/Gencode/vM19/gencode.vM19.basic.annotation.no_gm.bed) \
> /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_p300.K27ac.union_peakset_no_input.bed
%%bash
module load bedtools2
cat \
<(awk -vOFS="\t" '{$2=($2+$3)/2;$3=$2+1; print $0}' /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_p300.flag.union_peakset_no_input.bed | bedtools slop -i stdin -b 1000 -g /data/reddylab/Reference_Data/Genomes/mm10/GRCm38.header.sizes) \
/data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_p300.K27ac.union_peakset_no_input.bed \
| sort -k1,1 -k2,2n \
| bedtools merge -i stdin \
| bedtools closest \
-nonamecheck \
-a stdin \
-b <(sort -k1,1 -k2,2n /data/reddylab/Reference_Data/Gencode/vM19/gencode.vM19.basic.annotation.no_gm.bed) \
> /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_p300.K27ac.union_peakset_no_input_plus_flag.bed
!wc -l /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_p300.K27ac.union_peakset_no_input_plus_flag.bed
%%bash
source /data/reddylab/software/miniconda2/bin/activate alex
python /data/reddylab/Alex/reddylab_utils/scripts/bed_to_saf.py \
-beds /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_p300.K27ac.union_peakset_no_input_plus_flag.bed \
-safs /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_p300.K27ac.union_peakset_no_input_plus_flag.saf
%%bash
/data/reddylab/software/subread-1.4.6-p4-Linux-x86_64/bin/featureCounts \
-F SAF \
-a /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_p300.K27ac.union_peakset_no_input_plus_flag.saf \
-o /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/counts/mmLiver_p300.K27ac.union_peakset_no_input_plus_flag.featureCounts.txt \
/data/reddylab/Alex/collab/20190701_Matt/processing/chip_seq/Matt_5756_190620B1-se-with-control/mmLiver_p300.K27ac.{targeted,scram,PBS}.rep*masked.dedup.sorted.bam \
> /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/logs/mmLiver_p300.K27ac.union_peakset_no_input_plus_flag.featureCounts.out \
2>&1
```
%%bash
mkdir -p /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks
mkdir -p /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/counts
mkdir -p /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/logs
mkdir -p /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/plots
%%bash
module load bedtools2
# Create union peakset for FLAG-p300 samples:
cat /data/reddylab/Alex/collab/20190701_Matt/processing/chip_seq/Matt_5756_190620B1-se-with-control/mmLiver_p300.flag.{targeted,scram,PBS}.rep*.masked.dedup.sorted_peaks.narrowPeak \
| /bin/grep "^chr" \
| sort -k1,1 -k2,2n \
| bedtools merge -nonamecheck -i stdin \
| sort -k1,1 -k2,2n \
| bedtools closest \
-nonamecheck \
-a stdin \
-b <(sort -k1,1 -k2,2n /data/reddylab/Reference_Data/Gencode/vM19/gencode.vM19.basic.annotation.no_gm.bed) \
> /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_p300.flag.union_peakset.bed
%%bash
source /data/reddylab/software/miniconda2/bin/activate alex
python /data/reddylab/Alex/reddylab_utils/scripts/bed_to_saf.py \
-beds /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_p300.flag.union_peakset.bed \
-safs /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_p300.flag.union_peakset.saf
%%bash
/data/reddylab/software/subread-1.4.6-p4-Linux-x86_64/bin/featureCounts \
-F SAF \
-a /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_p300.flag.union_peakset.saf \
-o /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/counts/mmLiver_p300.flag.union_peakset.featureCounts.txt \
/data/reddylab/Alex/collab/20190701_Matt/processing/chip_seq/Matt_5756_190620B1-se-with-control/mmLiver_p300.flag.{targeted,scram,PBS}.rep*masked.dedup.sorted.bam \
> /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/logs/mmLiver_p300.flag.union_peakset.featureCounts.out \
2>&1
%%bash
module load bedtools2
# Create union peakset for FLAG-p300 samples:
cat /data/reddylab/Alex/collab/20190701_Matt/processing/chip_seq/Matt_5756_190620B1-se-with-control/mmLiver_p300.flag.{targeted,scram,PBS}.rep*.masked.dedup.sorted_peaks.narrowPeak \
| /bin/grep "^chr" \
| sort -k1,1 -k2,2n \
| bedtools merge -nonamecheck -i stdin \
| sort -k1,1 -k2,2n \
| bedtools intersect -wa -v -a stdin -b /data/reddylab/Alex/collab/20190701_Matt/processing/chip_seq/Matt_5756_190620B1-se-with-control/mmLiver_p300.[Ii]nput.{targeted,scram,PBS}.rep*.masked.dedup.sorted_peaks.narrowPeak \
| bedtools closest \
-nonamecheck \
-a stdin \
-b <(sort -k1,1 -k2,2n /data/reddylab/Reference_Data/Gencode/vM19/gencode.vM19.basic.annotation.no_gm.bed) \
> /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_p300.flag.union_peakset_no_input.bed
%%bash
source /data/reddylab/software/miniconda2/bin/activate alex
python /data/reddylab/Alex/reddylab_utils/scripts/bed_to_saf.py \
-beds /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_p300.flag.union_peakset_no_input.bed \
-safs /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_p300.flag.union_peakset_no_input.saf
%%bash
/data/reddylab/software/subread-1.4.6-p4-Linux-x86_64/bin/featureCounts \
-F SAF \
-a /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver_p300.flag.union_peakset_no_input.saf \
-o /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/counts/mmLiver_p300.flag.union_peakset_no_input.featureCounts.txt \
/data/reddylab/Alex/collab/20190701_Matt/processing/chip_seq/Matt_5756_190620B1-se-with-control/mmLiver_p300.flag.{targeted,scram,PBS}.rep*masked.dedup.sorted.bam \
> /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/logs/mmLiver_p300.flag.union_peakset_no_input.featureCounts.out \
2>&1
mid_point = (106463479+106465480)/2
print mid_point - 500, mid_point + 500
%%writefile /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver.flag.2kb.saf
chr4_106463479_106465480_Pcsk9 chr4 106463479 106465480 +
chr5_147268985_147270986_Pdx1 chr5 147268985 147270986 +
chr14_76877399_76877806_scrampeak chr14 76876602 76878602 +
midpoint_scrampeak = int((76877399+76877806)/2.)
win_size=250
print 'chr14', midpoint_scrampeak-win_size, midpoint_scrampeak+win_size
%%writefile /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver.flag.1kb.saf
chr4_106463479_106465480_Pcsk9 chr4 106463979 106464980 +
chr5_147268985_147270986_Pdx1 chr5 147269485 147270486 +
chr14_76877399_76877806_scrampeak chr14 76877102 76878102 +
%%writefile /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver.flag.500bp.saf
chr4_106463479_106465480_Pcsk9 chr4 106464229 106464730 +
chr5_147268985_147270986_Pdx1 chr5 147269735 147270236 +
chr14_76877399_76877806_scrampeak chr14 76877352 76877852 +
%%bash
WINDOWS=(2kb 1kb 500bp)
sbatch -pnew,all \
--array=0-2 \
-o /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/logs/mmLiver.flag.2kb_no_input.featureCounts.%a.out \
--cpus-per-task 4 \
--mem 8G \
<<'EOF'
#!/bin/bash
WINDOWS=(2kb 1kb 500bp)
WINDOW=${WINDOWS[${SLURM_ARRAY_TASK_ID}]}
/data/reddylab/software/subread-1.4.6-p4-Linux-x86_64/bin/featureCounts \
-T 4 \
-F SAF \
-a /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/peaks/mmLiver.flag.${WINDOW}.saf \
-o /data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/counts/mmLiver.flag.${WINDOW}_no_input.featureCounts.txt \
/data/reddylab/Alex/collab/20190701_Matt/processing/chip_seq/Matt_5756_190620B1-se-with-control/mmLiver_{p300.K27ac,KRAB.K9me3}.{targeted,scram,PBS}.rep*masked.dedup.sorted.bam
EOF
%matplotlib inline
from scipy.stats import ttest_ind, f_oneway
import pandas as pd
import seaborn as sns
sns.set_style('whitegrid')
sns.set_context('paper')
import numpy as np
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
sns.set_context("paper")
sns.set_style("whitegrid")
sns.set_style("ticks", {"xtick.major.size": 8, "ytick.major.size": 8})
plt.rcParams['pdf.fonttype'] = 42
plt.rcParams['lines.markersize'] = 5
def simpleaxis(ax):
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
def get_stats(a, b, method = 'anova'):
if method == 'anova':
return f_oneway(a, b)
elif method == 'ttest_ind':
return ttest_ind(a, b)
else:
return "%s not implemented" % method
for window in ['1kb']:#'2kb', '1kb', '500bp'
df = pd.read_csv('/data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/counts/mmLiver.flag.%s_no_input.featureCounts.txt' % window, sep="\t", comment="#")
lib_sizes = []
for bam in df.columns.values[6:-1]:
tt = np.loadtxt(bam.replace('masked.dedup.sorted.bam', 'bowtie.log.read_count.mapped'))
lib_sizes.append(tt[1])
df.loc[:, df.columns.values[6:-1]] = df.loc[:, df.columns.values[6:-1]]/lib_sizes*1e6
df.index = df.iloc[:, 0]
# p300.K27ac.
# KRAB.K9me3.
df.columns = df.columns\
.str.replace('/data/reddylab/Alex/collab/20190701_Matt/processing/chip_seq/Matt_5756_190620B1-se-with-control/mmLiver_','')\
.str.replace('/data/reddylab/Alex/collab/20190701_Matt/processing/chip_seq/Matt_5756_190620B1-se-with-control/mmLiver_','')\
.str.replace('.masked.dedup.sorted.bam','')
df = df.loc[:, df.columns.values[6:-1]]
df.columns = pd.MultiIndex.from_arrays([
['.'.join(c.split('.')[:2]) for c in df.columns],
[c.split('.')[2] for c in df.columns],
df.columns
])
factors = ['p300.K27ac', 'KRAB.K9me3', 'KRAB.K9me3'][::-1]
peaks = ['chr5_147268985_147270986_Pdx1', 'chr4_106463479_106465480_Pcsk9', 'chr14_76877399_76877806_scrampeak'][::-1]
print "---===", window, "===---"
for f_ix, factor in enumerate(factors[:1]):
figg = plt.figure(figsize=[5,3])
df_tmp = df.T.loc[df.T.index.get_level_values(0)==factor,: ]
# df_tmp = df_tmp.loc[df_tmp.index.get_level_values(2) != 'p300.K27ac.targeted.rep9', :]
ax = sns.barplot(data=df_tmp,
x=df_tmp.index.get_level_values(1),
y=peaks[f_ix],
n_boot=1000)
ax.set_ylabel('Normalized counts')
ax.set_yticks(np.arange(0, 3, .5))
simpleaxis(ax)
data_dir = '/data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/plots'
ax.set_title('%s Normalized Counts\n(%s window around FLAG summit)' % (factor, window))
figg.tight_layout()
figg.savefig("%s/mmLiver_%s.%s.cpms.pdf" % (data_dir, factor, window))
df_tmp.to_csv("%s/mmLiver_%s.%s.cpms.txt" % (data_dir, factor, window), sep='\t')
targeted_values = df_tmp.loc[df_tmp.index.get_level_values(1)=='targeted', peaks[f_ix]].values
scram_values = df_tmp.loc[df_tmp.index.get_level_values(1)=='scram', peaks[f_ix]].values
pbs_values = df_tmp.loc[df_tmp.index.get_level_values(1)=='PBS', peaks[f_ix]].values
plt.ylim([0, 1.5])
print "=== %s stats ===" % factor
print "--- ANOVA ---"
print "targeted vs scram\t", get_stats(targeted_values, scram_values, method = 'anova')
print "targeted vs pbs\t",get_stats(targeted_values, pbs_values, method = 'anova')
print "scram vs pbs\t",get_stats(scram_values, pbs_values, method = 'anova')
print "--- t-test ---"
print "targeted vs scram\t", get_stats(targeted_values, scram_values, method = 'ttest_ind')
print "targeted vs pbs\t",get_stats(targeted_values, pbs_values, method = 'ttest_ind')
print "scram vs pbs\t",get_stats(scram_values, pbs_values, method = 'ttest_ind')
print "---===", window, "===---"
df.head()
from scipy.stats import ttest_ind, f_oneway
import pandas as pd
import seaborn as sns
sns.set_style('whitegrid')
sns.set_context('paper')
import numpy as np
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
sns.set_context("paper")
sns.set_style("whitegrid")
sns.set_style("ticks", {"xtick.major.size": 8, "ytick.major.size": 8})
plt.rcParams['pdf.fonttype'] = 42
plt.rcParams['lines.markersize'] = 5
def simpleaxis(ax):
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
def get_stats(a, b, method = 'anova'):
if method == 'anova':
return f_oneway(a, b)
elif method == 'ttest_ind':
return ttest_ind(a, b)
else:
return "%s not implemented" % method
for window in ['1kb']:#['2kb', '1kb', '500bp']:
df = pd.read_csv('/data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/counts/mmLiver.flag.%s_no_input.featureCounts.txt' % window, sep="\t", comment="#")
lib_sizes = []
for bam in df.columns.values[6:-1]:
tt = np.loadtxt(bam.replace('masked.dedup.sorted.bam', 'bowtie.log.read_count.mapped'))
lib_sizes.append(tt[1])
df.loc[:, df.columns.values[6:-1]] = df.loc[:, df.columns.values[6:-1]]/lib_sizes*1e6
df.index = df.iloc[:, 0]
# p300.K27ac.
# KRAB.K9me3.
df.columns = df.columns\
.str.replace('/data/reddylab/Alex/collab/20190701_Matt/processing/chip_seq/Matt_5756_190620B1-se-with-control/mmLiver_','')\
.str.replace('/data/reddylab/Alex/collab/20190701_Matt/processing/chip_seq/Matt_5756_190620B1-se-with-control/mmLiver_','')\
.str.replace('.masked.dedup.sorted.bam','')
df = df.loc[:, df.columns.values[6:-1]]
df.columns = pd.MultiIndex.from_arrays([
['.'.join(c.split('.')[:2]) for c in df.columns],
[c.split('.')[2] for c in df.columns],
df.columns
])
factors = ['p300.K27ac', 'KRAB.K9me3', 'KRAB.K9me3'][::-1]
peaks = ['chr5_147268985_147270986_Pdx1', 'chr4_106463479_106465480_Pcsk9', 'chr14_76877399_76877806_scrampeak'][::-1]
print "---===", window, "===---"
for f_ix, factor in enumerate(factors[:2]):
figg = plt.figure(figsize=[5,3])
df_tmp = df.T.loc[df.T.index.get_level_values(0)==factor,: ]
# df_tmp = df_tmp.loc[df_tmp.index.get_level_values(2) != 'p300.K27ac.targeted.rep9', :]
ax = sns.swarmplot(data=df_tmp,
x=df_tmp.index.get_level_values(1),
y=peaks[f_ix])
ax.set_ylabel('Normalized counts')
# ax.set_yticks(np.arange(0, 3, .5))
simpleaxis(ax)
data_dir = '/data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/plots'
ax.set_title('%s Normalized Counts\n(%s window around FLAG summit)' % (factor, window))
figg.tight_layout()
figg.savefig("%s/mmLiver_%s.%s.cpms.points.pdf" % (data_dir, factor, window))
targeted_values = df_tmp.loc[df_tmp.index.get_level_values(1)=='targeted', peaks[f_ix]].values
scram_values = df_tmp.loc[df_tmp.index.get_level_values(1)=='scram', peaks[f_ix]].values
pbs_values = df_tmp.loc[df_tmp.index.get_level_values(1)=='PBS', peaks[f_ix]].values
print "=== %s stats ===" % factor
print "--- ANOVA ---"
print "targeted vs scram\t", get_stats(targeted_values, scram_values, method = 'anova')
print "targeted vs pbs\t",get_stats(targeted_values, pbs_values, method = 'anova')
print "scram vs pbs\t",get_stats(scram_values, pbs_values, method = 'anova')
print "--- t-test ---"
print "targeted vs scram\t", get_stats(targeted_values, scram_values, method = 'ttest_ind')
print "targeted vs pbs\t",get_stats(targeted_values, pbs_values, method = 'ttest_ind')
print "scram vs pbs\t",get_stats(scram_values, pbs_values, method = 'ttest_ind')
print "---===", window, "===---"
ax = sns.barplot(data=df.T.loc[df.T.index.get_level_values(0)=='p300.K27ac',: ],
x=df.T.loc[df.T.index.get_level_values(0)=='p300.K27ac',: ].index.get_level_values(1),
y='chr5_147268985_147270986_Pdx1')
ax.set_ylabel('p300.K27ac')
simpleaxis(ax)
data_dir = '/data/reddylab/Alex/collab/20190701_Matt/results/chip_seq/plots'
plt.tight_layout()
plt.title('FLAG CPMs')
plt.savefig("%s/mmLiver_p300.K27ac.2kb.cpms.pdf" % (data_dir))
```
import sys
!{sys.executable} -m pip install --upgrade google-cloud-bigquery
import sys
!{sys.executable} -m pip install cython pandas-gbq
from google.cloud import bigquery
client = bigquery.Client()
query_job = client.query("""
WITH
table1 AS (
SELECT
project_short_name,
case_barcode,
IF (gender = 'FEMALE',
1,
0) AS F,
IF (gender = 'MALE',
1,
0) AS M
FROM
`isb-cgc.TCGA_bioclin_v0.Clinical`
GROUP BY
project_short_name,
case_barcode,
gender)
--
--
SELECT
project_short_name,
SUM(M) AS M_count,
SUM(F) AS F_count
FROM
table1
GROUP BY
project_short_name
""")
results = query_job.result()
for row in results:
print("{} : {} : {}".format(row.project_short_name, row.F_count, row.M_count))
import pandas
projectid = "isb-cgc-02-0001"
query = """
WITH
table1 AS (
SELECT
project_short_name,
case_barcode,
IF (gender = 'FEMALE',
1,
0) AS F,
IF (gender = 'MALE',
1,
0) AS M
FROM
`isb-cgc.TCGA_bioclin_v0.Clinical`
GROUP BY
project_short_name,
case_barcode,
gender)
--
--
SELECT
project_short_name,
SUM(M) AS M_count,
SUM(F) AS F_count
FROM
table1
GROUP BY
project_short_name
"""
data_frame = pandas.read_gbq(query, project_id=projectid, dialect='standard')
data_frame.shape
data_frame
import matplotlib.pyplot as plt
plt.figure();
df2 = pandas.DataFrame(data_frame, columns=['M_count','F_count'])
df2.plot.bar();
import sys
!{sys.executable} -m pip install pyspark findspark
import findspark
findspark.init()
from datetime import datetime
from pyspark.context import SparkContext
from pyspark.ml.linalg import Vectors
from pyspark.ml.classification import LogisticRegression
from pyspark.sql.session import SparkSession
def vector_from_inputs(r):
return (float(r["label"]), Vectors.dense(float(r["EFGR"]),
float(r["TP53"]),
float(r["NOTCH1"]),
float(r["GATA3"])))
# Use Cloud Dataprocs automatically propagated configurations to get
# the Cloud Storage bucket and Google Cloud Platform project for this
# cluster.
sc = SparkContext()
spark = SparkSession(sc)
bucket = spark._jsc.hadoopConfiguration().get("fs.gs.system.bucket")
project = spark._jsc.hadoopConfiguration().get("fs.gs.project.id")
print(bucket)
print(project)
# Set an input directory for reading data from Bigquery.
todays_date = datetime.strftime(datetime.today(), "%Y-%m-%d-%H-%M-%S")
input_directory = "gs://qotm_oct_2018" + todays_date
# Set the configuration for importing data from BigQuery.
# Specifically, make sure to set the project ID and bucket for Cloud Dataproc,
# and the project ID, dataset, and table names for BigQuery.
conf = {
# Input Parameters
"mapred.bq.project.id": project,
"mapred.bq.gcs.bucket": bucket,
"mapred.bq.temp.gcs.path": input_directory,
"mapred.bq.input.project.id": project,
"mapred.bq.input.dataset.id": "spark_job",
"mapred.bq.input.table.id": "tcga_spark"
}
print(conf)
# Read the data from BigQuery into Spark as an RDD.
table_data = spark.sparkContext.newAPIHadoopRDD(
"com.google.cloud.hadoop.io.bigquery.JsonTextBigQueryInputFormat",
"org.apache.hadoop.io.LongWritable",
"com.google.gson.JsonObject",
conf=conf)
# Extract the JSON strings from the RDD.
table_json = table_data.map(lambda x: x[1])
# Load the JSON strings as a Spark Dataframe.
tcga_data = spark.read.json(table_json)
# Create a view so that Spark SQL queries can be run against the data.
tcga_data.createOrReplaceTempView("tcga_view")
# As a precaution, run a query in Spark SQL to ensure no NULL values exist.
sql_query = """
SELECT *
from tcga_view
where label is not null
and EFGR is not null
and TP53 is not null
and GATA3 is not null
and NOTCH1 is not null
"""
clean_data = spark.sql(sql_query)
# Create an input DataFrame for Spark ML using the above function.
training_data = clean_data.rdd.map(vector_from_inputs).toDF(["label",
"features"])
training_data.cache()
# Construct a new LogisticRegression object and fit the training data.
# https://spark.apache.org/docs/latest/ml-classification-regression.html#binomial-logistic-regression
lr = LogisticRegression(maxIter=5, regParam=0.3, elasticNetParam=0.8)
lrModel = lr.fit(training_data)
# Print the model summary.
print("Coefficients:" + str(model.coefficients))
print("Intercept:" + str(model.intercept))
# getting the model performance metrics
trainingSummary = lrModel.summary
# Obtain the receiver-operating characteristic as a dataframe and areaUnderROC.
trainingSummary.roc.show()
print("areaUnderROC: " + str(trainingSummary.areaUnderROC))
# Set the model threshold to maximize F-Measure
fMeasure = trainingSummary.fMeasureByThreshold
maxFMeasure = fMeasure.groupBy().max('F-Measure').select('max(F-Measure)').head()
bestThreshold = fMeasure.where(fMeasure['F-Measure'] == maxFMeasure['max(F-Measure)']) \
.select('threshold').head()['threshold']
lr.setThreshold(bestThreshold)
import pandas
import matplotlib.pyplot as plt
plt.figure();
trainingSummary.roc.toPandas().plot.scatter('FPR','TPR')
sc.stop()
```
#### Exercise 6
The assembly program:
```
MOV R0,#1
ADD R1,R0,#8
ADD R2,R1,#100
SUB R3,R2,#25
HALT
```
Screenshot showing just the value of register `R3`.
> 
What happens if, while the processor is stopped on HALT, you click **Play** *before* clicking **Stop**?
> You get a "*Bad instruction at line unknown (PC=0x00014)*" error. The processor tries to read the instruction stored at address 0x00014, but the corresponding word is null.
____
#### Exercise 7
Same program as in Exercise 6.
Do you think the highlighted code corresponds to the instruction that will be executed at the next step, or to the one that has just been executed?
> The highlighted instruction is *the one that has just been executed*.
___
### Exercise 8
> Take, for example, the following program; it uses a single register in which the results are accumulated step by step.
>
> MOV R0, #100
> LSL R0, R0, #1
> LSR R0, R0, #2
> ORR R0, R0, #25 //25: 11001
> AND R0, R0, #20 //20: 10100
> EOR R0, R0, #10 //10: 1010
> HALT
> | Instruction | Decimal value of the destination register<br/> after executing this instruction | Binary value of the destination register<br/> after executing this instruction |
> |:--:|:--:|---:|
> | `MOV R0, #100` | 100 | 1100100 |
> | `LSL R0, R0, #1` | 200 | 11001000 |
> | `LSR R0, R0, #2` | 50 | 110010 |
> | `ORR R0, R0, #25` | 59 | 111011 |
> | `AND R0, R0, #20` | 16 | 10000 |
> | `EOR R0, R0, #10` | 26 | 11010 |
> | `HALT` | ... | ... |
Describe the effect on a decimal number of the "logical shift left" operation `LSL` by one position, by two positions... Do the same for the right shift `LSR`.
> `LSL` by one position multiplies an integer by 2; by two positions, by 4; by three positions, by 8; and so on.
>> **General rule**: A shift to the **left** by $p$ positions **multiplies** an integer by $2^p$.
>
> `LSR` by one position divides the integer by 2 (integer division); by two positions, by 4; by three positions, by 8; and so on.
>> **General rule**: A shift to the **right** by $p$ positions **divides** (integer division) an integer by $2^p$.
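These two rules are easy to check outside the simulator. Here is a minimal Python sketch (the `<<` and `>>` operators perform the same logical shifts on non-negative integers):
```
# Left shift by p multiplies by 2**p; right shift by p is integer division by 2**p
x = 100
for p in range(1, 4):
    assert x << p == x * 2**p    # behaves like LSL by p positions
    assert x >> p == x // 2**p   # behaves like LSR by p positions
print(100 << 1, 100 >> 2)        # 200 25
```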
____
#### Exercise 9
The starting numbers are 12, 11, 7, 5, 3, and 2, and your target is 79.
> One solution: (12 | 7) | (2 << 5)
>
> 12: 1100 1100 (12) 10 (2)
> 11: 1011 | (ORR) 0111 (7) << (LSL) 5
> 7: 111 ------------- --------------
> 5: 101 1111 (15) 1000000 (64)
> 3: 11
> 2: 10 1111 (15)
> ----------- | (ORR) 1000000 (64)
> 79: 1001111 ---------------
> 1001111 (79)
>
>
> *Note*: a | b means a OR b; a << d means shifting the bit pattern of a by d positions to the left
>
> 
Paste a screenshot showing your program and the result in a register.
___
#### Exercise 10
The starting numbers are 99, 77, 33, 31, 14, and 12, and your target is 32.
> One solution: (99 + 77) & 33, where & denotes the bitwise AND operation.
>
> 99: 1100011 1100011 (99)
> 77: 1001101 + (ADD) 1001101 (77)
> 33: 100001 ---------------
> 31: 11111 10110000 (176)
> 14: 1110 & (AND) 00100001 (33)
> 12: 1100 ---------------
> ----------- 00100000 (32)
> 32: 100000
>
Paste a screenshot showing your program and the result in a register.
> 
___
#### Exercise 11
The starting numbers are 30, 13, 7, 5, 2, and 1, and your target is 390.
> One solution: (13 << 5) - (30 - 7 + 2 + 1)
>
> 30: 11110 1101 (13)
> 13: 1101 << (LSL) 5
> 7: 111 -----------------
> 5: 101 110100000 : 416
> 2: 10 (30-7+2+1=26) - 26
> 1: 1 -----
> -------------- 390
> 390: 110000110
>
Paste a screenshot showing your program and the result in a register.
> 
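As a quick sanity check, the three solutions above can be evaluated directly in Python (a sketch; `|`, `&`, and `<<` are the bitwise OR, AND, and left-shift operators):
```
print((12 | 7) | (2 << 5))           # 79  (Exercise 9)
print((99 + 77) & 33)                # 32  (Exercise 10)
print((13 << 5) - (30 - 7 + 2 + 1))  # 390 (Exercise 11)
```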
___
#### Exercise 12
Screenshot of the result shown in `R1` for:
MOV R0,#9999
LSL R1,R0,#18
HALT
> 
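For reference, here is a hedged Python sketch of what to expect, assuming the simulator uses 32-bit registers and two's complement: shifting 9999 left by 18 positions sets the sign bit, so `R1` ends up holding a negative value.
```
x = 9999 << 18                   # 2621177856 as an unbounded Python integer
x32 = x & 0xFFFFFFFF             # keep only the low 32 bits, as a 32-bit register would
signed = x32 - 2**32 if x32 >= 2**31 else x32  # reinterpret as a signed two's complement value
print(x32, signed)               # 2621177856 -1673789440
```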
___
#### Exercise 13
What is the binary representation of each of these signed decimal integers:
> 1: 0....01 inv 1....10
> -1: 1....11
> 2: 0...010 inv 1...101
> -2: 1...110
> 3: 0...011 inv 1...100
> -3: 1...101
> 4: 0..0100 inv 1..1011
> -4: 1..1100
Thus, in the two's complement method, the negative of a number is obtained from the positive by inverting (MVN, for *Move NOT*) each of its bits and then adding 1 to the result; let's check this on one case:
    MOV R0, #27
    MVN R1, R0 // invert each bit
    ADD R2, R1, #1
    HALT
> 
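The same check can be sketched in Python (assuming 32-bit registers; the mask keeps only the low 32 bits):
```
MASK = 0xFFFFFFFF
r0 = 27
r1 = ~r0 & MASK         # MVN: invert every bit (bitwise NOT, truncated to 32 bits)
r2 = (r1 + 1) & MASK    # ADD 1: r2 now holds the two's complement encoding of -27
print(hex(r1), hex(r2), r2 - 2**32)  # 0xffffffe4 0xffffffe5 -27
```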
___
#### Exercise 14
What happens if we add -49 to 27?
> We get -22:
>
> 
___
## Getting Data from API's with Python
**GW Libraries and Academic Innovation**
Monday, February 1, 2021
### Workshop goals
This workshop will cover basic use cases for retrieving data from RESTful API's with Python.
By the conclusion of this workshop, you will have worked through the following:
* Understanding the REST framework for data retrieval
* Constructing a query with parameters in Python using the `requests` library
* Writing a `for` loop to retrieve multiple sets results
* Parsing a JSON response
* Exporting data in CSV format
### Tips for using this Google Colab notebook
When working in a Google Colaboratory notebook, `Shift-Return` (`Shift-Enter`) runs the cell you're on. You can also run the cell using the `Play` button at the left edge of the cell.
There are many other keyboard shortcuts. You can access the list via the menu bar, at `Tools`-->`Command palette`. In fact, you can even customize your keyboard shortcuts using `Tools`-->`Keyboard shortcuts`.
(If you're working in an Anaconda/Jupyter notebook:
- `Control-Enter` (`Command-Return`) runs the cell you're on. You can also run the cell using the `Run` button in the toolbar. `Esc`, then `A` inserts a cell above where you are.
- `Esc`, then `B` inserts a cell below where you are.
- More shortcuts under `Help` --> `Keyboard Shortcuts`)
You will probably get some errors in working through this notebook. That's okay, you can just go back and change the cell and re-run it.
The notebook auto-saves as you work, just like gmail and most Google apps.
### Introduction
#### What is an API?
An **A**pplication **P**rogramming **I**nterface is a generic term for functionality that allows one computer application to talk to another. In contrast to a graphical user interface (GUI), which allows an end user to interact with an application via visual symbols (*e.g.* icons) and manual operations (*e.g.* mouse clicks), an API allows a user to interact with the application by writing code.
You can think of API's as the glue that holds together the various modules and libraries of code that make up a given system, whether we're talking about a single piece of software or the entire World Wide Web.
-------------------------
#### What is REST?
**R**epresentational **S**tate **T**ransfer refers to a common set of principles implemented by services that communicate via the web. Most RESTful API's use **HTTP** to provide access. Via HTTP and its core methods, your code can communicate with a web service the way your browser does when you visit a web site. We'll see how to write code to do just that in this workshop.
### Setup
We're going to use a couple of libraries for making API calls and processing the data these calls return. They are not part of the standard Python distribution, but they're pre-installed for Google Colaboratory notebooks. If you're running a Jupyter notebook locally on your computer via the Anaconda distribution of Python, they are pre-installed there as well. If not, you can install them yourself by running these commands inline in your notebook:
`!pip install pandas`
`!pip install requests`
You can also install them at the command line by using the above commands *without* the prefixed exclamation point.
### Using API's to find and retrieve COVID-19 data
First we need to import the libraries we're using to work with this data.
As a refresher:
- `import` loads an external Python library for use in your code.
- `as` with `import` allows us to provide a nickname for the library, so that we don't have type the full name each time.
```
import requests
import pandas as pd
```
#### A straightforward request with JSON
The first data set we'll use is provided by _The Atlantic_'s [Covid Tracking Project](https://covidtracking.com/data/api).
Let's take a moment to look at the documentation together.
This API is fairly straightforward. We can retrieve the results in either JSON or CSV. We'll be using JSON, primarily to familiarize ourselves with this format, which is quite common for RESTful API's.
**J**ava**S**cript **O**bject **N**otation is a data format designed to map readily onto Javascript data types. As it happens, it also maps readily onto Python data types.
We'll use the API **endpoint** for "Historic US Values" in JSON format. API documentation will often refer to multiple endpoints, each of which provides access to a different set or view of data. This endpoint provides time series data for COVID-19 cases in the US.
```
covid_us_url = 'https://api.covidtracking.com/v1/us/daily.json'
```
To fetch the data from the endpoint, we use the `requests` library, calling the `get` method and passing as an argument the endpoint URL.
`GET` is one of several HTTP "verbs," which correspond to different actions a web server can be asked to perform. `GET` means, _Give me the data stored at this particular URL path_.
```
resp = requests.get(covid_us_url)
```
`requests.get` returns a `Response` object. This Python object has many useful properties. It's important to remember that with HTTP services, there can be many reasons why your request for data might fail.
Common issues include the following:
- The server might be down.
- You might have used an incorrect or defunct URL.
- You might not have the right permissions.
Because of that, our `Response` object contains more than **just** the data we have requested.
It contains a `status_code` property, which lets us know what **kind** of response the server gave. Anything other than `200` means that the request failed.
```
resp.status_code
```
The `Response` object also contains the response **headers** sent by the server. Every web server you visit transmits one or more headers to the client you're using (web browser, etc.). Most of the time you don't need to worry about these, but when programming with API's, you may find them useful.
The `Content-Type` header, for instance, lets us confirm that the data we received was in fact formatted as JSON.
Note that our `Response` object has converted these headers to a Python dictionary for ease of access.
```
resp.headers
```
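Since `resp.headers` behaves like a Python dictionary, we can also pull out a single header. For example (a small sketch using the response we already have):
```
# Using .get() avoids a KeyError if the server did not send this header
resp.headers.get('Content-Type')
```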
Each HTTP response also has a **body**. This is either the data we have requested, or some type of error message.
The data can be formatted in many different ways. Most plain web pages are formatted as `text/html`. This doesn't actually mean much to Python, since Python doesn't have an HTML data type. But you can view the contents of the body as a Python string by evaluating `resp.text`.
```
resp.text
```
Notice the outer quotation marks alerting us that this is a string. A giant string is no fun to work with as data. Fortunately, if the body of the response has been correctly formatted as JSON, we can easily convert it to more useful Python data types.
`resp.json()` converts the **body** of the response, which is the data we requested, into native Python types: strings, numeric types, lists, and dictionaries.
**Note**: Not all API's return JSON by default or even at all. Many use XML. If you call `.json()` on a `Response` that does not contain JSON-formatted data, Python will raise an exception.
```
data_us_daily = resp.json()
```
Let's look at this data. What Python data types do you see here?
```
data_us_daily
```
We have a Python list of dictionaries, each of which has the same keys. This is a typical way to represent a table of data in Python.
The `pandas` library, however, provides the `DataFrame` type, which makes working with tabular data much easier.
The `DataFrame.from_records` method takes a list of Python dictionaries and converts it into a table, where the shared keys are the table columns, and the values become the values in each row.
```
data_us_daily = pd.DataFrame.from_records(data_us_daily)
```
Now we can really see the tabular nature of this data. From here, we can use `pandas` methods to clean, sort, filter, aggregate, and even plot the data. We can also export it easily to CSV.
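As a small example, here is one way to sort the table and write it out as CSV. This is a sketch: it assumes the response includes a `date` field, as documented by the Covid Tracking Project; adjust the column name if the API's fields differ.
```
# Sort by the (assumed) 'date' column and export the table to a CSV file
data_us_daily.sort_values('date').to_csv('us_covid_daily.csv', index=False)
```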
We'll come back to `pandas` later in the workshop. For now, let's tackle a slightly more complicated API.
#### Making repeated requests
The `requests` library is great. But because HTTP requests can be complicated, there are certain steps we will usually want to take when making requests -- like checking for status errors, decoding content, etc. -- that can become repetitive if we have to write them out every time.
So let's create a Python **function** to handle all of that housekeeping.
Our function will take some arguments:
- a url
- an optional dictionary of URL parameters (to be explained later)
- an optional dictionary of HTTP headers
It will return:
- The body of the HTTP response, if the request succeeded.
- Otherwise, it will raise a Python exception.
```
def get_data(url, params=None, headers=None): # We'll talk about these later
'''Accepts a url, which should be a string.
Optionally, accepts a dictionary of URL parameters and a custom HTTP header.'''
try:
# We pass all our arguments to requests.get
resp = requests.get(url, params=params,
headers=headers)
# If the response is anything other than 200, raise_for_status() will raise an exception
resp.raise_for_status()
# Here we can check for a JSON response
        # The expression resp.headers.get('Content-Type', '') looks for a 'Content-Type' key in the response headers.
        # If it isn't there, it returns an empty string as a default, since some responses may not specify a Content-Type.
if 'application/json' in resp.headers.get('Content-Type', ''):
# If the header says it's JSON, parse it as JSON
data = resp.json()
return data
else:
# Otherwise, just return the response as text
return resp.text
# Here we trap any errors and print a helpful message for the user
except Exception as e: # Here we catch errors
        print('Error fetching data from url', url)
        # resp will not exist if the request itself failed (e.g., a connection error)
        if 'resp' in locals():
            print(resp.text)
# This will cause the exception to bubble up in the stack trace, which is helpful for debugging
raise
```
If you've never used `try` and `except` before, these Python keywords provide ways for us to catch and handle errors gracefully. They are particularly useful when working with HTTP data, since you can't really predict how the web server you're sending requests to will behave.
If no errors/exceptions occur in processing the `try` block, Python will skip the `except` block altogether.
At the moment, our `except` block just prints an error message to the screen. But in other situations, you might want to log the errors to a file, or take some other action, depending on the type of error.
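For instance, you might catch specific exception classes so that different failures get different treatment. This sketch is separate from our `get_data` function and just illustrates the idea:
```
import requests

# Handle HTTP errors and connection problems separately
try:
    resp = requests.get('https://api.covidtracking.com/v1/us/daily.json')
    resp.raise_for_status()
except requests.exceptions.HTTPError as err:
    # The server responded, but with an error status code (4xx or 5xx)
    print('HTTP error:', err)
except requests.exceptions.ConnectionError as err:
    # We never reached the server at all
    print('Connection problem:', err)
```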
#### Getting COVID-19 data by country
The [COVID 19 API](https://covid19api.com/) collects data from various sources and provides it in JSON format.
This API is a bit more complex, in that we need to specify both a country and a date range when making our requests.
We can check out the documentation on Postman:
[https://documenter.getpostman.com/view/10808728/SzS8rjbc](https://documenter.getpostman.com/view/10808728/SzS8rjbc)
If we consult the documentation for the endpoint **By Country Total**, we see that the URL should contain the name of the country in a specific format called a _slug_. (This is a format that removes spaces, capitalization, and characters that are more difficult to parse when constructing URLs.)
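As a rough illustration only (this isn't necessarily the API's exact algorithm -- we'll look up the real slugs in a moment), a slug is typically produced by lowercasing a name and replacing its spaces with hyphens:
```
def rough_slug(name):
    # Illustrative only: lowercase the name and hyphenate the spaces
    return name.lower().replace(' ', '-')

rough_slug('United Kingdom')  # 'united-kingdom'
```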
How do we find out the slug? There's an API endpoint for that, too. So our first step is to get the list of slugs and find the one for the country whose data we want to retrieve.
```
countries_url = 'https://api.covid19api.com/countries'
# We can use our new function to get this data
country_metadata = get_data(countries_url)
```
Note how the country metadata is presented. Again, we have a list of dictionaries, each of which contains the name of a country, its slug, and its ISO code.
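We can peek at the first few entries to see that structure for ourselves; the `Country` and `Slug` keys are the ones we'll rely on below.
```
# Each entry is a dictionary with keys such as 'Country' and 'Slug'
country_metadata[:3]
```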
#### Exercise
To get data for a specific country, we can use the following URL:
```
covid_country_url = 'https://api.covid19api.com/total/country/{country_slug}/status/confirmed'
```
We need to replace the `country_slug` in curly braces with the actual slug for the country we are interested in.
How would you use `country_metadata` to look up the slug for a specific country by name, _e.g._, Germany? Use only Python code.
#### Answer
There are multiple valid approaches. Here's one handy way.
```
country_data_dict = {c['Country']: c for c in country_metadata}
```
This is called a **dictionary comprehension**. It's basically a `for` loop embedded in a Python dictionary expression. You can use comprehensions to create Python dicts, lists, and sets.
Here we convert a list of dictionaries into a dictionary of dictionaries. That allows us to look up the metadata for each country by its more standard name.
```
country_data_dict = {c['Country']: c for c in country_metadata}
```
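For comparison, the same dictionary built with an ordinary `for` loop would look like this:
```
country_data_dict = {}
for c in country_metadata:
    # The country's standard name becomes the key; the whole record is the value
    country_data_dict[c['Country']] = c
```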
Now we can find the slug like so:
```
germany_slug = country_data_dict['Germany']['Slug']
```
To create the URL for the _By Country Total_ endpoint, we can use string formatting.
The part in curly braces will be replaced by whatever value we pass to the `.format` method as a keyword argument, where the keyword matches the name inside the curly braces.
Note that `.format` is actually a method defined on the string itself. All string objects in Python have this method available.
```
covid_country_url = 'https://api.covid19api.com/total/country/{country_slug}/status/confirmed'
germany_url = covid_country_url.format(country_slug=germany_slug)
```
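As an aside, an f-string gives the same result when the value is already in hand; `.format` is handy here mainly because we can store the template string and fill it in later.
```
# Equivalent result using an f-string (the substitution happens immediately)
germany_url = f'https://api.covid19api.com/total/country/{germany_slug}/status/confirmed'
```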
To get country COVID data for a range of dates, we can supply a `from` and a `to` date as URL parameters.
URL parameters are the parts of the URL that follow a question mark. They typically have the form `key=value` where `key` is the parameter name and `value` is the associated value. You can think of them like keywords you enter into a search engine using an Advanced Search form.
Constructing a URL with parameters in Python is straightforward with the `requests` library. As we've seen, it takes an optional keyword argument called `params`, which should be a dictionary mapping keys to values.
The Covid API documentation indicates that the date value should conform to a particular format. Assuming we want data for each day starting at midnight, we can use string formatting to simplify creation of these parameters.
```
date_str = '{date}T00:00:00Z'
params = {'from': date_str.format(date='2020-03-01'),
'to': date_str.format(date='2021-01-31')}
germany_data = get_data(germany_url, params=params)
```
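If you're curious what the final URL looks like once the parameters are attached, one way to inspect it (a sketch using the `PreparedRequest` object from `requests`) is:
```
# Build the request without sending it, just to see the encoded URL
prepared = requests.Request('GET', germany_url, params=params).prepare()
print(prepared.url)
```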
#### Exercise
Can you write a function that accepts the following:
- a country name as a string, e.g., `'Germany'`
- a from-date
- a to-date
and that returns the case data for that country?
**Requirements**
1. We want to be able to pass in the standard country names in English, not the slugs.
2. We want to pass in the dates as strings of the format YEAR-MONTH-DAY.
3. We want to receive the data for the country that we identified.
4. **Bonus**: If the user submits a country name that's not in the list, we want to catch it gracefully, printing an error message for the user but not breaking the function
**Answer**
```
def get_country_data(country, from_date, to_date):
'''First argument should be a Python string.
Second and third arguments should be Python strings of the format YEAR-MONTH-DAY.'''
# Uses the date_str we defined above to create the parameters
params = {'from': date_str.format(date=from_date),
'to': date_str.format(date=to_date)}
try:
# Uses our predefined dictionary to retrieve the slug
# In a try/except block to catch cases where the country name we provided isn't in the dictionary
slug = country_data_dict[country]['Slug']
# If a dictionary doesn't have a certain key, a KeyError is raised
except KeyError:
# Error message for the user
print("Country not found: ", country)
return
# Creates the URL for this country
url = covid_country_url.format(country_slug=slug)
# Calls our predefined function
data = get_data(url, params=params)
# Don't forget to return something!
return data
get_country_data('United Kingdom', '2020-03-01', '2021-01-26')
```
What if we want to return data for multiple countries at the same time? We can refactor our function using a `for` loop and a list.
```
def get_country_data(countries, from_date, to_date):
'''First argument should be a Python list.
Second and third arguments should be Python strings of the format YEAR-MONTH-DAY.'''
# Uses the date_str we defined above to create the parameters
params = {'from': date_str.format(date=from_date),
'to': date_str.format(date=to_date)}
# An empty list to hold the data for all the countries
all_data = []
    # Loops through the list of countries
for country in countries:
try:
# Uses our predefined dictionary to retrieve the slug
# In a try/except block to catch cases where the country name we provided isn't in the dictionary
slug = country_data_dict[country]['Slug']
# If a dictionary doesn't have a certain key, a KeyError is raised
except KeyError:
# Error message for the user
print("Country not found: ", country)
# Goes to the next iteration of the loop
continue
# Creates the URL for this country
url = covid_country_url.format(country_slug=slug)
# Calls our predefined function
data = get_data(url, params=params)
# Adds these results to the original set
# Using extend (rather than append) prevents us from getting a list of lists
all_data.extend(data)
# Don't forget to return something!
return all_data
three_countries = get_country_data(['Germany', 'China', 'United States of America'],
from_date='2020-03-01',
to_date='2021-01-26')
```
Assuming we used `.extend` to build our list, we can create a `DataFrame` with this data, which should be a single list of dictionaries.
```
comp_data = pd.DataFrame.from_records(three_countries)
```
#### Analyzing COVID-19 country data
We can filter our DataFrame and can even graph our data using `pandas` built-in plotting functions, which use `matplotlib` under the hood.
Let's look at how we would graph the trend of cases for a single country.
Our dataset contains the cumulative total by date for each country. If we want to plot cases over time for each country, the first step is to convert the date column to a datetime format that Python can recognize. (Datetime values transmitted via JSON will typically be either strings or integers.)
`pandas` makes such conversions fairly straightforward. The `pandas.to_datetime` method recognizes strings in a wide variety of standard formats and converts them to Python datetime objects.
```
comp_data['Date'] = pd.to_datetime(comp_data['Date'])
```
We can now use the `DataFrame.loc` property to isolate those rows where the `Country` column contains the name `Germany`.
```
germany = comp_data.loc[comp_data['Country'] == 'Germany']
```
To create a timeseries plot, we can use the `DataFrame.plot` method. In this case, since there are multiple columns, we'll want to supply the `x` and `y` arguments to the `plot` method, indicating which column to use as which axis.
```
germany.plot(x='Date', y='Cases')
```
Our plot could use some better formatting and a legend, all of which we could add by accessing the plot's `matplotlib` attributes. But let's step through a more complex example.
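As a quick example of that, `DataFrame.plot` returns a `matplotlib` `Axes` object, so we could add a title and axis label like this:
```
# Customize the plot via the Axes object that pandas returns
ax = germany.plot(x='Date', y='Cases', legend=True)
ax.set_title('Cumulative confirmed cases, Germany')
ax.set_ylabel('Cases')
```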
A `DataFrame` has an extremely useful `.groupby` method, which allows us to summarize results by grouping rows by certain unique values.
We can group our data by country to find the total number of cases per country.
```
grp = comp_data.groupby('Country')
grp['Cases'].max()
```
In order to compare the data for multiple countries, the best approach is to `groupby` both `Country` and `Date` columns. This will create what's called a `MultiIndex` in `pandas` -- an index with two levels. It's similar to what you can do with a pivot table in Excel.
```
grp2 = comp_data.groupby(['Date', 'Country'])
```
Now if we take the `max` of the `Cases` column (on our `GroupBy` object), we can see that for each aggregate value in the `Cases` column, the first level is the date and the second is the country.
```
comp_tbl = grp2['Cases'].max()
```
Unfortunately, if we try to plot it as is, `matplotlib` won't know what to do with the different levels. One solution is to `unstack` the multi-level index, which will turn the unique values from one of the levels of the index into separate columns. (Again, this is very similar to the behavior of a pivot table.)
`.unstack` takes an optional argument corresponding to the level you want to convert, starting with `0` for the outermost level. In this case, we want our dates on the x-axis, so we want to keep `Date` as the index. So we will unstack on the second level (`1`), which is the `Country`.
Now calling `plot` on this should default to drawing a line graph where the x-axis is the date, the y-axis is the total number of cases, and each line represents a country.
```
comp_tbl.unstack(1).plot()
```
### In conclusion
In this workshop we've seen how to do the following:
- Query an API with `requests` and parse a JSON response
- Use URL parameters in constructing our request
- Use `try` and `except` to trap errors that might arise
- Use `pandas` to create a `DataFrame` from JSON data
- Use functions to encapsulate and reuse code
- Group, aggregate, and plot data with a `DataFrame`
For another set of examples that explore other API topics, see the material for last year's [Python for API's workshop](https://github.com/gwu-libraries/gwlibraries-workshops/blob/master/python-for-apis/python_api_workshop.ipynb), which covers retrieving data from APIs with paginated results and using API keys.
A closely related topic but one beyond the scope of our workshop is **web scraping**, which is useful when you need to extract data from a website that does not provide a REST API. There are a number of resources available on web scraping in Python, including the O'Reilly book [Web Scraping with Python](https://learning.oreilly.com/library/view/web-scraping-with/9781491985564/), available to GW students, faculty, and staff via our [Safari Tech Books](https://www.safaribooksonline.com/library/view/temporary-access) subscription.
# 7.1 Extracting Image Features with an Autoencoder
```
import torch
import torchvision
import torch.nn.functional as F
from torch import nn, optim
from torch.autograd import Variable
from torchvision import transforms, datasets
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
import numpy as np
%matplotlib inline
torch.manual_seed(1) # reproducible
# Hyper Parameters
EPOCH = 10
BATCH_SIZE = 64
USE_CUDA = torch.cuda.is_available()
DEVICE = torch.device("cuda" if USE_CUDA else "cpu")
print("Using Device:", DEVICE)
# Fashion MNIST digits dataset
trainset = datasets.FashionMNIST(
root = './.data/',
train = True,
download = True,
transform = transforms.ToTensor()
)
train_loader = torch.utils.data.DataLoader(
dataset = trainset,
batch_size = BATCH_SIZE,
shuffle = True,
num_workers = 2
)
class Autoencoder(nn.Module):
def __init__(self):
super(Autoencoder, self).__init__()
self.encoder = nn.Sequential(
nn.Linear(28*28, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 12),
nn.ReLU(),
nn.Linear(12, 3), # compress to 3 features which can be visualized in plt
)
self.decoder = nn.Sequential(
nn.Linear(3, 12),
nn.ReLU(),
nn.Linear(12, 64),
nn.ReLU(),
nn.Linear(64, 128),
nn.ReLU(),
nn.Linear(128, 28*28),
nn.Sigmoid(), # compress to a range (0, 1)
)
def forward(self, x):
encoded = self.encoder(x)
decoded = self.decoder(encoded)
return encoded, decoded
autoencoder = Autoencoder().to(DEVICE)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=0.005)
criterion = nn.MSELoss()
# original data (first row) for viewing
view_data = trainset.train_data[:5].view(-1, 28*28)
view_data = view_data.type(torch.FloatTensor)/255.
def train(autoencoder, train_loader):
autoencoder.train()
for step, (x, label) in enumerate(train_loader):
x = x.view(-1, 28*28).to(DEVICE)
y = x.view(-1, 28*28).to(DEVICE)
label = label.to(DEVICE)
encoded, decoded = autoencoder(x)
loss = criterion(decoded, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
for epoch in range(1, EPOCH+1):
train(autoencoder, train_loader)
# plotting decoded image (second row)
test_x = view_data.to(DEVICE)
_, decoded_data = autoencoder(test_x)
# Compare the original images with the decoded results
f, a = plt.subplots(2, 5, figsize=(5, 2))
print("[Epoch {}]".format(epoch))
for i in range(5):
img = np.reshape(view_data.data.numpy()[i],(28, 28))
a[0][i].imshow(img, cmap='gray')
a[0][i].set_xticks(()); a[0][i].set_yticks(())
for i in range(5):
img = np.reshape(decoded_data.to("cpu").data.numpy()[i], (28, 28))
a[1][i].imshow(img, cmap='gray')
a[1][i].set_xticks(()); a[1][i].set_yticks(())
plt.show()
```
# Looking into the Latent Variables
```
# visualize in 3D plot
view_data = trainset.train_data[:200].view(-1, 28*28)
view_data = view_data.type(torch.FloatTensor)/255.
test_x = view_data.to(DEVICE)
encoded_data, _ = autoencoder(test_x)
encoded_data = encoded_data.to("cpu")
CLASSES = {
0: 'T-shirt/top',
1: 'Trouser',
2: 'Pullover',
3: 'Dress',
4: 'Coat',
5: 'Sandal',
6: 'Shirt',
7: 'Sneaker',
8: 'Bag',
9: 'Ankle boot'
}
fig = plt.figure(figsize=(10,8))
ax = Axes3D(fig)
X = encoded_data.data[:, 0].numpy()
Y = encoded_data.data[:, 1].numpy()
Z = encoded_data.data[:, 2].numpy()
labels = trainset.train_labels[:200].numpy()
for x, y, z, s in zip(X, Y, Z, labels):
name = CLASSES[s]
color = cm.rainbow(int(255*s/9))
ax.text(x, y, z, name, backgroundcolor=color)
ax.set_xlim(X.min(), X.max())
ax.set_ylim(Y.min(), Y.max())
ax.set_zlim(Z.min(), Z.max())
plt.show()
```
```
import numpy as np
import json
import warnings
import operator
import h5py
from keras.models import model_from_json
from keras import backend as K
from matplotlib import pyplot as plt
warnings.filterwarnings("ignore")
size_title = 18
size_label = 14
n_pred = 2
base_path = "data/evaluate_multiply_usage_sample_normalized/"
path_data_dict = base_path + "data_dict.txt"
path_inverted_wt = base_path + "inverted_weights.txt"
path_usage_wt = base_path + "usage_prediction.txt"
path_class_wt = base_path + "class_weights.txt"
path_test_data = base_path + "test_paths_dict.txt"
model_path = base_path + "trained_model.hdf5"
def read_file(file_path):
with open(file_path, 'r') as data_file:
data = json.loads(data_file.read())
return data
class_weights = read_file(path_class_wt)
usage_weights = read_file(path_usage_wt)
inverted_weights = read_file(path_inverted_wt)
data_dict = read_file(path_data_dict)
def create_model(model_path):
trained_model = h5py.File(model_path, 'r')
model_config = json.loads(trained_model.get('model_config').value)
loaded_model = model_from_json(model_config)
dictionary = json.loads(trained_model.get('data_dictionary').value)
compatibile_tools = json.loads(trained_model.get('compatible_tools').value)
reverse_dictionary = dict((str(v), k) for k, v in dictionary.items())
model_weights = list()
weight_ctr = 0
while True:
try:
d_key = "weight_" + str(weight_ctr)
weights = trained_model.get(d_key).value
model_weights.append(weights)
weight_ctr += 1
except Exception as exception:
break
# set the model weights
loaded_model.set_weights(model_weights)
return loaded_model, dictionary, reverse_dictionary, compatibile_tools
model, dictionary, reverse_dictionary, compatibile_tools = create_model(model_path)
reverse_dictionary
def verify_model(model, tool_sequence, labels, dictionary, reverse_dictionary, compatible_tools, topk=20, max_seq_len=25):
tl_seq = tool_sequence.split(",")
last_tool_name = reverse_dictionary[str(tl_seq[-1])]
last_compatible_tools = compatible_tools[last_tool_name]
sample = np.zeros(max_seq_len)
for idx, tool_id in enumerate(tl_seq):
sample[idx] = int(tool_id)
sample_reshaped = np.reshape(sample, (1, max_seq_len))
tool_sequence_names = [reverse_dictionary[str(tool_pos)] for tool_pos in tool_sequence.split(",")]
print("Tool seq: %s" % ",".join(tool_sequence_names))
# predict next tools for a test path
prediction = model.predict(sample_reshaped, verbose=0)
prediction = np.reshape(prediction, (prediction.shape[1],))
prediction_pos = np.argsort(prediction, axis=-1)
# get topk prediction
topk_prediction_pos = prediction_pos[-topk:]
topk_prediction_val = [np.round(prediction[pos] * 100, 2) for pos in topk_prediction_pos]
# read tool names using reverse dictionary
pred_tool_ids = [reverse_dictionary[str(tool_pos)] for tool_pos in topk_prediction_pos]
actual_next_tool_ids = list(set(pred_tool_ids).intersection(set(last_compatible_tools.split(","))))
#print("Predicted tools: %s" % ",".join(pred_tool_ids))
print()
pred_tool_ids_sorted = dict()
for (tool_pos, tool_pred_val) in zip(topk_prediction_pos, topk_prediction_val):
tool_name = reverse_dictionary[str(tool_pos)]
if tool_name in actual_next_tool_ids:
pred_tool_ids_sorted[tool_name] = tool_pred_val
pred_tool_ids_sorted = dict(sorted(pred_tool_ids_sorted.items(), key=lambda kv: kv[1], reverse=True))
cls_wt = dict()
usg_wt = dict()
inv_wt = dict()
ids_tools = dict()
keys = list(pred_tool_ids_sorted.keys())
for k in keys:
try:
cls_wt[k] = np.round(class_weights[str(data_dict[k])], 2)
usg_wt[k] = np.round(usage_weights[k], 2)
inv_wt[k] = np.round(inverted_weights[str(data_dict[k])], 2)
except:
continue
print("Predicted tools: \n")
print(pred_tool_ids_sorted)
print()
print("Class weights: \n")
cls_wt = dict(sorted(cls_wt.items(), key=lambda kv: kv[1], reverse=True))
print(cls_wt)
print()
print("Usage weights: \n")
usg_wt = dict(sorted(usg_wt.items(), key=lambda kv: kv[1], reverse=True))
print(usg_wt)
print()
total_usage_wt = np.sum(list(usg_wt.values()))
print("Sum usage wt: %0.4f" % (total_usage_wt))
print()
print("Inverted weights: \n")
inv_wt = dict(sorted(inv_wt.items(), key=lambda kv: kv[1], reverse=True))
print(inv_wt)
for key in pred_tool_ids_sorted:
ids_tools[key] = dictionary[key]
print()
print("Tool ids")
print(ids_tools)
print("======================================")
return cls_wt, usg_wt, inv_wt, pred_tool_ids_sorted
topk = 10
tool_seq = "942" # ,875,223,17
class_wt, usage_wt, inverse_wt, pred_tools = verify_model(model, tool_seq, "", dictionary, reverse_dictionary, compatibile_tools, topk)
list_cls_wt = list()
list_usage_wt = list()
list_pred_wt = list()
list_inv_wt = list()
division_pt = int(len(pred_tools) / 2)
for tool in pred_tools:
list_pred_wt.append(pred_tools[tool])
#try:
list_inv_wt.append(inverse_wt[tool])
list_usage_wt.append(usage_wt[tool])
#except:
#list_inv_wt.append(1)
def plot_scatter(y_val, title, xlabel, ylabel):
x_val = range(1, len(y_val) + 1)
plt.figure(figsize=(8, 8))
plt.plot(x_val[:division_pt], y_val[:division_pt], 'ro')
plt.plot(x_val[division_pt:], y_val[division_pt:], 'b^')
plt.xlabel(xlabel, size=size_label)
plt.ylabel(ylabel, size=size_label)
plt.legend([("First top-%s" % str(division_pt)), ("Next top-%s" % str(division_pt))])
plt.title(title, size=size_title)
plt.grid(True)
plt.show()
plot_scatter(list_usage_wt, "Usage weights for tools", "Number of tools", "Class weights")
#plot_scatter(ave_prediction_weights, ave_usage_weights, "Prediction vs usage weights", "Prediction scores", "Usage weights")
#plot_scatter(ave_prediction_weights, ave_inverted_weights, "Prediction vs inverted weights", "Prediction scores", "Inverted weights")
```
```
import numpy as np
import pandas as pd
train = pd.read_csv("../data/Train.csv")
test = pd.read_csv("../data/Test.csv")
```
## SOME BASIC EDA
```
print("Train Data shape: ",train.shape)
print("Test Data shape: ",test.shape)
```
#### Information on each of the columns in the dataset
```
train.info()
test.info()
## We can see from this that we have 4 categorical columns and 13 numerical columns
train.head()
test.head()
```
#### Investigate the number of unique values in each column
```
train.nunique()
test.nunique()
```
#### Let us explore the number of missing / NaN values in each column of the dataset
```
train.isnull().sum()
test.isnull().sum()
```
### Let us take care of the missing / NaN values
```
## Note: there are many ways this can be done; I will be using the fillna() method in this notebook
## You can explore the "SimpleImputer" package; check out the documentation here https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html
## OUT OF THE 4 CATEGORICAL COLUMNS ONLY "TOP_PACK" and "REGION" have missing values
train["TOP_PACK"] = train["TOP_PACK"].fillna(value= "None")
train["REGION"] = train["REGION"].fillna(value= "None")
test["TOP_PACK"] = test["TOP_PACK"].fillna(value= "None")
test["REGION"] = test["REGION"].fillna(value= "None")
## I filled the missing values with a new categorical value "None"
## ALMOST ALL THE NUMERICAL COLUMNS HAVE MISSING VALUES
## I FILL THE MISSING VALUES WITH A VALUE OF "0.0", you can use mean, median, mode values for this also (whichever works best)
for i in ['MONTANT', 'FREQUENCE_RECH', 'REVENUE',
'ARPU_SEGMENT', 'FREQUENCE', 'DATA_VOLUME', 'ON_NET', 'ORANGE', 'TIGO',
'ZONE1', 'ZONE2', 'FREQ_TOP_PACK']:
train[i] = train[i].fillna(value = 0.0)
test[i] = test[i].fillna(value = 0.0)
train.isnull().sum()
test.isnull().sum()
### Mission successful: all missing values taken care of
```
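As an alternative to `fillna()`, here is a minimal sketch of the `SimpleImputer` approach mentioned in the comments above; you would use it in place of the zero-filling, and the column subset here is just for illustration.
```
from sklearn.impute import SimpleImputer
## a subset of the numerical columns, just for illustration
num_cols = ['MONTANT', 'FREQUENCE_RECH', 'REVENUE']
imputer = SimpleImputer(strategy='median')
train[num_cols] = imputer.fit_transform(train[num_cols])
## reuse the medians learned from the training data on the test data
test[num_cols] = imputer.transform(test[num_cols])
```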
### MODEL BUILDING PART 1
```
## We drop some columns:
## 1) "user_id": because each of values is unique, so its redundant to the model (i think *smiles*)
## 2) "MRG": because it possesses only one value for all the entries, so its redundant (you can attempt to include it and comapre results)
## 3) "CHURN": it is the target columns it will be assigned to "y"
X = train.drop(["user_id", "MRG", "CHURN"], axis =1)
y = train["CHURN"]
X_test = test.drop(["user_id", "MRG"], axis = 1)
```
#### Encoding of the Categorical Features
```
### I will be using the "LabelEncoder()" package for this; there are other options which you can experiment with, like OneHotEncoder -- check out
### https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html#sklearn.preprocessing.OneHotEncoder
### https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OrdinalEncoder.html#sklearn.preprocessing.OrdinalEncoder
from sklearn.preprocessing import LabelEncoder
for i in ["REGION", "TENURE", "TOP_PACK"]:
    le = LabelEncoder()
    ## fit on the combined train/test values so both sets share the same encoding
    le.fit(pd.concat([X[i], X_test[i]]))
    X[i] = le.transform(X[i])
    X_test[i] = le.transform(X_test[i])
X.head()
X_test.head()
## We can see that all the values are now integers/floats, which our machine learning models can work with
```
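For reference, a one-hot alternative to label encoding (as mentioned in the comments above) could look like the sketch below, using `pd.get_dummies`; concatenating first keeps the train and test dummy columns aligned.
```
## one-hot encode the categorical columns instead of label encoding them
combined = pd.concat([X, X_test], keys=['train', 'test'])
combined = pd.get_dummies(combined, columns=['REGION', 'TENURE', 'TOP_PACK'])
X_ohe, X_test_ohe = combined.xs('train'), combined.xs('test')
```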
#### Split data into train and validation sets
```
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size = 0.2, stratify = y, random_state = 0)
## "stratify = y", to make sure the districution of the target "CHURN" are evenly distributed in the train and validation sets
```
### MODEL BUILDING PART 2
```
## create a pipeline, that scales our data using the "StandardScaler()" module, and pass it to the model
## check out more stuff on "Pipeline", https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
#pipe = Pipeline([("scaler", StandardScaler()), ("model", RandomForestClassifier())])
pipe = Pipeline([("scaler", StandardScaler()), ("model", LogisticRegression())])
pipe.fit(X_train, y_train)
prediction = pipe.predict(X_val)
from sklearn.metrics import log_loss, accuracy_score
score = accuracy_score(y_val, prediction)
print("Accuracy on validation set: ",score)
```
## MAKING PREDICTIONS AND SAVING TO A SUBMISSION FILE
```
## we fit on the entire data this time
pipe.fit(X, y)
## we are now making predictions based on the "test dataset" stored in the variable "X_test" (if lost, check the cells above)
## we are using "predict_proba()" because we are required to predict the likelihood (probability) of a user churning
## "[:,1]" selects only the probabilities of churning (i.e. 1); if we use "[:, 0]" it selects the probabilities of not churning (i.e. 0)
pred = pipe.predict_proba(X_test)[:,1]
## we create a dictionary with the "user_id" from the "test_dataset" and the new predictions "pred"
## we turn the dictionary into a pandas dataframe using "pd.DataFrame()"
## we then export the dataframe into a csv file using ".to_csv"
pd.DataFrame({"user_id": test["user_id"], "CHURN": pred}).to_csv("starter-submission.csv", index = False)
## gave a score of 0.3029 on the Public Leaderboard
```
### WAY FORWARD: 1) try other classification models 2) do some feature engineering / feature dropping / filling missing values with the mean
A notebook to compute enrichment analysis using a SPARQL endpoint
```
import sys
import rdflib
from IPython.core.display import display, HTML
from SPARQLWrapper import SPARQLWrapper, JSON, XML
import scipy.stats as ss
from decimal import Decimal
import pandas as pd, io
from pandas.io.json import json_normalize
pd.set_option("display.max_colwidth",300)
pd.set_option('colheader_justify', 'left')
def getPrefixDec(prefixes):
l = ""
for k,v in prefixes.items():
l = l + "PREFIX " + k + ": <" + v + ">" + "\r\n"
return l
def getValuesDec(entities):
l = ""
for i in entities:
if(i[0:4] == "http"):
l = l + "(<" + i + ">) "
else:
l = l + "(" + i + ") "
return l
def getPopulationCount(endpoint, prefixes, triplepattern):
prefixDec = getPrefixDec(prefixes)
sparql = SPARQLWrapper(endpoint)
query = prefixDec + "SELECT (count(*) AS ?c) {" + triplepattern + " }"
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
count = int(results["results"]["bindings"][0]["c"]["value"])
return count
def getProperties(endpoint, prefixes, entities = "", triplepattern = ""):
prefixDec = getPrefixDec(prefixes)
eValuesDec = ""
if len(entities) != 0:
eValuesDec = " VALUES (?s) {" + getValuesDec(entities) + "}"
sparql = SPARQLWrapper(endpoint)
query = prefixDec + """
SELECT ?p ?plabel ?c
{
{{
SELECT ?p (count(distinct ?s) AS ?c)
{ """ + eValuesDec + """
""" + triplepattern + """
?s ?p ?o .
} GROUP BY ?p
ORDER BY DESC(?c)
}}
OPTIONAL {
?p dct:title ?plabel
}
}
"""
#print(query)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
return results["results"]["bindings"]
def getFrequencyValuesForSelectedEntitiesAndPredicates(endpoint, prefixes, entities = "", predicates = "", triplepattern = ""):
prefixDec = getPrefixDec(prefixes)
eValuesDec = " VALUES (?s) {" + getValuesDec(entities) + "}"
pValuesDec = " VALUES (?p) {" + getValuesDec(predicates) + "}"
sparql = SPARQLWrapper(endpoint)
query = prefixDec + """
SELECT ?p ?o ?olabel ?sc (count(?o) AS ?pc)
{
{{
SELECT ?p ?o (count(?o) AS ?sc)
{ """ + eValuesDec + pValuesDec + """
?s ?p ?o
} GROUP BY ?p ?o
}}
""" + triplepattern + """
?s ?p ?o .
OPTIONAL {
?o dct:title ?olabel
}
}
"""
#print(query)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
return results["results"]["bindings"]
def makeCURIE(uri,prefixes):
l = ""
for prefix,uribase in prefixes.items():
# now match substr
size = len(uribase)
if uribase == uri[0:size]:
# match
l = prefix + ":" + uri[size:]
if l == "":
l = uri
return l
def performStatistics(nSamples, nPopulation, fv):
ret = dict()
meta = dict()
meta["nSamples"] = nSamples
meta["nPopulation"] = nPopulation
ret["meta"] = meta
results = []
for i in fv:
o = dict()
o["predicate"] = str(i["p"]["value"])
o["attribute"] = str(i["o"]["value"])
o['attribute_label'] = str(i["olabel"]["value"])
o["sample_count"] = int(i["sc"]["value"])
o["population_count"] = int(i["pc"]["value"])
hpd = ss.hypergeom(nPopulation, o["population_count"], nSamples)
prob = hpd.pmf(o["sample_count"])
o["prob"]= prob
results.append(o)
ret["results"] = results
return ret
def printResults(results,prefixes, pfilter = 0.001):
meta = results["meta"]
print("Sample size: " + str(meta['nSamples']))
print("Population size: " + str(meta['nPopulation']))
for i in results["results"]:
p = makeCURIE( i['predicate'], prefixes)
o = makeCURIE( i['attribute'], prefixes)
ol = i['attribute_label']
if i['prob'] <= pfilter:
if i['prob'] < 0.0001:
prob = '{0:.2E}'.format(Decimal(i['prob']))
else:
prob = '{0:.5f}'.format(Decimal(i['prob']))
print(" " + str(i['sample_count']) + " / " + str(i['population_count']) + " p-value: " + str(prob) + " " + str(p) + " " + str(o) + " " + str(ol))
#print(getPrefixDec({ "drugbank":"http://bio2rdf.org/drugbank:","dv":"http://bio2rdf.org/drugbank_vocabulary:"}))
#print(getValuesDec( ["http://bio2rdf.org/drugbank:test", "drugbank:test2"]))
#print(makeCURIE("http://bio2rdf.org/drugbank_vocabulary:category",prefixes))
### drug example
endpoint = "http://bio2rdf.org/sparql"
prefixes = { "dct":"http://purl.org/dc/terms/", "drugbank":"http://bio2rdf.org/drugbank:","dv":"http://bio2rdf.org/drugbank_vocabulary:"}
sample_names = ["Eletriptan","Zolmitriptan","Dihydroergotamine","Almotriptan","Rizatriptan"]
sample_curies = ["drugbank:DB00216","drugbank:DB00315","drugbank:DB00320","drugbank:DB00918","drugbank:DB00953"]
population_tp = "?s rdf:type dv:Drug ."
attributes = ["dv:category","dv:group"]
nSamples = len(sample_curies)
nPopulation = getPopulationCount(endpoint, prefixes, population_tp)
print("There are " + str(nSamples) + " samples in a population of " + str(nPopulation))
#fv_test = getProperties(endpoint, prefixes, "", population_tp)
#table = json_normalize(fv_test)
#table
#table[['p.value','plabel.value','c.value']]
fv = getFrequencyValuesForSelectedEntitiesAndPredicates(endpoint, prefixes, sample_curies, attributes, population_tp)
results = performStatistics(nSamples, nPopulation, fv)
printResults(results, prefixes)
endpoint = "http://bio2rdf.org/sparql"
prefixes = { "dct":"http://purl.org/dc/terms/", "sgd":"http://bio2rdf.org/sgd:","sgd_resource":"http://bio2rdf.org/sgd_resource:","sv":"http://bio2rdf.org/sgd_vocabulary:"}
sample_curies = ["sgd_resource:S000004425gp","sgd_resource:S000005376gp","sgd_resource:S000004238gp","sgd_resource:S000003399gp","sgd_resource:S000005853gp"]
population_tp = "?s rdf:type sv:Protein ."
attributes = ["sv:function"]
nSamples = len(sample_curies)
nPopulation = getPopulationCount(endpoint, prefixes, population_tp)
fv = getFrequencyValuesForSelectedEntitiesAndPredicates(endpoint, prefixes, sample_curies, attributes, population_tp)
results = performStatistics(nSamples, nPopulation, fv)
printResults(results, prefixes)
```
|
github_jupyter
|
import sys
import rdflib
from IPython.core.display import display, HTML
from SPARQLWrapper import SPARQLWrapper, JSON, XML
import scipy.stats as ss
from decimal import Decimal
import pandas as pd, io
from pandas.io.json import json_normalize
pd.set_option("display.max_colwidth",300)
pd.set_option('colheader_justify', 'left')
def getPrefixDec(prefixes):
l = ""
for k,v in prefixes.items():
l = l + "PREFIX " + k + ": <" + v + ">" + "\r\n"
return l
def getValuesDec(entities):
l = ""
for i in entities:
if(i[0:4] == "http"):
l = l + "(<" + i + ">) "
else:
l = l + "(" + i + ") "
return l
def getPopulationCount(endpoint, prefixes, triplepattern):
prefixDec = getPrefixDec(prefixes)
sparql = SPARQLWrapper(endpoint)
query = prefixDec + "SELECT (count(*) AS ?c) {" + triplepattern + " }"
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
count = int(results["results"]["bindings"][0]["c"]["value"])
return count
def getProperties(endpoint, prefixes, entities = "", triplepattern = ""):
prefixDec = getPrefixDec(prefixes)
eValuesDec = ""
if len(entities) != 0:
eValuesDec = " VALUES (?s) {" + getValuesDec(entities) + "}"
sparql = SPARQLWrapper(endpoint)
query = prefixDec + """
SELECT ?p ?plabel ?c
{
{{
SELECT ?p (count(distinct ?s) AS ?c)
{ """ + eValuesDec + """
""" + triplepattern + """
?s ?p ?o .
} GROUP BY ?p
ORDER BY DESC(?c)
}}
OPTIONAL {
?p dct:title ?plabel
}
}
"""
#print(query)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
return results["results"]["bindings"]
def getFrequencyValuesForSelectedEntitiesAndPredicates(endpoint, prefixes, entities = "", predicates = "", triplepattern = ""):
prefixDec = getPrefixDec(prefixes)
eValuesDec = " VALUES (?s) {" + getValuesDec(entities) + "}"
pValuesDec = " VALUES (?p) {" + getValuesDec(predicates) + "}"
sparql = SPARQLWrapper(endpoint)
query = prefixDec + """
SELECT ?p ?o ?olabel ?sc (count(?o) AS ?pc)
{
{{
SELECT ?p ?o (count(?o) AS ?sc)
{ """ + eValuesDec + pValuesDec + """
?s ?p ?o
} GROUP BY ?p ?o
}}
""" + triplepattern + """
?s ?p ?o .
OPTIONAL {
?o dct:title ?olabel
}
}
"""
#print(query)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
return results["results"]["bindings"]
def makeCURIE(uri,prefixes):
l = ""
for prefix,uribase in prefixes.items():
# now match substr
size = len(uribase)
if uribase == uri[0:size]:
# match
l = prefix + ":" + uri[size:]
if l == "":
l = uri
return l
def performStatistics(nSamples, nPopulation, fv):
ret = dict()
meta = dict()
meta["nSamples"] = nSamples
meta["nPopulation"] = nPopulation
ret["meta"] = meta
results = []
for i in fv:
o = dict()
o["predicate"] = str(i["p"]["value"])
o["attribute"] = str(i["o"]["value"])
o['attribute_label'] = str(i["olabel"]["value"])
o["sample_count"] = int(i["sc"]["value"])
o["population_count"] = int(i["pc"]["value"])
hpd = ss.hypergeom(nPopulation, o["population_count"], nSamples)
prob = hpd.pmf(o["sample_count"])
o["prob"]= prob
results.append(o)
ret["results"] = results
return ret
def printResults(results,prefixes, pfilter = 0.001):
meta = results["meta"]
print("Sample size: " + str(meta['nSamples']))
print("Population size: " + str(meta['nPopulation']))
for i in results["results"]:
p = makeCURIE( i['predicate'], prefixes)
o = makeCURIE( i['attribute'], prefixes)
ol = i['attribute_label']
if i['prob'] <= pfilter:
if i['prob'] < 0.0001:
prob = '{0:.2E}'.format(Decimal(i['prob']))
else:
prob = '{0:.5f}'.format(Decimal(i['prob']))
print(" " + str(i['sample_count']) + " / " + str(i['population_count']) + " p-value: " + str(prob) + " " + str(p) + " " + str(o) + " " + str(ol))
#print(getPrefixDec({ "drugbank":"http://bio2rdf.org/drugbank:","dv":"http://bio2rdf.org/drugbank_vocabulary:"}))
#print(getValuesDec( ["http://bio2rdf.org/drugbank:test", "drugbank:test2"]))
#print(makeCURIE("http://bio2rdf.org/drugbank_vocabulary:category",prefixes))
### drug example
endpoint = "http://bio2rdf.org/sparql"
prefixes = { "dct":"http://purl.org/dc/terms/", "drugbank":"http://bio2rdf.org/drugbank:","dv":"http://bio2rdf.org/drugbank_vocabulary:"}
sample_names = ["Eletriptan","Zolmitriptan","Dihydroergotamine","Almotriptan","Rizatriptan"]
sample_curies = ["drugbank:DB00216","drugbank:DB00315","drugbank:DB00320","drugbank:DB00918","drugbank:DB00953"]
population_tp = "?s rdf:type dv:Drug ."
attributes = ["dv:category","dv:group"]
nSamples = len(sample_curies)
nPopulation = getPopulationCount(endpoint, prefixes, population_tp)
print("There are " + str(nSamples) + " samples in a population of " + str(nPopulation))
#fv_test = getProperties(endpoint, prefixes, "", population_tp)
#table = json_normalize(fv_test)
#table
#table[['p.value','plabel.value','c.value']]
fv = getFrequencyValuesForSelectedEntitiesAndPredicates(endpoint, prefixes, sample_curies, attributes, population_tp)
results = performStatistics(nSamples, nPopulation, fv)
printResults(results, prefixes)
endpoint = "http://bio2rdf.org/sparql"
prefixes = { "dct":"http://purl.org/dc/terms/", "sgd":"http://bio2rdf.org/sgd:","sgd_resource":"http://bio2rdf.org/sgd_resource:","sv":"http://bio2rdf.org/sgd_vocabulary:"}
sample_curies = ["sgd_resource:S000004425gp","sgd_resource:S000005376gp","sgd_resource:S000004238gp","sgd_resource:S000003399gp","sgd_resource:S000005853gp"]
population_tp = "?s rdf:type sv:Protein ."
attributes = ["sv:function"]
nSamples = len(sample_curies)
nPopulation = getPopulationCount(endpoint, prefixes, population_tp)
fv = getFrequencyValuesForSelectedEntitiesAndPredicates(endpoint, prefixes, sample_curies, attributes, population_tp)
results = performStatistics(nSamples, nPopulation, fv)
printResults(results, prefixes)
# Manual Feature Engineering
```
import featuretools as ft
import numpy as np
import pandas as pd
data = ft.demo.load_mock_customer()
data
transactions = data['transactions']
sessions = data['sessions']
customers = data['customers']
transactions.head()
sessions.head()
customers.head()
customers['joined_day'] = customers['join_date'].dt.day
customers['joined_month'] = customers['join_date'].dt.month
customers['joined_year'] = customers['join_date'].dt.year
customers.head()
transactions_and_sessions = transactions.merge(sessions, on='session_id', how='left')
transactions_and_sessions.head()
grp = transactions_and_sessions.groupby('customer_id')['amount'].agg(['mean','max','min'])
grp.columns = ['mean transaction_amount','max transaction_amount','min transaction_amount']
grp.head()
ss = customers.merge(grp, left_on='customer_id', right_index=True, how='left')
ss
```
# Automatic Feature Engineering
```
import featuretools as ft
data = ft.demo.load_mock_customer()
transactions_df = data['transactions'].merge(data['sessions']).merge(data['customers'])
products_df = data['products']
transactions_df
es = ft.EntitySet(id="customer_data")
es = es.entity_from_dataframe(entity_id="transactions",
                              dataframe=transactions_df,
                              index="transaction_id",
                              )
es
es = es.entity_from_dataframe(entity_id="products",
dataframe=products_df,
index="product_id",
)
es
new_relationship = ft.Relationship(es["products"]["product_id"],
es["transactions"]["product_id"])
es = es.add_relationship(new_relationship)
es
feature_matrix, feature_defs = ft.dfs(entityset=es,
target_entity="transactions",
agg_primitives=['mean','sum','mode'],
trans_primitives=['month','hour'])
feature_matrix
feature_defs
feature_matrix, feature_defs = ft.dfs(entityset=es,
target_entity="transactions",
agg_primitives=['mean','sum','mode'],
trans_primitives=['month','hour'],
max_depth=5)
feature_matrix
feature_defs
feature_matrix, feature_defs = ft.dfs(entityset=es,
target_entity="transactions",
max_depth=5)
feature_matrix
feature_defs
```
# Feature Selection
```
X = feature_matrix.loc[:,feature_matrix.columns != 'device']
y = feature_matrix['device']
import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings('ignore')
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
categorical_feature_mask = X.dtypes==object
categorical_cols = X.columns[categorical_feature_mask].tolist()
X[categorical_cols] = X[categorical_cols].apply(lambda col:encoder.fit_transform(col))
from sklearn.preprocessing import OneHotEncoder
ohe = OneHotEncoder(categorical_features = categorical_feature_mask, sparse=False)
X = ohe.fit_transform(X)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0,1))
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
```
# Univariate Selection
```
from sklearn.feature_selection import SelectKBest,chi2
bestfeatures = SelectKBest(score_func=chi2, k=10)
select = bestfeatures.fit(X_train,y_train)
X_train_selected = select.transform(X_train)
print("X_train.shape: {}".format(X_train.shape))
print("X_train_selected.shape: {}".format(X_train_selected.shape))
mask = select.get_support()
mask
import matplotlib.pyplot as plt
plt.matshow(mask.reshape(1,-1),cmap='gray_r')
plt.xlabel("sample index")
plt.yticks(())
from sklearn.ensemble import RandomForestClassifier
X_test_selected = select.transform(X_test)
model = RandomForestClassifier(n_estimators=100,random_state=42)
model.fit(X_train,y_train)
print("Score with all features: {:.3f}".format(model.score(X_test,y_test)))
model.fit(X_train_selected,y_train)
print("Score with selected features: {:.3f}".format(model.score(X_test_selected,y_test)))
```
# Feature Importance
```
from sklearn.ensemble import ExtraTreesClassifier
model = ExtraTreesClassifier()
model.fit(X_train,y_train)
model.feature_importances_
feat_importances = pd.Series(model.feature_importances_)
feat_importances.nlargest(10).plot(kind='barh')
plt.show()
```
# Model-Based Feature Selection
```
from sklearn.feature_selection import SelectFromModel
from sklearn.ensemble import RandomForestClassifier
select = SelectFromModel(RandomForestClassifier(n_estimators=100,random_state=42),threshold="mean")
select.fit(X_train,y_train)
X_train_selected = select.transform(X_train)
print("X_train.shape: {}".format(X_train.shape))
print("X_train_selected.shape: {}".format(X_train_selected.shape))
mask = select.get_support()
plt.matshow(mask.reshape(1,-1),cmap="gray_r")
plt.xlabel("Sample index")
plt.yticks(())
plt.show()
X_test_selected = select.transform(X_test)
model = RandomForestClassifier(n_estimators=100,random_state=42)
model.fit(X_train,y_train)
print("Score with all features: {:.3f}".format(model.score(X_test,y_test)))
model.fit(X_train_selected,y_train)
print("Score with selected features: {:.3f}".format(model.score(X_test_selected,y_test)))
```
# Recursive Feature Elimination
```
from sklearn.feature_selection import RFE
select = RFE(RandomForestClassifier(n_estimators=100,random_state=42),n_features_to_select=10)
select.fit(X_train,y_train)
X_train_selected = select.transform(X_train)
print("X_train.shape: {}".format(X_train.shape))
print("X_train_selected.shape: {}".format(X_train_selected.shape))
mask = select.get_support()
plt.matshow(mask.reshape(1,-1),cmap='gray_r')
plt.xlabel("Sample index")
plt.yticks(())
plt.show()
X_test_selected = select.transform(X_test)
model.fit(X_train,y_train)
print("Score with all features: {:.3f}".format(model.score(X_test,y_test)))
model.fit(X_train_selected,y_train)
print("Score with selected features: {:.3f}".format(model.score(X_test_selected,y_test)))
```
# Principal Components Regression
Principal Components Regression is a technique for analyzing multiple regression data that suffer from multicollinearity. When multicollinearity occurs, least squares estimates are unbiased, but their variances are large, so they may be far from the true values. By adding a degree of bias to the regression estimates, principal components regression reduces the standard errors, in the hope that the net effect is more reliable estimates. Another biased regression technique, ridge regression, is also available in NCSS and is the more popular of the two methods; principal components regression additionally provides dimensionality reduction.
Multicollinearity is discussed both in the Multiple Regression chapter and in the Ridge Regression chapter, so we will not repeat the discussion here. However, it is important to understand the impact of multicollinearity so that you can decide whether some evasive action (like principal components regression) would be beneficial.
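To make the idea concrete, here is a minimal sketch of principal components regression on synthetic, collinear data: a PCA step chained with ordinary least squares. The data and the choice of two components are illustrative assumptions, not part of the analysis below.
```
# PCR sketch: project onto principal components, then fit ordinary least squares
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.05, size=200)   # nearly collinear with x1
x3 = rng.normal(size=200)
X_demo = np.column_stack([x1, x2, x3])
y_demo = 3 * x1 + 2 * x3 + rng.normal(scale=0.5, size=200)

pcr = make_pipeline(PCA(n_components=2), LinearRegression())
pcr.fit(X_demo, y_demo)
print(pcr.score(X_demo, y_demo))   # R^2 of the fitted pipeline
```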
Import libraries
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
import pandas as pd
from sklearn import metrics
from sklearn.metrics import r2_score
```
Import dataset
```
dataset = pd.read_csv('/home/webtunix/Desktop/Regression/random.csv')
print(len(dataset))
```
Split dataset into x and y
```
x = dataset.iloc[:,1:4].values
y = dataset.iloc[:,4].values
```
Split dataset into training and testing sets
```
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size = 0.3)
```
Apply PCA model
```
model = PCA(n_components=3)
model.fit(X_train,y_train)
```
Score of model
```
model.score(X_train,y_train)
```
Variance ratio of model
```
print (model.explained_variance_ratio_)
```
Components of model
```
print(model.components_)
```
# Research Infinite Solutions LLP
by Research Infinite Solutions (https://www.ris-ai.com//)
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
# Part 1 - Becoming Familiar with The Data
The steps he is following are:
- **Understanding the problem**. Look at each variable and understand its meaning and importance.
- **Univariable study**. Focus on the dependent variable.
- **Multivariate study**. Try to explore how dependent and independent variables relate.
- **Basic Cleaning**. Clean the dataset and handle missing data.
- **Test Assumptions**. We'll check whether the dataset meets the assumptions required by most multivariate techniques.
## 0. Install the necessary packages.
```
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from scipy.stats import norm
from sklearn.preprocessing import StandardScaler
from scipy import stats
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
df_train = pd.read_csv("data/train.csv")
# Check the column names
# df_train.columns
```
## 1. Understanding the Problem
In this step it is useful to take a look at each one of the variables involved in the dataset.
He suggests creating an Excel spreadsheet with the following columns:
- Variable
- Type
- Segment: we can identify three possible segments: building, space, or location.
- Expectation: our expectation about the variable influence in "SalePrice". We can use a categorical scale with "High", "Medium", and "Low".
- Conclusion
- Comments
I do that in the DataOverview.csv file inside the data folder.
## 2. Analysis of SalePrice Variable
```
df_train['SalePrice'].describe()
sns.displot(df_train['SalePrice']);
print("Skewness: %f" % df_train['SalePrice'].skew())
print("Kurtosis: %f" % df_train['SalePrice'].kurt())
#scatter plot grlivarea/saleprice
var = 'GrLivArea'
data = pd.concat([df_train['SalePrice'], df_train[var]], axis=1)
data.plot.scatter(x=var, y='SalePrice', ylim=(0,800000));
#box plot overallqual/saleprice
var = 'OverallQual'
data = pd.concat([df_train['SalePrice'], df_train[var]], axis=1)
f, ax = plt.subplots(figsize=(8, 6))
fig = sns.boxplot(x=var, y="SalePrice", data=data)
fig.axis(ymin=0, ymax=800000);
```
## 3. Multivariate Study
```
# Correlation matrix
# This also gives an overview of multicollinearity: when two variables convey the same information,
# we can drop one of them
corrmat = df_train.corr()
f, ax = plt.subplots(figsize = (18, 12))
sns.heatmap(corrmat, vmax = .8, square = True);
# saleprice correlation matrix
k = 10 # number of variables for heatmap
cols = corrmat.nlargest(k, 'SalePrice')['SalePrice'].index
cm = np.corrcoef(df_train[cols].values.T)
sns.set(font_scale = 1.25)
hm = sns.heatmap(cm, cbar = True, annot = True, square = True, fmt = '.2f', annot_kws={'size':10}, yticklabels=cols.values, xticklabels=cols.values)
plt.show()
#scatterplot
sns.set()
cols = ['SalePrice', 'OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'FullBath', 'YearBuilt']
sns.pairplot(df_train[cols], size = 2.5)
plt.show();
```
# CNN Exploration
```
# Imports
import os
import librosa
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
from tensorflow.python.keras import utils
from keras.utils import to_categorical
# Reading in the data
mel_specs = pd.read_csv('../data/genre_mel_specs_clean.csv')
# First 5 rows for reference
mel_specs.head()
```
## Data Preprocessing
### Function to Get a Subset of the Genres
```
def get_genre_subset(data, genre_subset):
'''
This function takes in a dataframe and a list of genres and returns a new dataframe only including
the genres in the given list. Its index is reset and new labels are created so that the labels are 0
through one less than the number of genres.
'''
# Filtering the dataframe for the subset of the genres and resetting the index
df = data.loc[data['labels'].isin(genre_subset)]
df = df.reset_index().drop(columns=['index'])
# Creating a new label dictionary
new_label_dict = {}
for i in range(len(genre_subset)):
new_label_dict[genre_subset[i]] = i
# Changing labels to be the new labels
df['y'] = df['labels'].map(new_label_dict)
return df
```
### Function to Preprocess the Features and Targets
```
def preprocess_mel_spec_data(data, genre_subset):
'''
This function takes in a dataframe of audio files and a list of genres,
calls the function get_genre_subset to get a dataframe including only the given genres,
and completes all of the data preprocessing steps needed to run a neural network.
Preprocessing steps include:
1. Reshaping the mel spectrograms to their original form (128 x 660)
2. Defining the array of targets
3. Train test split
4. Standardizing the data
5. Reshaping the data to be 128 x 660 x 1, where the 1 represents a single color channel
6. One-hot-encoding target data
Parameters:
data (DataFrame): a dataframe of audio files, flattened mel spectrograms, and genre labels
genre_subset (list): a list of genres included in the dataframe
Returns:
X_train (array): training set of features
X_test (array): testing set of features
y_train (array): training set of targets
y_test (array): testing set of targets
'''
# Getting a subset of the genres using our genre_subset function
subset = get_genre_subset(data, genre_subset)
# Dropping label columns to prepare our feature vector
specs = subset.drop(columns=['labels', 'y'])
# Reshaping the arrays to their original "image" form
X = []
for i in range(len(genre_subset)*100):
X.append(np.array(specs.iloc[i]).reshape(128,660))
# Converting list X to an array
X = np.array(X)
# Defining our targets
y = subset.loc[subset['labels'].isin(genre_subset), 'y'].values
# train test split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, stratify=y, test_size=.2)
# Scaling our data to be between 0 and 1
X_train /= -80
X_test /= -80
# Reshaping images to be 128 x 660 x 1
X_train = X_train.reshape(X_train.shape[0], 128, 660, 1)
X_test = X_test.reshape(X_test.shape[0], 128, 660, 1)
# One hot encoding our labels
y_train = to_categorical(y_train, len(genre_subset))
y_test = to_categorical(y_test, len(genre_subset))
return X_train, X_test, y_train, y_test
# List of all the genres
genre_list = {
'classical': 0,
'hiphop': 1,
'jazz': 2,
'metal': 3,
'pop': 4,
'rock': 5
}
# List of a subset of the genres
genre_subset = [
'hiphop',
'jazz',
'metal',
'pop'
]
# Using our function to get our features and targets
X_train, X_test, y_train, y_test = preprocess_mel_spec_data(mel_specs, genre_subset)
```
## CNN Model for Subset of Genres
```
np.random.seed(23456)
# Initiating an empty neural network
cnn_model = Sequential(name='cnn_1')
# Adding convolutional layer
cnn_model.add(Conv2D(filters=16,
kernel_size=(3,3),
activation='relu',
input_shape=(128,660,1)))
# Adding max pooling layer
cnn_model.add(MaxPooling2D(pool_size=(2,4)))
# Adding convolutional layer
cnn_model.add(Conv2D(filters=32,
kernel_size=(3,3),
activation='relu'))
# Adding max pooling layer
cnn_model.add(MaxPooling2D(pool_size=(2,4)))
# Adding a flattened layer to input our image data
cnn_model.add(Flatten())
# Adding a dense layer with 64 neurons
cnn_model.add(Dense(64, activation='relu'))
# Adding a dropout layer for regularization
cnn_model.add(Dropout(0.25))
# Adding an output layer with one node per genre in the subset
cnn_model.add(Dense(len(genre_subset), activation='softmax'))
# Compiling our neural network
cnn_model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
# Fitting our neural network
history = cnn_model.fit(X_train,
y_train,
batch_size=16,
validation_data=(X_test, y_test),
epochs=15)
# Checking the model summary
cnn_model.summary()
# The code in this cell was adapted from a lecture at General Assembly
# Check out our train loss and test loss over epochs.
train_loss = history.history['loss']
test_loss = history.history['val_loss']
# Set figure size.
plt.figure(figsize=(12, 8))
# Generate line plot of training, testing loss over epochs.
plt.plot(train_loss, label='Training Loss', color='blue')
plt.plot(test_loss, label='Testing Loss', color='red')
# Set title
plt.title('Training and Testing Loss by Epoch', fontsize = 25)
plt.xlabel('Epoch', fontsize = 18)
plt.ylabel('Categorical Crossentropy', fontsize = 18)
plt.xticks(range(1,11), range(1,11))
plt.legend(fontsize = 18);
# Making predictions from the cnn model
predictions = cnn_model.predict(X_test, verbose=1)
```
### Confusion Matrix
```
# Calculating the confusion matrix
# row: actual
# columns: predicted
conf_matrix = confusion_matrix(np.argmax(y_test, 1), np.argmax(predictions, 1))
conf_matrix
# Creating a dataframe of the confusion matrix with labels for readability
confusion_df = pd.DataFrame(conf_matrix)
confusion_df
# Dictionary mapping the encoded labels back to genre names
genre_labels = {
    0:'hiphop',
    1:'jazz',
    2:'metal',
    3:'pop'
}
# Renaming rows and columns with labels
confusion_df = confusion_df.rename(columns=genre_labels)
confusion_df.index = confusion_df.columns
confusion_df
```
## CNN Model for Binary Classification of Genres
```
# List of a subset of the genres
genre_subset_2 = [
'metal',
'classical'
]
# Using our function to get our features and targets
X_train, X_test, y_train, y_test = preprocess_mel_spec_data(mel_specs, genre_subset_2)
np.random.seed(23456)
# Initiating an empty neural network
cnn_model_2 = Sequential(name='cnn_2')
# Adding convolutional layer
cnn_model_2.add(Conv2D(filters=16,
kernel_size=(3,3),
activation='relu',
input_shape=(128,660,1)))
# Adding max pooling layer
cnn_model_2.add(MaxPooling2D(pool_size=(2,4)))
# Adding convolutional layer
cnn_model_2.add(Conv2D(filters=32,
kernel_size=(3,3),
activation='relu'))
# Adding max pooling layer
cnn_model_2.add(MaxPooling2D(pool_size=(2,4)))
# Adding a flattened layer to input our image data
cnn_model_2.add(Flatten())
# Adding a dense layer with 64 neurons
cnn_model_2.add(Dense(64, activation='relu'))
# Adding a dropout layer for regularization
cnn_model_2.add(Dropout(0.25))
# Adding an output layer
cnn_model_2.add(Dense(2, activation='softmax'))
# Compiling our neural network
cnn_model_2.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
# Fitting our neural network
history = cnn_model_2.fit(X_train,
y_train,
batch_size=16,
validation_data=(X_test, y_test),
epochs=15)
# Checking the model summary
cnn_model_2.summary()
# The code in this cell was adapted from a lecture at General Assembly
# Check out our train loss and test loss over epochs.
train_loss = history.history['loss']
test_loss = history.history['val_loss']
# Set figure size.
plt.figure(figsize=(12, 8))
# Generate line plot of training, testing loss over epochs.
plt.plot(train_loss, label='Training Loss', color='blue')
plt.plot(test_loss, label='Testing Loss', color='red')
# Set title
plt.title('Training and Testing Loss by Epoch', fontsize = 25)
plt.xlabel('Epoch', fontsize = 18)
plt.ylabel('Categorical Crossentropy', fontsize = 18)
plt.xticks(range(1,11), range(1,11))
plt.legend(fontsize = 18);
# Making predictions from the cnn model
predictions_2 = cnn_model_2.predict(X_test, verbose=1)
```
### Confusion Matrix
```
# Calculating the confusion matrix
# row: actual
# columns: predicted
conf_matrix_2 = confusion_matrix(np.argmax(y_test, 1), np.argmax(predictions_2, 1))
conf_matrix_2
# Creating a dataframe of the confusion matrix with labels for readability
confusion_df_2 = pd.DataFrame(conf_matrix_2)
# List of a subset of the genres
genre_labels_2 = {
0:'metal',
1:'classical'
}
# Renaming rows and columns with labels
confusion_df_2 = confusion_df_2.rename(columns=genre_labels_2)
confusion_df_2.index = confusion_df_2.columns
confusion_df_2
```
_Lambda School Data Science — Regression 2_
This sprint, your project is Caterpillar Tube Pricing: Predict the prices suppliers will quote for industrial tube assemblies.
# Log-Linear Regression, Feature Engineering
#### Objectives
- log-transform regression target with right-skewed distribution
- use regression metric: RMSLE
- do feature engineering with relational data
## Process
#### Francois Chollet, [Deep Learning with Python](https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/README.md), Chapter 4: Fundamentals of machine learning, "A universal workflow of machine learning"
> **1. Define the problem at hand and the data on which you’ll train.** Collect this data, or annotate it with labels if need be.
> **2. Choose how you’ll measure success on your problem.** Which metrics will you monitor on your validation data?
> **3. Determine your evaluation protocol:** hold-out validation? K-fold validation? Which portion of the data should you use for validation?
> **4. Develop a first model that does better than a basic baseline:** a model with statistical power.
> **5. Develop a model that overfits.** The universal tension in machine learning is between optimization and generalization; the ideal model is one that stands right at the border between underfitting and overfitting; between undercapacity and overcapacity. To figure out where this border lies, first you must cross it.
> **6. Regularize your model and tune its hyperparameters, based on performance on the validation data.** Repeatedly modify your model, train it, evaluate on your validation data (not the test data, at this point), modify it again, and repeat, until the model is as good as it can get.
> **Iterate on feature engineering: add new features, or remove features that don’t seem to be informative.** Once you’ve developed a satisfactory model configuration, you can train your final production model on all the available data (training and validation) and evaluate it one last time on the test set.
## Define the problem 🚜
#### [Description](https://www.kaggle.com/c/caterpillar-tube-pricing/overview/description)
> Like snowflakes, it's difficult to find two tubes in Caterpillar's diverse catalogue of machinery that are exactly alike. Tubes can vary across a number of dimensions, including base materials, number of bends, bend radius, bolt patterns, and end types.
> Currently, Caterpillar relies on a variety of suppliers to manufacture these tube assemblies, each having their own unique pricing model. This competition provides detailed tube, component, and annual volume datasets, and challenges you to predict the price a supplier will quote for a given tube assembly.
## Define the data on which you'll train
#### [Data Description](https://www.kaggle.com/c/caterpillar-tube-pricing/data)
> The dataset is comprised of a large number of relational tables that describe the physical properties of tube assemblies. You are challenged to combine the characteristics of each tube assembly with supplier pricing dynamics in order to forecast a quote price for each tube. The quote price is labeled as cost in the data.
## Get data
### Option 1. Kaggle web UI
Sign in to Kaggle and go to the [Caterpillar Tube Pricing](https://www.kaggle.com/c/caterpillar-tube-pricing) competition. Go to the Data page. After you have accepted the rules of the competition, use the download buttons to download the data.
### Option 2. Kaggle API
1. [Follow these instructions](https://github.com/Kaggle/kaggle-api#api-credentials) to create a Kaggle “API Token” and download your `kaggle.json` file.
2. Put `kaggle.json` in the correct location.
- If you're using Anaconda, put the file in the directory specified in the [instructions](https://github.com/Kaggle/kaggle-api#api-credentials).
- If you're using Google Colab, upload the file to your Google Drive, and run this cell:
```
from google.colab import drive
drive.mount('/content/drive')
%env KAGGLE_CONFIG_DIR=/content/drive/My Drive/
```
3. Install the Kaggle API package.
```
pip install kaggle
```
4. After you have accepted the rules of the competition, use the Kaggle API package to get the data.
```
kaggle competitions download -c caterpillar-tube-pricing
```
### Option 3. Google Drive
Download [zip file](https://drive.google.com/uc?export=download&id=1oGky3xR6133pub7S4zIEFbF4x1I87jvC) from Google Drive.
```
# from google.colab import files
# files.upload()
# !unzip caterpillar-tube-pricing.zip
# !unzip data.zip
```
#### Get filenames & shapes
[Python Standard Library: glob](https://docs.python.org/3/library/glob.html)
> The `glob` module finds all the pathnames matching a specified pattern
```
from glob import glob
import pandas as pd
for path in glob('competition_data/*.csv'):
df = pd.read_csv(path)
print(path, df.shape)
```
## Choose how you'll measure success on your problem
> Which metrics will you monitor on your validation data?
#### [Evaluation](https://www.kaggle.com/c/caterpillar-tube-pricing/overview/evaluation)
> Submissions are evaluated on the Root Mean Squared Logarithmic Error (RMSLE). The RMSLE is calculated as
>
> $\sqrt{\frac{1}{n} \sum_{i=1}^{n}\left(\log \left(p_{i}+1\right)-\log \left(a_{i}+1\right)\right)^{2}}$
>
> Where:
>
> - $n$ is the number of price quotes in the test set
> - $p_i$ is your predicted price
> - $a_i$ is the actual price
> - $log(x)$ is the natural logarithm
#### [Scikit-Learn User Guide](https://scikit-learn.org/stable/modules/model_evaluation.html#mean-squared-log-error)
> The `mean_squared_log_error` function is best to use when targets have exponential growth, such as population counts, average sales of a commodity over a span of years etc. Note that this metric penalizes an under-predicted estimate greater than an over-predicted estimate.
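To make the metric concrete, here is a small sketch of computing RMSLE with scikit-learn and directly from the formula above; the price arrays are made-up numbers, not competition data.
```
import numpy as np
from sklearn.metrics import mean_squared_log_error

actual = np.array([10.0, 25.0, 4.0])      # illustrative actual prices
predicted = np.array([12.0, 20.0, 5.0])   # illustrative predicted prices

rmsle = np.sqrt(mean_squared_log_error(actual, predicted))
rmsle_by_hand = np.sqrt(np.mean((np.log1p(predicted) - np.log1p(actual)) ** 2))
print(rmsle, rmsle_by_hand)   # the two values agree
```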
## Determine your evaluation protocol
> Which portion of the data should you use for validation?
#### Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)
> You will want to create your own training and validation sets (by splitting the Kaggle “training” data). You will just use your smaller training set (a subset of Kaggle’s training data) for building your model, and you can evaluate it on your validation set (also a subset of Kaggle’s training data) before you submit to Kaggle.
> When is a random subset not good enough?
> - Time series
> - New people, new boats, new…
#### Does the test set have different dates?
#### Does the test set have different tube assemblies?
#### Make the validation set like the test set
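One way to make the validation set resemble the test set is to split on the assembly identifier rather than on rows, so no tube assembly appears on both sides. A sketch with scikit-learn's `GroupShuffleSplit`, assuming the quotes have been read into a DataFrame named `train` with a `tube_assembly_id` column:
```
from sklearn.model_selection import GroupShuffleSplit

# Hold out 20% of tube assemblies (not 20% of rows)
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, val_idx = next(gss.split(train, groups=train['tube_assembly_id']))
train_split, val_split = train.iloc[train_idx], train.iloc[val_idx]

# No assembly appears in both splits
assert set(train_split['tube_assembly_id']).isdisjoint(set(val_split['tube_assembly_id']))
```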
## Begin with baselines for regression
## Develop a first model that does better than a basic baseline
### Fit Random Forest with 1 feature: `quantity`
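The original cells for this step are not included above; as a rough sketch only, reusing the illustrative `train_split`/`val_split` from the previous sketch and assuming `quantity` and `cost` columns:
```
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

features = ['quantity']

model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(train_split[features], train_split['cost'])

val_pred = model.predict(val_split[features])
print(np.sqrt(mean_squared_error(val_split['cost'], val_pred)))   # RMSE in dollars
```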
## Log-transform regression target with right-skewed distribution
### Plot right-skewed distribution
#### Terence Parr & Jeremy Howard, [The Mechanics of Machine Learning, Chapter 5.5](https://mlbook.explained.ai/prep.html#logtarget)
> Transforming the target variable (using the mathematical log function) into a tighter, more uniform space makes life easier for any model.
> The only problem is that, while easy to execute, understanding why taking the log of the target variable works and how it affects the training/testing process is intellectually challenging. You can skip this section for now, if you like, but just remember that this technique exists and check back here if needed in the future.
> Optimally, the distribution of prices would be a narrow “bell curve” distribution without a tail. This would make predictions based upon average prices more accurate. We need a mathematical operation that transforms the widely-distributed target prices into a new space. The “price in dollars space” has a long right tail because of outliers and we want to squeeze that space into a new space that is normally distributed (“bell curved”). More specifically, we need to shrink large values a lot and smaller values a little. That magic operation is called the logarithm or log for short.
> To make actual predictions, we have to take the exp of model predictions to get prices in dollars instead of log dollars.
#### Wikipedia, [Logarithm](https://en.wikipedia.org/wiki/Logarithm)
> Addition, multiplication, and exponentiation are three fundamental arithmetic operations. Addition can be undone by subtraction. Multiplication can be undone by division. The idea and purpose of **logarithms** is also to **undo** a fundamental arithmetic operation, namely raising a number to a certain power, an operation also known as **exponentiation.**
> For example, raising 2 to the third power yields 8.
> The logarithm (with respect to base 2) of 8 is 3, reflecting the fact that 2 was raised to the third power to get 8.
### Use Numpy for exponents and logarithms functions
- https://docs.scipy.org/doc/numpy/reference/routines.math.html#exponents-and-logarithms
### Refit model with log-transformed target
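A sketch of the refit, continuing the assumptions above: train on `log1p(cost)`, score with RMSE in log space (which is the RMSLE of the back-transformed predictions), and use `expm1` to return to dollars.
```
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

y_train_log = np.log1p(train_split['cost'])
y_val_log = np.log1p(val_split['cost'])

model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(train_split[features], y_train_log)

val_pred_log = model.predict(val_split[features])
rmsle = np.sqrt(mean_squared_error(y_val_log, val_pred_log))   # RMSE in log space = RMSLE
val_pred_dollars = np.expm1(val_pred_log)                      # back-transform to dollars
print(rmsle)
```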
## Interlude: Moore's Law dataset
#### Background
- https://en.wikipedia.org/wiki/Moore%27s_law
- https://en.wikipedia.org/wiki/Transistor_count
#### Scrape HTML tables with Pandas!
- https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_html.html
- https://medium.com/@ageitgey/quick-tip-the-easiest-way-to-grab-data-out-of-a-web-page-in-python-7153cecfca58
#### More web scraping options
- https://automatetheboringstuff.com/chapter11/
```
import numpy as np
import seaborn as sns

# Scrape data
tables = pd.read_html('https://en.wikipedia.org/wiki/Transistor_count', header=0)
moore = tables[0]
moore = moore[['Date of introduction', 'Transistor count']].dropna()
# Clean data
for column in moore:
moore[column] = (moore[column]
.str.split('[').str[0] # Remove citations
.str.replace(r'\D','') # Remove non-digit characters
.astype(int))
moore = moore.sort_values(by='Date of introduction')
# Plot distribution of transistor counts
sns.distplot(moore['Transistor count']);
# Plot relationship between date & transistors
moore.plot(x='Date of introduction', y='Transistor count', kind='scatter', alpha=0.5);
# Log-transform the target
moore['log(Transistor count)'] = np.log1p(moore['Transistor count'])
# Plot distribution of log-transformed target
sns.distplot(moore['log(Transistor count)']);
# Plot relationship between date & log-transformed target
moore.plot(x='Date of introduction', y='log(Transistor count)', kind='scatter', alpha=0.5);
# Fit Linear Regression with log-transformed target
from sklearn.linear_model import LinearRegression
model = LinearRegression()
X = moore[['Date of introduction']]
y_log = moore['log(Transistor count)']
model.fit(X, y_log)
y_pred_log = model.predict(X)
# Plot line of best fit, in units of log-transistors
ax = moore.plot(x='Date of introduction', y='log(Transistor count)', kind='scatter', alpha=0.5)
ax.plot(X, y_pred_log);
# Convert log-transistors to transistors
y_pred = np.expm1(y_pred_log)
# Plot line of best fit, in units of transistors
ax = moore.plot(x='Date of introduction', y='Transistor count', kind='scatter', alpha=0.5)
ax.plot(X, y_pred);
```
# Back to Caterpillar 🚜
### Select more features
#### [Data Description](https://www.kaggle.com/c/caterpillar-tube-pricing/data)
> **train_set.csv and test_set.csv**
> This file contains information on price quotes from our suppliers. Prices can be quoted in 2 ways: bracket and non-bracket pricing. Bracket pricing has multiple levels of purchase based on quantity (in other words, the cost is given assuming a purchase of quantity tubes). Non-bracket pricing has a minimum order amount (min_order) for which the price would apply. Each quote is issued with an annual_usage, an estimate of how many tube assemblies will be purchased in a given year.
```
# !pip install category_encoders
```
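A sketch of pulling a few of the quote-level columns described above into a feature matrix, reusing the illustrative split from earlier; the column names are assumed to match `train_set.csv`, and the encoder choice is just one option.
```
import category_encoders as ce

# Quote-level columns from the description above (names assumed to match train_set.csv)
features = ['supplier', 'quantity', 'annual_usage', 'bracket_pricing']

X_train = train_split[features]
X_val = val_split[features]

# Ordinal-encode the non-numeric columns so tree models can use them
encoder = ce.OrdinalEncoder(cols=['supplier', 'bracket_pricing'])
X_train = encoder.fit_transform(X_train)
X_val = encoder.transform(X_val)
```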
## Do feature engineering with relational data
#### [Data Description](https://www.kaggle.com/c/caterpillar-tube-pricing/data)
> The dataset is comprised of a large number of relational tables that describe the physical properties of tube assemblies. You are challenged to combine the characteristics of each tube assembly with supplier pricing dynamics in order to forecast a quote price for each tube.
> **tube.csv**
> This file contains information on tube assemblies, which are the primary focus of the competition. Tube Assemblies are made of multiple parts. The main piece is the tube which has a specific diameter, wall thickness, length, number of bends and bend radius. Either end of the tube (End A or End X) typically has some form of end connection allowing the tube assembly to attach to other features. Special tooling is typically required for short end straight lengths (end_a_1x, end_a_2x refer to if the end length is less than 1 times or 2 times the tube diameter, respectively). Other components can be permanently attached to a tube such as bosses, brackets or other custom features.
```
for path in glob('competition_data/*.csv'):
df = pd.read_csv(path)
shared_columns = set(df.columns) & set(train.columns)
if shared_columns:
print(path, df.shape)
print(df.columns.tolist(), '\n')
```
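Once the shared key is known, the per-tube attributes can be merged onto each quote. A sketch, assuming the shared column printed above is `tube_assembly_id`:
```
# Left-join tube geometry onto each price quote (one row per quote is preserved)
tube = pd.read_csv('competition_data/tube.csv')
train = train.merge(tube, on='tube_assembly_id', how='left')
print(train.shape)
```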
# Assignment
- Start a clean notebook.
- Get the [Caterpillar data from Kaggle](https://www.kaggle.com/c/caterpillar-tube-pricing/data).
- Do train/validate/test split.
- Select features from `train_set.csv`, `tube.csv`, and at least one more file.
- Fit a model.
- Get your validation RMSLE (or RMSE with log-transformed targets).
- [Submit](https://www.kaggle.com/c/caterpillar-tube-pricing/submit) your predictions to the Kaggle competition.
- Commit your notebook to your fork of the GitHub repo.
## Stretch Goals
- Improve your scores on Kaggle.
- Make visualizations and share on Slack.
- Look at [Kaggle Kernels](https://www.kaggle.com/c/caterpillar-tube-pricing/kernels) for ideas about feature engineering and visualization.
Read [Better Explained](https://betterexplained.com/) Exponents & Logs series:
1. [An Intuitive Guide To Exponential Functions & e](https://betterexplained.com/articles/an-intuitive-guide-to-exponential-functions-e/)
2. [Demystifying the Natural Logarithm (ln)](https://betterexplained.com/articles/demystifying-the-natural-logarithm-ln/)
3. [A Visual Guide to Simple, Compound and Continuous Interest Rates](https://betterexplained.com/articles/a-visual-guide-to-simple-compound-and-continuous-interest-rates/)
4. [Common Definitions of e (Colorized)](https://betterexplained.com/articles/definitions-of-e-colorized/)
5. [Understanding Exponents (Why does 0^0 = 1?)](https://betterexplained.com/articles/understanding-exponents-why-does-00-1/)
6. [Using Logarithms in the Real World](https://betterexplained.com/articles/using-logs-in-the-real-world/)
7. [How To Think With Exponents And Logarithms](https://betterexplained.com/articles/think-with-exponents/)
8. [Understanding Discrete vs. Continuous Growth](https://betterexplained.com/articles/understanding-discrete-vs-continuous-growth/)
9. [What does an exponent really mean?](https://betterexplained.com/articles/what-does-an-exponent-mean/)
10. [Q: Why is e special? (2.718..., not 2, 3.7 or another number?)](https://betterexplained.com/articles/q-why-is-e-special-2-718-not-other-number/)
|
github_jupyter
|
from google.colab import drive
drive.mount('/content/drive')
%env KAGGLE_CONFIG_DIR=/content/drive/My Drive/
```
3. Install the Kaggle API package.
4. After you have accepted the rules of the competiton, use the Kaggle API package to get the data.
### Option 3. Google Drive
Download [zip file](https://drive.google.com/uc?export=download&id=1oGky3xR6133pub7S4zIEFbF4x1I87jvC) from Google Drive.
#### Get filenames & shapes
[Python Standard Library: glob](https://docs.python.org/3/library/glob.html)
> The `glob` module finds all the pathnames matching a specified pattern
## Choose how you'll measure success on your problem
> Which metrics will you monitor on your validation data?
#### [Evaluation](https://www.kaggle.com/c/caterpillar-tube-pricing/overview/evaluation)
> Submissions are evaluated one the Root Mean Squared Logarithmic Error (RMSLE). The RMSLE is calculated as
>
> $\sqrt{\frac{1}{n} \sum_{i=1}^{n}\left(\log \left(p_{i}+1\right)-\log \left(a_{i}+1\right)\right)^{2}}$
>
> Where:
>
> - $n$ is the number of price quotes in the test set
> - $p_i$ is your predicted price
> - $a_i$ is the actual price
> - $log(x)$ is the natural logarithm
#### [Scikit-Learn User Guide](https://scikit-learn.org/stable/modules/model_evaluation.html#mean-squared-log-error)
> The `mean_squared_log_error` function is best to use when targets have exponential growth, such as population counts, average sales of a commodity over a span of years etc. Note that this metric penalizes an under-predicted estimate greater than an over-predicted estimate.
## Determine your evaluation protocol
> Which portion of the data should you use for validation?
#### Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)
> You will want to create your own training and validation sets (by splitting the Kaggle “training” data). You will just use your smaller training set (a subset of Kaggle’s training data) for building your model, and you can evaluate it on your validation set (also a subset of Kaggle’s training data) before you submit to Kaggle.
> When is a random subset not good enough?
> - Time series
> - New people, new boats, new…
#### Does the test set have different dates?
#### Does the test set have different tube assemblies?
#### Make the validation set like the test set
## Begin with baselines for regression
## Develop a first model that does better than a basic baseline
### Fit Random Forest with 1 feature: `quantity`
## Log-transform regression target with right-skewed distribution
### Plot right-skewed distribution
#### Terence Parr & Jeremy Howard, [The Mechanics of Machine Learning, Chapter 5.5](https://mlbook.explained.ai/prep.html#logtarget)
> Transforming the target variable (using the mathematical log function) into a tighter, more uniform space makes life easier for any model.
> The only problem is that, while easy to execute, understanding why taking the log of the target variable works and how it affects the training/testing process is intellectually challenging. You can skip this section for now, if you like, but just remember that this technique exists and check back here if needed in the future.
> Optimally, the distribution of prices would be a narrow “bell curve” distribution without a tail. This would make predictions based upon average prices more accurate. We need a mathematical operation that transforms the widely-distributed target prices into a new space. The “price in dollars space” has a long right tail because of outliers and we want to squeeze that space into a new space that is normally distributed (“bell curved”). More specifically, we need to shrink large values a lot and smaller values a little. That magic operation is called the logarithm or log for short.
> To make actual predictions, we have to take the exp of model predictions to get prices in dollars instead of log dollars.
#### Wikipedia, [Logarithm](https://en.wikipedia.org/wiki/Logarithm)
> Addition, multiplication, and exponentiation are three fundamental arithmetic operations. Addition can be undone by subtraction. Multiplication can be undone by division. The idea and purpose of **logarithms** is also to **undo** a fundamental arithmetic operation, namely raising a number to a certain power, an operation also known as **exponentiation.**
> For example, raising 2 to the third power yields 8.
> The logarithm (with respect to base 2) of 8 is 3, reflecting the fact that 2 was raised to the third power to get 8.
### Use Numpy for exponents and logarithms functions
- https://docs.scipy.org/doc/numpy/reference/routines.math.html#exponents-and-logarithms
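A couple of one-liners illustrating the inverse pairs that matter here:

```
import numpy as np

print(np.log2(8))               # 3.0 -- 2 raised to the 3rd power is 8
print(np.exp(np.log(100.0)))    # 100.0 -- exp undoes the natural log
print(np.expm1(np.log1p(0.5)))  # 0.5 -- log1p/expm1 handle the log(x + 1) pair used for prices
```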
### Refit model with log-transformed target
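A minimal sketch of the refit, reusing the toy frame from the group-split sketch above: fit on `log1p` of the target, then invert predictions with `expm1` before scoring.

```
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_log_error

X = train[['quantity']]
y_log = np.log1p(train['cost'])             # train in "log dollars"

model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(X, y_log)

pred_dollars = np.expm1(model.predict(X))   # back to dollars before scoring
print(np.sqrt(mean_squared_log_error(train['cost'], pred_dollars)))
```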
## Interlude: Moore's Law dataset
#### Background
- https://en.wikipedia.org/wiki/Moore%27s_law
- https://en.wikipedia.org/wiki/Transistor_count
#### Scrape HTML tables with Pandas!
- https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_html.html
- https://medium.com/@ageitgey/quick-tip-the-easiest-way-to-grab-data-out-of-a-web-page-in-python-7153cecfca58
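A hedged sketch of the scrape: `read_html` returns every `<table>` it finds as a DataFrame, so the table index (and its column names) has to be checked by hand, since the Wikipedia page layout changes over time.

```
import pandas as pd

tables = pd.read_html('https://en.wikipedia.org/wiki/Transistor_count')
print(len(tables))             # number of tables found on the page
df_transistors = tables[0]     # the index of the transistor-count table is an assumption
print(df_transistors.head())
```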
#### More web scraping options
- https://automatetheboringstuff.com/chapter11/
# Back to Caterpillar 🚜
### Select more features
#### [Data Description](https://www.kaggle.com/c/caterpillar-tube-pricing/data)
> **train_set.csv and test_set.csv**
> This file contains information on price quotes from our suppliers. Prices can be quoted in 2 ways: bracket and non-bracket pricing. Bracket pricing has multiple levels of purchase based on quantity (in other words, the cost is given assuming a purchase of quantity tubes). Non-bracket pricing has a minimum order amount (min_order) for which the price would apply. Each quote is issued with an annual_usage, an estimate of how many tube assemblies will be purchased in a given year.
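A hedged sketch of selecting a few of the quote-level features named above; the file path and the `cost` target name are assumptions to verify against the competition files.

```
import pandas as pd

train_set = pd.read_csv('train_set.csv')   # path is an assumption
features = ['quantity', 'annual_usage']    # quote-level columns from the description above
X = train_set[features]
y = train_set['cost']                      # assumed name of the quoted price column
```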
## Do feature engineering with relational data
#### [Data Description](https://www.kaggle.com/c/caterpillar-tube-pricing/data)
> The dataset is comprised of a large number of relational tables that describe the physical properties of tube assemblies. You are challenged to combine the characteristics of each tube assembly with supplier pricing dynamics in order to forecast a quote price for each tube.
> **tube.csv**
> This file contains information on tube assemblies, which are the primary focus of the competition. Tube Assemblies are made of multiple parts. The main piece is the tube which has a specific diameter, wall thickness, length, number of bends and bend radius. Either end of the tube (End A or End X) typically has some form of end connection allowing the tube assembly to attach to other features. Special tooling is typically required for short end straight lengths (end_a_1x, end_a_2x refer to if the end length is less than 1 times or 2 times the tube diameter, respectively). Other components can be permanently attached to a tube such as bosses, brackets or other custom features.
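A hedged sketch of the join, reusing `train_set` from the sketch above: the quote-level rows are merged with the tube-level attributes, assuming both files share a `tube_assembly_id` key.

```
import pandas as pd

tube = pd.read_csv('tube.csv')   # path is an assumption
merged = train_set.merge(tube, on='tube_assembly_id', how='left')
print(merged.shape)
print(merged.columns.tolist())   # tube features are now available per quote
```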
# Example 0.5
In this notebook we will look at calculating and presenting some descriptive statistics. There are lots of great datasets freely available online. The dataset we will use comes from the 1994 US census and is available as part of the UCI Machine Learning Repository: http://archive.ics.uci.edu/ml/datasets/Adult
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
```
We will start by reading in the data and ensuring that the column names have been specified correctly. In this case the data are in the `adult.data` file, which is a CSV, and the metadata (the column names) are in `adult.header`. Sometimes data and metadata are combined in a single file; sometimes they are separate.
```
header_file = "adult.header"
data_file = "adult.data"
with open(header_file) as f:
    header = f.readline().strip().split(',')   # strip the newline so the last column name is clean
df = pd.read_table(data_file, delimiter = ",", names=header)
```
Word processor software will typically have functionality for creating tables. Often when working with code it is useful to be able to construct a table in plain text. An excellent tool for creating tables for LaTeX, HTML or Markdown is [Tables Generator](https://www.tablesgenerator.com/).
## Question
Fill in the values in the following table
| Variable | Value |
|-------------------------------------|:-----:|
| Number, _N_ | ? |
| Sex, female, _N_ (%) | ? (?) |
| Age [years], mean (SD) | ? (?) |
| Hours worked per week, median (IQR) | ? (?) |
If this seems too easy, why not try generating the text for a Markdown table too!
- [hint](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.value_counts.html)
- [hint](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.quantile.html)
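Following the hints, one way to compute the values, assuming the header names the columns `sex`, `age` and `hours-per-week` (check against `adult.header`; the raw string values may carry a leading space, hence the `strip`):

```
n = len(df)
n_female = (df['sex'].str.strip() == 'Female').sum()
age_mean, age_sd = df['age'].mean(), df['age'].std()
hours_median = df['hours-per-week'].median()
hours_iqr = df['hours-per-week'].quantile(0.75) - df['hours-per-week'].quantile(0.25)

print(f"N = {n}")
print(f"Female: {n_female} ({100 * n_female / n:.1f}%)")
print(f"Age: {age_mean:.1f} ({age_sd:.1f})")
print(f"Hours per week: {hours_median:.0f} ({hours_iqr:.0f})")
```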
## Question
Create a histogram of the ages with 70 bins. What do you notice?
- [hint](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.hist.html)
## Question
Create another histogram, this time ensure that there is only a single age per column. On top of the histogram draw vertical lines representing the mean and plus/minus two standard deviations. What do you notice?
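A sketch for this one, again assuming an `age` column: one bin per year of age, with the mean and ±2 SD drawn on top.

```
import matplotlib.pyplot as plt

ages = df['age']
bins = range(int(ages.min()), int(ages.max()) + 2)   # one bin per year of age
plt.hist(ages, bins=bins)

mean, sd = ages.mean(), ages.std()
for x in (mean - 2 * sd, mean, mean + 2 * sd):
    plt.axvline(x, color='red', linestyle='--')
plt.xlabel('Age [years]')
plt.ylabel('Count')
plt.show()
```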
## Question
Create a boxplot of the ages for females and males. What statistics are used to compute the size of the box, the midline, and the whiskers and points? Given the histogram above, can you predict what it will look like?
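A sketch of the grouped boxplot (assuming `age` and `sex` columns); by default the box spans the quartiles, the midline sits at the median, and the whiskers extend to 1.5 × IQR, with points beyond that drawn individually.

```
df.boxplot(column='age', by='sex')
plt.suptitle('')          # drop the automatic super-title
plt.title('Age by sex')
plt.ylabel('Age [years]')
plt.show()
```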
## Question
Draw a Tufte style boxplot of the same data. For example, in one variation (of many) the midline is replaced by a point, the box is omitted and the whiskers extend to the most extreme points.
- [hint](https://jrnold.github.io/ggthemes/reference/geom_tufteboxplot.html)
# Flights Data Exploration Challenge
In this challenge, you'll explore a real-world dataset containing flights data from the US Department of Transportation.
Let's start by loading and viewing the data.
```
import pandas as pd
df_flights = pd.read_csv('data/flights.csv')
df_flights.head()
```
The dataset contains observations of US domestic flights in 2013, and consists of the following fields:
- **Year**: The year of the flight (all records are from 2013)
- **Month**: The month of the flight
- **DayofMonth**: The day of the month on which the flight departed
- **DayOfWeek**: The day of the week on which the flight departed - from 1 (Monday) to 7 (Sunday)
- **Carrier**: The two-letter abbreviation for the airline.
- **OriginAirportID**: A unique numeric identifier for the departure airport
- **OriginAirportName**: The full name of the departure airport
- **OriginCity**: The departure airport city
- **OriginState**: The departure airport state
- **DestAirportID**: A unique numeric identifier for the destination airport
- **DestAirportName**: The full name of the destination airport
- **DestCity**: The destination airport city
- **DestState**: The destination airport state
- **CRSDepTime**: The scheduled departure time
- **DepDelay**: The number of minutes departure was delayed (flights that left ahead of schedule have a negative value)
- **DepDel15**: A binary indicator that departure was delayed by more than 15 minutes (and therefore considered "late")
- **CRSArrTime**: The scheduled arrival time
- **ArrDelay**: The number of minutes arrival was delayed (flights that arrived ahead of schedule have a negative value)
- **ArrDelay15**: A binary indicator that arrival was delayed by more than 15 minutes (and therefore considered "late")
- **Cancelled**: A binary indicator that the flight was cancelled
Your challenge is to explore the flight data to analyze possible factors that affect delays in departure or arrival of a flight.
1. Start by cleaning the data.
- Identify any null or missing data, and impute appropriate replacement values.
- Identify and eliminate any outliers in the **DepDelay** and **ArrDelay** columns.
2. Explore the cleaned data.
- View summary statistics for the numeric fields in the dataset.
- Determine the distribution of the **DepDelay** and **ArrDelay** columns.
- Use statistics, aggregate functions, and visualizations to answer the following questions:
- *What are the average (mean) departure and arrival delays?*
- *How do the carriers compare in terms of arrival delay performance?*
    - *Is there a noticeable difference in arrival delays for different days of the week?*
- *Which departure airport has the highest average departure delay?*
- *Do **late** departures tend to result in longer arrival delays than on-time departures?*
- *Which route (from origin airport to destination airport) has the most **late** arrivals?*
- *Which route has the highest average arrival delay?*
```
df_flights.shape
# Identify any null or missing data, and impute appropriate replacement values.
#finding the null values
df_flights.isnull().sum()
df_flights[df_flights.isnull().any(axis=1)][['DepDelay','DepDel15']]
# DepDel15 contains the null values, so fill them with 0
df_flights.DepDel15 = df_flights.DepDel15.fillna(0)
df_flights.isna().sum()
#Identify and eliminate any outliers in the DepDelay and ArrDelay columns.
import matplotlib.pyplot as plt
import seaborn as sns
def distribution_stats(var):
plt.figure(figsize=(16,4))
print(f'Mean :{var.mean()}\nMedian:{var.median()}\nMode:{var.mode()[0]}\nStd:{var.std()}')
print(f'Min :{var.min()}\nMax:{var.max()}')
sns.boxplot(x=var)
plt.show()
print('DepDelay')
distribution_stats(df_flights.DepDelay)
print('ArrDelay')
distribution_stats(df_flights.ArrDelay)
#Expected Result
# DepDelay
# Minimum:-63.00
# Mean:10.35
# Median:-1.00
# Mode:-3.00
# Maximum:1425.00
# ArrDelay
# Minimum:-75.00
# Mean:6.50
# Median:-3.00
# Mode:0.00
# Maximum:1440.00
```
# Removing the Outliers
```
df_flights.ArrDelay.quantile(0.01),df_flights.ArrDelay.quantile(0.90),df_flights.DepDelay.quantile(0.01),df_flights.DepDelay.quantile(0.90)
DepDelay_01pcntile = df_flights.DepDelay.quantile(0.01)
DepDelay_90pcntile = df_flights.DepDelay.quantile(0.90)
ArrDelay_01pcntile = df_flights.ArrDelay.quantile(0.01)
ArrDelay_90pcntile = df_flights.ArrDelay.quantile(0.90)
# Trim outliers for DepDelay and ArrDelay based on the 1st and 90th percentiles
df_flights = df_flights[df_flights.DepDelay < DepDelay_90pcntile]
df_flights = df_flights[df_flights.DepDelay > DepDelay_01pcntile]
df_flights = df_flights[df_flights.ArrDelay < ArrDelay_90pcntile]
df_flights = df_flights[df_flights.ArrDelay > ArrDelay_01pcntile]
print('DepDelay')
distribution_stats(df_flights.DepDelay)
print('ArrDelay')
distribution_stats(df_flights.ArrDelay)
```
# Explore the cleaned data.
- Determine the distribution of the DepDelay and ArrDelay columns.
- Use statistics, aggregate functions, and visualizations to answer the following questions:
- What are the average (mean) departure and arrival delays?
- How do the carriers compare in terms of arrival delay performance?
    - Is there a noticeable difference in arrival delays for different days of the week?
- Which departure airport has the highest average departure delay?
- Do late departures tend to result in longer arrival delays than on-time departures?
- Which route (from origin airport to destination airport) has the most late arrivals?
- Which route has the highest average arrival delay?
```
#Determine the distribution of the DepDelay and ArrDelay columns.
delayFields = ['DepDelay','ArrDelay']
for col in delayFields:
df_flights[col].plot.density()
# What are the average (mean) departure and arrival delays?
df_flights[delayFields].mean()
```
### How do the carriers compare in terms of arrival delay performance?
```
plt.figure(figsize=(16,6))
sns.barplot(data=df_flights, x=df_flights.Carrier,y=df_flights.ArrDelay)
plt.show()
plt.figure(figsize=(16,6))
sns.boxplot(data=df_flights,x=df_flights.Carrier,y=df_flights.ArrDelay)
plt.show()
```
### Is there a noticeable difference in arrival delays for different days of the week?
```
plt.figure(figsize=(16,6))
sns.barplot(data=df_flights, x=df_flights.DayOfWeek,y=df_flights.ArrDelay)
plt.show()
plt.figure(figsize=(16,6))
sns.boxplot(data=df_flights,x=df_flights.DayOfWeek,y=df_flights.ArrDelay)
plt.show()
```
### Which departure airport has the highest average departure delay?
```
df_flights.columns
highest_avg_DepDelay = df_flights.groupby('OriginAirportName')['DepDelay'].mean().sort_values(ascending=False)
highest_avg_DepDelay.plot(kind = "bar", figsize=(14,10))
```
### Do late departures tend to result in longer arrival delays than on-time departures?
```
plt.figure(figsize=(10,6))
sns.boxplot(data=df_flights,x=df_flights.DepDel15,y=df_flights.ArrDelay)
plt.show()
sns.barplot(data=df_flights,x=df_flights.DepDel15,y=df_flights.ArrDelay)
plt.show()
```
### Which route (from origin airport to destination airport) has the most **late** arrivals?
```
df_flights.columns
# showing top 5
print(df_flights.groupby(['OriginAirportName','DestAirportName'])['ArrDel15'].sum().sort_values(ascending=False)[:5])
```
### Which route has the highest average arrival delay?
```
# showing the top 5
print(df_flights.groupby(['OriginAirportName','DestAirportName'])['ArrDelay'].mean().sort_values(ascending=False)[:5])
```
```
%matplotlib inline
import pandas as pd
import numpy as np
import cv2
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from termcolor import colored
face_cascade = cv2.CascadeClassifier('/home/mckc/Downloads/opencv-2.4.13/data/haarcascades_GPU/haarcascade_frontalface_default.xml')
def load_data():
import pandas as pd
import numpy as np
from PIL import Image
import cv2
from skimage.transform import resize
train = pd.read_csv('/home/mckc/TwoClass//train.csv')
test = pd.read_csv('/home/mckc/TwoClass//test.csv')
print 'the training data shape is ',train.shape
print 'the test data shape is ', test.shape
train_faces = np.zeros((1,96,96),dtype=np.uint8)
Y_train=[]
missing = []
multiple = []
for i in range(train.shape[0]):
image = np.array(cv2.imread(train.values[i,0], cv2.CV_LOAD_IMAGE_GRAYSCALE))
#print image
faces = face_cascade.detectMultiScale(image,scaleFactor=1.2,minNeighbors=6,minSize=(70, 70))
n_faces = len(faces)
        if n_faces == 1:
for (x,y,w,h) in faces:
                fac = np.array(image)[y:(y+h),x:(x+w)]   # crop width with w, not h
out = (resize(fac,(96,96))).reshape((1,96,96))
train_faces = np.vstack((train_faces,out))
Y_train = np.append(Y_train,train.values[i,1])
        else:
            if n_faces > 1:
                multiple = np.append(multiple,i)   # more than one face detected
            else:
                missing = np.append(missing,i)     # no face detected
if i % 20==0:
print colored((float(i)/train.shape[0]*100 ,' Percentage complete'), 'green')
    print 'missing count:',len(missing),'\nmultiple images count',len(multiple)
train_faces = train_faces[1:,:,:]
test_faces = np.zeros((1,96,96),dtype=np.uint8)
Y_test = []
file_names = []
for i in range(test.shape[0]):
image = np.array(cv2.imread(test.values[i,0], cv2.CV_LOAD_IMAGE_GRAYSCALE))
faces = face_cascade.detectMultiScale(image,scaleFactor=1.2,minNeighbors=6,minSize=(70, 70))
n_faces = len(faces)
        if n_faces == 1:
for (x,y,w,h) in faces:
                fac = np.array(image)[y:(y+h),x:(x+w)]   # crop width with w, not h
out = (resize(fac,(96,96))).reshape((1,96,96))
test_faces = np.vstack((test_faces,out))
Y_test = np.append(Y_test,test.values[i,1])
file_names = np.append(file_names,test.values[i,0])
        else:
            if n_faces > 1:
                multiple = np.append(multiple,i)   # more than one face detected
            else:
                missing = np.append(missing,i)     # no face detected
if i % 20==0:
            print colored((float(i)/test.shape[0]*100 ,' Percentage complete'), 'green')
test_faces = test_faces[1:,:,:]
print len(missing),len(multiple)
print 'the training file shape',train_faces.shape,Y_train.shape
print 'the testing file shape',test_faces.shape,Y_test.shape
return train_faces,test_faces,Y_train,Y_test,file_names
def simulate(X,Y):
import scipy as sp
import scipy.ndimage
complete = np.zeros((1,96,96),dtype=np.uint8)
Y_complete = []
for i in range(len(X)):
complete = np.vstack((complete,X[i,:,:].reshape(1,96,96)))
complete = np.vstack((complete,scipy.ndimage.rotate(X[i,:,:], angle = 5,reshape=False,cval=1).reshape(1,96,96)))
complete = np.vstack((complete,scipy.ndimage.rotate(X[i,:,:], angle = 10,reshape=False,cval=1).reshape(1,96,96)))
complete = np.vstack((complete,scipy.ndimage.rotate(X[i,:,:], angle = 15,reshape=False,cval=1).reshape(1,96,96)))
complete = np.vstack((complete,scipy.ndimage.rotate(X[i,:,:], angle = -5,reshape=False,cval=1).reshape(1,96,96)))
complete = np.vstack((complete,scipy.ndimage.rotate(X[i,:,:], angle = -15,reshape=False,cval=1).reshape(1,96,96)))
complete = np.vstack((complete,scipy.ndimage.rotate(X[i,:,:], angle = -10,reshape=False,cval=1).reshape(1,96,96)))
rotated = np.fliplr(X[i,:,:])
complete = np.vstack((complete,scipy.ndimage.rotate(rotated, angle = 5,reshape=False,cval=1).reshape(1,96,96)))
complete = np.vstack((complete,scipy.ndimage.rotate(rotated, angle = 10,reshape=False,cval=1).reshape(1,96,96)))
complete = np.vstack((complete,scipy.ndimage.rotate(rotated, angle = 15,reshape=False,cval=1).reshape(1,96,96)))
complete = np.vstack((complete,scipy.ndimage.rotate(rotated, angle = -5,reshape=False,cval=1).reshape(1,96,96)))
complete = np.vstack((complete,scipy.ndimage.rotate(rotated, angle = -10,reshape=False,cval=1).reshape(1,96,96)))
complete = np.vstack((complete,scipy.ndimage.rotate(rotated, angle = -15,reshape=False,cval=1).reshape(1,96,96)))
complete = np.vstack((complete,rotated.reshape(1,96,96)))
Y_complete = np.append(Y_complete,([Y[i]]*14))
if i % 10==0:
print colored((float(i)/len(X)*100 ,' Percentage complete'),'green')
complete = complete[1:,:,:]
return complete,Y_complete
X_tr,X_tst,Y_tr,Y_tst,file_names = load_data()
import time
start_time = time.clock()
X,Y = simulate(X_tr,Y_tr)
print X.shape,Y.shape
print time.clock() - start_time, "seconds"
def standard(X):
return (X - X.mean())/X.max()
X_test = standard(X_tst)
X = standard(X)
X_normal = X.reshape(-1,9216)
X_test_normal = X_test.reshape(-1,9216)
map, Y_number = np.unique(Y, return_inverse=True)
Y_test_numer = np.unique(Y_tst, return_inverse=True)[1]
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
clf = LogisticRegression(verbose=0,n_jobs=-1)
clf.fit(X_normal,Y_number)
Y_logictic= clf.predict(X_test.reshape(-1,9216))
Y_log_vales = map[Y_logictic]
print 'Accuracy of the model is ',accuracy_score(Y_tst,Y_log_vales)
confusion_matrix(Y_log_vales,Y_tst)
recognizer = RandomForestClassifier(500,verbose=0,oob_score=True,n_jobs=-1)
recognizer.fit(X_normal,Y_number)
Y_rf= recognizer.predict(X_test.reshape(-1,9216))
Y_rf_vales = map[Y_rf]
print 'Accuracy of the model is ',accuracy_score(Y_tst,Y_rf_vales)
confusion_matrix(Y_tst,Y_rf_vales)
importances = recognizer.feature_importances_
importance_image = importances.reshape(96,96)
#plt.figure(figsize=(7,7))
plt.imshow(importance_image,cmap=cm.Greys_r)
for i in range(len(Y_test_numer)):
print file_names[i],Y_rf_vales[i]
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras import backend as K
from keras.optimizers import Adam,SGD
from keras.utils import np_utils
from keras.callbacks import EarlyStopping
early_stopping = EarlyStopping(monitor='val_loss', patience=2)
Y_Keras = np_utils.to_categorical(Y_number, 2)
# Create first network with Keras
from keras.models import Sequential
from keras.layers import Dense, Activation,Dropout
model = Sequential()
model.add(Dense(1000, input_dim=9216,activation='sigmoid'))
#model.add(Dense(500,activation='sigmoid'))
model.add(Dense(1000,activation='relu'))
model.add(Dense(2,activation='softmax'))
sgd = SGD(lr=0.0001, decay=1e-6, momentum=0.9, nesterov=True)
# Compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
import time
model.fit(X.reshape(-1,9216), Y_Keras, nb_epoch=30, batch_size=5,verbose=1
,validation_data=(X_test.reshape(-1,9216), np_utils.to_categorical(Y_test_numer, 2)))
Y_kr= model.predict_classes(X_test.reshape(-1,9216))
Y_kr_vales = map[Y_kr]
print 'Accuracy of the model is ',accuracy_score(Y_tst,Y_kr_vales)
confusion_matrix(Y_tst,Y_kr_vales)
import lasagne
from lasagne.layers.cuda_convnet import Conv2DCCLayer as Conv2DLayer
from lasagne.layers.cuda_convnet import MaxPool2DCCLayer as MaxPool2DLayer
from lasagne import layers
from lasagne.objectives import categorical_crossentropy
from lasagne.updates import nesterov_momentum
from nolearn.lasagne import BatchIterator,visualize,NeuralNet
#Conv2DLayer = layers.Conv2DLayer
#MaxPool2DLayer = layers.MaxPool2DLayer
net = NeuralNet(
layers=[
('input', layers.InputLayer),
('conv1', Conv2DLayer),
('pool1', MaxPool2DLayer),
('dropout1', layers.DropoutLayer),
('conv2', Conv2DLayer),
('pool2', MaxPool2DLayer),
('dropout2', layers.DropoutLayer),
('conv3', Conv2DLayer),
('pool3', MaxPool2DLayer),
('dropout3', layers.DropoutLayer),
('hidden4', layers.DenseLayer),
('dropout4', layers.DropoutLayer),
('hidden5', layers.DenseLayer),
('output', layers.DenseLayer),
],
input_shape=(None, 1, 96, 96),
conv1_num_filters=32, conv1_filter_size=(3, 3), pool1_pool_size=(2, 2),
dropout1_p=0.1,
conv2_num_filters=64, conv2_filter_size=(2, 2), pool2_pool_size=(2, 2),
dropout2_p=0.2,
conv3_num_filters=128, conv3_filter_size=(2, 2), pool3_pool_size=(2, 2),
dropout3_p=0.3,
hidden4_num_units=1000,
dropout4_p=0.5,
hidden5_num_units=1000,
output_nonlinearity=lasagne.nonlinearities.softmax,
output_num_units=2,
update = nesterov_momentum,
update_learning_rate=0.001,
update_momentum=0.9,
max_epochs=30,
verbose=1
)
net.fit(X.reshape(-1,1,96,96).astype(np.float32), Y_number.astype(np.uint8))
Y_las = net.predict(X_test.reshape(-1,1,96,96).astype(np.float32))   # the conv net expects (N, 1, 96, 96) float32 input
Y_las_vales = map[Y_las]
print 'Accuracy of the model is ',accuracy_score(Y_tst,Y_las_vales)
confusion_matrix(Y_tst,Y_las_vales)
def plot_loss(net):
train_loss = [row['train_loss'] for row in net.train_history_]
valid_loss = [row['valid_loss'] for row in net.train_history_]
plt.plot(train_loss, label='train loss')
plt.plot(valid_loss, label='valid loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend(loc='best')
return plt
plot_loss(net)
from PIL import Image
from skimage.transform import resize
from skimage.color import rgb2gray   # needed for the greyscale conversion below
jpgfile = Image.open("/home/mckc/Downloads/1.jpg")
grey = rgb2gray(np.array(jpgfile))
faces = face_cascade.detectMultiScale(grey.astype(np.uint8),scaleFactor=1.1,minNeighbors=3,minSize=(30, 30))
print faces
for (x,y,w,h) in faces:
    fac = np.array(grey[y:(y+h),x:(x+w)])   # crop width with w, not h
out = resize(fac,(96,96))
plt.imshow(out,cmap=cm.Greys_r)
trial = standard(out)
print 'Linear Regression Value',map,clf.predict_proba(trial.reshape(-1,9216)),map[clf.predict((trial.reshape(-1,9216)))]
print 'Random Forest Value',map,recognizer.predict_proba(trial.reshape(-1,9216)),map[recognizer
.predict((trial.reshape(-1,9216)))]
print 'Lasagne Value',map,net.predict_proba(trial.reshape(-1,1,96,96).astype(np.float16)),map[net.predict((trial.reshape(-1,1,96,96).astype(np.float16)))]
print 'Keras Value',map,model.predict(trial.reshape(-1,9216).astype(np.float64))
from PIL import Image
from skimage.transform import resize
jpgfile = Image.open("/home/mckc/Downloads/2.jpg")
grey = rgb2gray(np.array(jpgfile))
faces = face_cascade.detectMultiScale(grey.astype(np.uint8),scaleFactor=1.1,minNeighbors=4,minSize=(30, 30))
print faces
for (x,y,w,h) in faces:
    fac = np.array(grey[y:(y+h),x:(x+w)])   # crop width with w, not h
out = resize(fac,(96,96))
plt.imshow(out,cmap=cm.Greys_r)
trial = standard(out)
print 'Linear Regression Value',map,clf.predict_proba(trial.reshape(-1,9216)),map[clf.predict((trial.reshape(-1,9216)))]
print 'Random Forest Value',map,recognizer.predict_proba(trial.reshape(-1,9216)),map[recognizer
.predict((trial.reshape(-1,9216)))]
print 'Lasagne Value',map,net.predict_proba(trial.reshape(-1,1,96,96).astype(np.float16)),map[net.predict((trial.reshape(-1,1,96,96).astype(np.float16)))]
print 'Keras Value',map,model.predict(trial.reshape(-1,9216).astype(np.float64))
import sys
sys.setrecursionlimit(150000)
from keras.models import load_model
model.save('my_model.h5') # creates a HDF5 file 'my_model.h5'
del model # deletes the existing model
# returns a compiled model
# identical to the previous one
model = load_model('my_model.h5')
import cPickle
# save the classifier
with open('my_dumped_classifier.pkl', 'wb') as fid:
cPickle.dump(model, fid)
# load it again
with open('my_dumped_classifier.pkl', 'rb') as fid:
gnb_loaded = cPickle.load(fid)
model = load_model('my_model.h5')
```
```
import torch
from torchvision.transforms import Normalize
import numpy as np
import cv2
import argparse
import json
from models import hmr, SMPL
from utils.imutils import crop
from utils.renderer import Renderer
import config
import constants
def bbox_from_openpose(openpose_file, rescale=1.2, detection_thresh=0.2):
"""Get center and scale for bounding box from openpose detections."""
with open(openpose_file, 'r') as f:
keypoints = json.load(f)['people'][0]['pose_keypoints_2d']
keypoints = np.reshape(np.array(keypoints), (-1,3))
valid = keypoints[:,-1] > detection_thresh
valid_keypoints = keypoints[valid][:,:-1]
center = valid_keypoints.mean(axis=0)
bbox_size = (valid_keypoints.max(axis=0) - valid_keypoints.min(axis=0)).max()
# adjust bounding box tightness
scale = bbox_size / 200.0
scale *= rescale
return center, scale
def bbox_from_json(bbox_file):
"""Get center and scale of bounding box from bounding box annotations.
The expected format is [top_left(x), top_left(y), width, height].
"""
with open(bbox_file, 'r') as f:
bbox = np.array(json.load(f)['bbox']).astype(np.float32)
ul_corner = bbox[:2]
center = ul_corner + 0.5 * bbox[2:]
width = max(bbox[2], bbox[3])
scale = width / 200.0
# make sure the bounding box is rectangular
return center, scale
def process_image(img_file, bbox_file, openpose_file, input_res=224):
"""Read image, do preprocessing and possibly crop it according to the bounding box.
If there are bounding box annotations, use them to crop the image.
If no bounding box is specified but openpose detections are available, use them to get the bounding box.
"""
normalize_img = Normalize(mean=constants.IMG_NORM_MEAN, std=constants.IMG_NORM_STD)
img = cv2.imread(img_file)[:,:,::-1].copy() # PyTorch does not support negative stride at the moment
if bbox_file is None and openpose_file is None:
        # Assume that the person is centered in the image
height = img.shape[0]
width = img.shape[1]
center = np.array([width // 2, height // 2])
scale = max(height, width) / 200
else:
if bbox_file is not None:
center, scale = bbox_from_json(bbox_file)
elif openpose_file is not None:
center, scale = bbox_from_openpose(openpose_file)
img = crop(img, center, scale, (input_res, input_res))
img = img.astype(np.float32) / 255.
img = torch.from_numpy(img).permute(2,0,1)
norm_img = normalize_img(img.clone())[None]
return img, norm_img
parser = argparse.ArgumentParser()
parser.add_argument('--checkpoint', required=True, help='Path to pretrained checkpoint')
parser.add_argument('--img', type=str, required=True, help='Path to input image')
parser.add_argument('--bbox', type=str, default=None, help='Path to .json file containing bounding box coordinates')
parser.add_argument('--openpose', type=str, default=None, help='Path to .json containing openpose detections')
parser.add_argument('--outfile', type=str, default=None, help='Filename of output images. If not set use input filename.')
if __name__ == '__main__':
args = parser.parse_args(['--checkpoint=data/model_checkpoint.pt','--img=examples/im1010.jpg'])
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
# Load pretrained model
model = hmr(config.SMPL_MEAN_PARAMS).to(device)
checkpoint = torch.load(args.checkpoint)
model.load_state_dict(checkpoint['model'], strict=False)
# Load SMPL model
smpl = SMPL(config.SMPL_MODEL_DIR,
batch_size=1,
create_transl=False).to(device)
model.eval()
# Setup renderer for visualization
renderer = Renderer(focal_length=constants.FOCAL_LENGTH, img_res=constants.IMG_RES, faces=smpl.faces)
# Preprocess input image and generate predictions
img, norm_img = process_image(args.img, args.bbox, args.openpose, input_res=constants.IMG_RES)
with torch.no_grad():
pred_rotmat, pred_betas, pred_camera = model(norm_img.to(device))
pred_output = smpl(betas=pred_betas, body_pose=pred_rotmat[:,1:], global_orient=pred_rotmat[:,0].unsqueeze(1), pose2rot=False)
pred_vertices = pred_output.vertices
# Calculate camera parameters for rendering
camera_translation = torch.stack([pred_camera[:,1], pred_camera[:,2], 2*constants.FOCAL_LENGTH/(constants.IMG_RES * pred_camera[:,0] +1e-9)],dim=-1)
camera_translation = camera_translation[0].cpu().numpy()
pred_vertices = pred_vertices[0].cpu().numpy()
img = img.permute(1,2,0).cpu().numpy()
# Render parametric shape
img_shape = renderer(pred_vertices, [0,0,1], np.ones_like(img))
# Render side views
aroundy = cv2.Rodrigues(np.array([0, np.radians(90.), 0]))[0]
center = pred_vertices.mean(axis=0)
rot_vertices = np.dot((pred_vertices - center), aroundy) + center
# Render non-parametric shape
img_shape_side = renderer(rot_vertices, camera_translation, np.ones_like(img))
outfile = args.img.split('.')[0] if args.outfile is None else args.outfile
# Save reconstructions
cv2.imwrite(outfile + '_shape.png', 255 * img_shape[:,:,::-1])
cv2.imwrite(outfile + '_shape_side.png', 255 * img_shape_side[:,:,::-1])
# Creating an iterator based on the concept of basic rotations https://en.wikipedia.org/wiki/Rotation_matrix#Basic_rotations
t_pose = torch.zeros(1,24,3,3,device='cuda')
t_pose[:] = torch.eye(3)
t_betas = torch.zeros(1,10,device='cuda')
t_pose_model = smpl(betas=t_betas, body_pose=t_pose[:,1:], global_orient=t_pose[:,0].unsqueeze(1), pose2rot=False)
# Setup renderer for visualization
renderer = Renderer(focal_length=constants.FOCAL_LENGTH, img_res=constants.IMG_RES, faces=smpl.faces)
# Calculate camera parameters for rendering
camera_translation = torch.stack([pred_camera[:,1], pred_camera[:,2], 2*constants.FOCAL_LENGTH/(constants.IMG_RES * pred_camera[:,0] +1e-9)],dim=-1)
camera_translation = camera_translation[0].cpu().numpy()
pred_vertices = t_pose_model.vertices
pred_vertices = pred_vertices[0].cpu().numpy()
img = img.permute(1,2,0).cpu().numpy()
# Render parametric shape
img_shape = renderer(pred_vertices, camera_translation, np.ones_like(img))
if __name__ == '__main__':
args = parser.parse_args(['--checkpoint=data/model_checkpoint.pt','--img=examples/im1010.jpg'])
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
# Load pretrained model
model = hmr(config.SMPL_MEAN_PARAMS).to(device)
checkpoint = torch.load(args.checkpoint)
model.load_state_dict(checkpoint['model'], strict=False)
# Load SMPL model
smpl = SMPL(config.SMPL_MODEL_DIR,
batch_size=1,
create_transl=False).to(device)
model.eval()
# Setup renderer for visualization
renderer = Renderer(focal_length=constants.FOCAL_LENGTH, img_res=constants.IMG_RES, faces=smpl.faces)
# Preprocess input image and generate predictions
img, norm_img = process_image(args.img, args.bbox, args.openpose, input_res=constants.IMG_RES)
with torch.no_grad():
pred_rotmat, pred_betas, pred_camera = model(norm_img.to(device))
#pred_output = smpl(betas=pred_betas, body_pose=pred_rotmat[:,1:], global_orient=pred_rotmat[:,0].unsqueeze(1), pose2rot=False)
#pred_output = smpl(betas=pred_betas, body_pose=zero_pose, global_orient=pred_rotmat[:,0].unsqueeze(1), pose2rot=False)
#pred_vertices = pred_output.vertices
# Calculate camera parameters for rendering
camera_translation = torch.stack([pred_camera[:,1], pred_camera[:,2], 2*constants.FOCAL_LENGTH/(constants.IMG_RES * pred_camera[:,0] +1e-9)],dim=-1)
camera_translation = camera_translation[0].cpu().numpy()
pred_vertices = pred_vertices[0].cpu().numpy()
img = img.permute(1,2,0).cpu().numpy()
# Render parametric shape
img_shape = renderer(pred_vertices, camera_translation, img)
# Render side views
aroundy = cv2.Rodrigues(np.array([0, np.radians(90.), 0]))[0]
center = pred_vertices.mean(axis=0)
rot_vertices = np.dot((pred_vertices - center), aroundy) + center
# Render non-parametric shape
img_shape_side = renderer(rot_vertices, camera_translation, np.ones_like(img))
outfile = args.img.split('.')[0] if args.outfile is None else args.outfile
# Save reconstructions
cv2.imwrite(outfile + '_shape.png', 255 * img_shape[:,:,::-1])
cv2.imwrite(outfile + '_shape_side.png', 255 * img_shape_side[:,:,::-1])
pred_betas
import opendr
from opendr.renderer import ColoredRenderer
from opendr.lighting import LambertianPointLight
from opendr.camera import ProjectPoints
import matplotlib.pyplot as plt
%matplotlib inline
t_pose_model.vertices
m = t_pose_model
rn = ColoredRenderer()
## Assign attributes to renderer
w, h = (640, 480)
rn.camera = ProjectPoints(v=m, rt=np.zeros(3), t=np.array([0, 0, 2.]), f=np.array([w,w])/2., c=np.array([w,h])/2., k=np.zeros(5))
rn.frustum = {'near': 1., 'far': 10., 'width': w, 'height': h}
rn.set(v=m, f=m.f, bgcolor=np.zeros(3))
## Construct point light source
rn.vc = LambertianPointLight(
f=t_pose_model.f,
v=rn.v,
num_verts=len(m),
light_pos=np.array([-1000,-1000,-2000]),
vc=np.ones_like(m)*.9,
light_color=np.array([1., 1., 1.]))
## Since we are in Docker without access to X, it's better to save the images. This is easier with matplotlib than with openCV, because cv2.imwrite requires the image to be converted to a compatible form first.
import matplotlib.pyplot as plt
plt.imshow(rn.r)
t_betas
pred_betas
np.zeros(10)
pred_rotmat.size()
zero_pose = torch.zeros(1,23,3,3,device='cuda')
zero_pose[:] = torch.eye(3)
zero_pose[0,13] = torch.tensor([[0,0,1],
                                [0,1,0],
                                [-1,0,0]])   # a 90-degree rotation about the y-axis for joint 13
zero_pose
pred_output.body_pose
pred_rotmat[:,0]
pred_rotmat[:,0].size()
pred_rotmat[:,1:].size()
pred_rotmat[:,0].unsqueeze(1)
pred_rotmat[:,0].unsqueeze(1).size()
```
|
github_jupyter
|
import torch
from torchvision.transforms import Normalize
import numpy as np
import cv2
import argparse
import json
from models import hmr, SMPL
from utils.imutils import crop
from utils.renderer import Renderer
import config
import constants
def bbox_from_openpose(openpose_file, rescale=1.2, detection_thresh=0.2):
"""Get center and scale for bounding box from openpose detections."""
with open(openpose_file, 'r') as f:
keypoints = json.load(f)['people'][0]['pose_keypoints_2d']
keypoints = np.reshape(np.array(keypoints), (-1,3))
valid = keypoints[:,-1] > detection_thresh
valid_keypoints = keypoints[valid][:,:-1]
center = valid_keypoints.mean(axis=0)
bbox_size = (valid_keypoints.max(axis=0) - valid_keypoints.min(axis=0)).max()
# adjust bounding box tightness
scale = bbox_size / 200.0
scale *= rescale
return center, scale
def bbox_from_json(bbox_file):
"""Get center and scale of bounding box from bounding box annotations.
The expected format is [top_left(x), top_left(y), width, height].
"""
with open(bbox_file, 'r') as f:
bbox = np.array(json.load(f)['bbox']).astype(np.float32)
ul_corner = bbox[:2]
center = ul_corner + 0.5 * bbox[2:]
width = max(bbox[2], bbox[3])
scale = width / 200.0
# make sure the bounding box is rectangular
return center, scale
def process_image(img_file, bbox_file, openpose_file, input_res=224):
"""Read image, do preprocessing and possibly crop it according to the bounding box.
If there are bounding box annotations, use them to crop the image.
If no bounding box is specified but openpose detections are available, use them to get the bounding box.
"""
normalize_img = Normalize(mean=constants.IMG_NORM_MEAN, std=constants.IMG_NORM_STD)
img = cv2.imread(img_file)[:,:,::-1].copy() # PyTorch does not support negative stride at the moment
if bbox_file is None and openpose_file is None:
# Assume that the person is centerered in the image
height = img.shape[0]
width = img.shape[1]
center = np.array([width // 2, height // 2])
scale = max(height, width) / 200
else:
if bbox_file is not None:
center, scale = bbox_from_json(bbox_file)
elif openpose_file is not None:
center, scale = bbox_from_openpose(openpose_file)
img = crop(img, center, scale, (input_res, input_res))
img = img.astype(np.float32) / 255.
img = torch.from_numpy(img).permute(2,0,1)
norm_img = normalize_img(img.clone())[None]
return img, norm_img
parser = argparse.ArgumentParser()
parser.add_argument('--checkpoint', required=True, help='Path to pretrained checkpoint')
parser.add_argument('--img', type=str, required=True, help='Path to input image')
parser.add_argument('--bbox', type=str, default=None, help='Path to .json file containing bounding box coordinates')
parser.add_argument('--openpose', type=str, default=None, help='Path to .json containing openpose detections')
parser.add_argument('--outfile', type=str, default=None, help='Filename of output images. If not set use input filename.')
if __name__ == '__main__':
args = parser.parse_args(['--checkpoint=data/model_checkpoint.pt','--img=examples/im1010.jpg'])
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
# Load pretrained model
model = hmr(config.SMPL_MEAN_PARAMS).to(device)
checkpoint = torch.load(args.checkpoint)
model.load_state_dict(checkpoint['model'], strict=False)
# Load SMPL model
smpl = SMPL(config.SMPL_MODEL_DIR,
batch_size=1,
create_transl=False).to(device)
model.eval()
# Setup renderer for visualization
renderer = Renderer(focal_length=constants.FOCAL_LENGTH, img_res=constants.IMG_RES, faces=smpl.faces)
# Preprocess input image and generate predictions
img, norm_img = process_image(args.img, args.bbox, args.openpose, input_res=constants.IMG_RES)
with torch.no_grad():
pred_rotmat, pred_betas, pred_camera = model(norm_img.to(device))
pred_output = smpl(betas=pred_betas, body_pose=pred_rotmat[:,1:], global_orient=pred_rotmat[:,0].unsqueeze(1), pose2rot=False)
pred_vertices = pred_output.vertices
# Calculate camera parameters for rendering
camera_translation = torch.stack([pred_camera[:,1], pred_camera[:,2], 2*constants.FOCAL_LENGTH/(constants.IMG_RES * pred_camera[:,0] +1e-9)],dim=-1)
camera_translation = camera_translation[0].cpu().numpy()
pred_vertices = pred_vertices[0].cpu().numpy()
img = img.permute(1,2,0).cpu().numpy()
# Render parametric shape
img_shape = renderer(pred_vertices, [0,0,1], np.ones_like(img))
# Render side views
aroundy = cv2.Rodrigues(np.array([0, np.radians(90.), 0]))[0]
center = pred_vertices.mean(axis=0)
rot_vertices = np.dot((pred_vertices - center), aroundy) + center
# Render non-parametric shape
img_shape_side = renderer(rot_vertices, camera_translation, np.ones_like(img))
outfile = args.img.split('.')[0] if args.outfile is None else args.outfile
# Save reconstructions
cv2.imwrite(outfile + '_shape.png', 255 * img_shape[:,:,::-1])
cv2.imwrite(outfile + '_shape_side.png', 255 * img_shape_side[:,:,::-1])
# Creating an iterator based on the concept of basic rotations https://en.wikipedia.org/wiki/Rotation_matrix#Basic_rotations
t_pose = torch.zeros(1,24,3,3,device='cuda')
t_pose[:] = torch.eye(3)
t_betas = torch.zeros(1,10,device='cuda')
t_pose_model = smpl(betas=t_betas, body_pose=t_pose[:,1:], global_orient=t_pose[:,0].unsqueeze(1), pose2rot=False)
# Setup renderer for visualization
renderer = Renderer(focal_length=constants.FOCAL_LENGTH, img_res=constants.IMG_RES, faces=smpl.faces)
# Calculate camera parameters for rendering
camera_translation = torch.stack([pred_camera[:,1], pred_camera[:,2], 2*constants.FOCAL_LENGTH/(constants.IMG_RES * pred_camera[:,0] +1e-9)],dim=-1)
camera_translation = camera_translation[0].cpu().numpy()
pred_vertices = t_pose_model.vertices
pred_vertices = pred_vertices[0].cpu().numpy()
img = img.permute(1,2,0).cpu().numpy()
# Render parametric shape
img_shape = renderer(pred_vertices, camera_translation, np.ones_like(img))
if __name__ == '__main__':
args = parser.parse_args(['--checkpoint=data/model_checkpoint.pt','--img=examples/im1010.jpg'])
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
# Load pretrained model
model = hmr(config.SMPL_MEAN_PARAMS).to(device)
checkpoint = torch.load(args.checkpoint)
model.load_state_dict(checkpoint['model'], strict=False)
# Load SMPL model
smpl = SMPL(config.SMPL_MODEL_DIR,
batch_size=1,
create_transl=False).to(device)
model.eval()
# Setup renderer for visualization
renderer = Renderer(focal_length=constants.FOCAL_LENGTH, img_res=constants.IMG_RES, faces=smpl.faces)
# Preprocess input image and generate predictions
img, norm_img = process_image(args.img, args.bbox, args.openpose, input_res=constants.IMG_RES)
with torch.no_grad():
pred_rotmat, pred_betas, pred_camera = model(norm_img.to(device))
#pred_output = smpl(betas=pred_betas, body_pose=pred_rotmat[:,1:], global_orient=pred_rotmat[:,0].unsqueeze(1), pose2rot=False)
#pred_output = smpl(betas=pred_betas, body_pose=zero_pose, global_orient=pred_rotmat[:,0].unsqueeze(1), pose2rot=False)
#pred_vertices = pred_output.vertices
# Calculate camera parameters for rendering
camera_translation = torch.stack([pred_camera[:,1], pred_camera[:,2], 2*constants.FOCAL_LENGTH/(constants.IMG_RES * pred_camera[:,0] +1e-9)],dim=-1)
camera_translation = camera_translation[0].cpu().numpy()
pred_vertices = pred_vertices[0].cpu().numpy()
img = img.permute(1,2,0).cpu().numpy()
# Render parametric shape
img_shape = renderer(pred_vertices, camera_translation, img)
# Render side views
aroundy = cv2.Rodrigues(np.array([0, np.radians(90.), 0]))[0]
center = pred_vertices.mean(axis=0)
rot_vertices = np.dot((pred_vertices - center), aroundy) + center
# Render non-parametric shape
img_shape_side = renderer(rot_vertices, camera_translation, np.ones_like(img))
outfile = args.img.split('.')[0] if args.outfile is None else args.outfile
# Save reconstructions
cv2.imwrite(outfile + '_shape.png', 255 * img_shape[:,:,::-1])
cv2.imwrite(outfile + '_shape_side.png', 255 * img_shape_side[:,:,::-1])
pred_betas
import opendr
from opendr.renderer import ColoredRenderer
from opendr.lighting import LambertianPointLight
from opendr.camera import ProjectPoints
import matplotlib.pyplot as plt
%matplotlib inline
t_pose_model.vertices
m = t_pose_model
rn = ColoredRenderer()
## Assign attributes to renderer
w, h = (640, 480)
rn.camera = ProjectPoints(v=m, rt=np.zeros(3), t=np.array([0, 0, 2.]), f=np.array([w,w])/2., c=np.array([w,h])/2., k=np.zeros(5))
rn.frustum = {'near': 1., 'far': 10., 'width': w, 'height': h}
rn.set(v=m, f=m.f, bgcolor=np.zeros(3))
## Construct point light source
rn.vc = LambertianPointLight(
f=t_pose_model.f,
v=rn.v,
num_verts=len(m),
light_pos=np.array([-1000,-1000,-2000]),
vc=np.ones_like(m)*.9,
light_color=np.array([1., 1., 1.]))
## Since we are in Docker without access to X, it's better to save the images. This is easier with matplotlib than with openCV, because cv2.imwrite requires the image to be converted to a compatible form first.
import matplotlib.pyplot as plt
plt.imshow(rn.r)
t_betas
pred_betas
np.zeros(10)
pred_rotmat.size()
zero_pose = torch.zeros(1,23,3,3,device='cuda')
zero_pose[:] = torch.eye(3)
zero_pose[0,13] = torch.tensor([[0,0,1],
                                [0,1,0],
                                [-1,0,0]])
zero_pose
pred_output.body_pose
pred_rotmat[:,0]
pred_rotmat[:,0].size()
pred_rotmat[:,1:].size()
pred_rotmat[:,0].unsqueeze(1)
pred_rotmat[:,0].unsqueeze(1).size()
# Web_crawler
This is a web crawler written by Bill Chang to collect flight schedule data from a website.
The purpose is to gather data for a d3.js visualization project; it is intended for personal practice and education only.
## Read the HTML using BeautifulSoup
We use `urllib` to read the URL content.
Note that in Python 3 the **HTML** is read as bytes rather than a string, so we need to convert it with **decode**.
```
import urllib
from bs4 import BeautifulSoup
import urllib.request
def create_soup(url):
#url = input('Enter the url -') #enter the url
byte_arrary = urllib.request.urlopen(url)
content_bytes = byte_arrary.read()
html = content_bytes.decode("utf8")
byte_arrary.close()
return BeautifulSoup(html)
#use beautifulsoup to read and get the data we want
#print ('The title of page is: \n',soup.title.string)
```
## Extract the data
Looking at the page, there are two kinds of flight schedules: direct and non-direct.
Hence, we have to extract the data carefully for each case.
```
def week(schedule,day,index):
if schedule[index].find('i') == None:
return 'No'
else:
return 'Yes'
def element_get(data):
# crawl the data by tag selector
dep_time = data.select('span.schedules__departure-time')[0].contents[0]
dep_place = data.select('span.schedules__departure-code')[0].contents[0]
arr_time = data.select('span.schedules__arrival-time')[0].contents[0]
arr_place = data.select('span.schedules__arrival-code')[0].contents[0]
#fli_dur = data.select('div.schedules__item.schedules__duration')[0].contents[0].strip()
fli_code = data.select('span.schedules__airline-code')[0].contents[2].strip()
aircraft = data.select('span.schedules__aircraft-code')[0].contents[0]
    try:
        # alt flags the schedule type: 0 = non-direct (a transit airport is listed), 1 = direct
        alt = 0
        wait = data.select('div.schedules__item.schedules__transit-airport')[0].contents[0].strip()
    except IndexError:
        alt = 1
Mon = None
Tus = None
Wed = None
Thur = None
Fri = None
Sat = None
Sun = None
week_schedule = data.find('div',{'class':"schedules__item schedules__week"}).contents
Mon = week(week_schedule,Mon,1)
Tus = week(week_schedule,Tus,3)
Wed = week(week_schedule,Wed,5)
Thur = week(week_schedule,Thur,7)
Fri = week(week_schedule,Fri,9)
Sat = week(week_schedule,Sat,11)
Sun = week(week_schedule,Sun,13)
if alt == 1:
return [fli_code,dep_time,dep_place,arr_time,arr_place,aircraft,[Mon,Tus,Wed,Thur,Fri,Sat,Sun]]
if alt == 0:
return [fli_code,dep_time,dep_place,arr_time,arr_place,aircraft]
def title_get(data):
# function to get the title on the web
title_table = data.select('nav.breadcrumbs.breadcrumbs-schedules-city_to_city.breadcrumbs-schedules')
title_list = title_table[0].select('span[itemprop="title"]')
title_part1 = title_list[2].contents[0]
title_part2 = title_list[3].contents[0]
title_part3 = title_list[4].contents[0].split()[3]
title_part4 = title_list[5].contents[0].strip()
title = '('+title_part1+'-'+title_part2+') '+'到'+' ('+title_part3+'-'+title_part4+')'
return title
def data_dict(soup):
#flight table preprocessing
#title = [title_get(soup)]
table = soup.select('li.schedules__data-list.js-schedule-item')
#table2 = soup.select('li.schedules__data-list.js-schedule-item.schedule__non-direct.js-schedule-non-direct.hide')
#table3 = soup.select('li.schedules__data-list.js-schedule-item.js-schedule-non-direct.hide')
#table4 = soup.select('li.schedules__data-list.js-schedule-item.schedule__non-direct.js-schedule-non-direct-codeshare.hide')
list1 = []
#list2 = []
#list3 = []
#list4 = []
for i in table:
list1.append(element_get(i))
'''
for i in table2:
list2.append(element_get(i))
for i in table3:
list3.append(element_get(i))
for i in table4:
list4.append([title_get(soup)]+element_get(i))
'''
#final_table = []
final_dict = {}
# i not in list2 and i not in list3 and
'''
for i in list1:
if i not in list4:
final_table.append(i)
'''
for i in list1:
if i[0] not in final_dict:
final_dict[i[0]] = i
return final_dict
```
## Output the data to CSV
When writing the data, remember to pass **newline=''** to `open`, otherwise the writer inserts an extra blank line between rows.
Use `writerows` to write many rows in a single call.
To append to an existing file instead of overwriting it, open it with `'a'` instead of `'w'`, as sketched below.
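A minimal sketch of such an append helper (it mirrors `write_data` below and is not part of the original crawler; the function name is only an example):
```
import csv
def append_data(csv_name, input_dict):
    # mode 'a' appends to an existing CSV instead of overwriting it
    with open(csv_name, 'a', newline='') as myfile:
        wr = csv.writer(myfile, delimiter=',', quoting=csv.QUOTE_ALL)
        wr.writerows(input_dict.values())
```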
```
import csv
def write_data(csv_name,input_dict):
with open(csv_name,'w',newline='') as myfile:
wr = csv.writer(myfile,delimiter=',',quoting=csv.QUOTE_ALL)
wr.writerows(input_dict.values())
def test_read_data():
# check the data can be read correctly
new_list=[]
with open ('test.csv','r') as myfile:
rd = csv.reader(myfile,delimiter=',',quoting=csv.QUOTE_ALL)
for row in rd:
new_list.append(row)
return new_list
import pickle
def data_append_csv():
# get website data (the link got from first crawler)
with open('web_list.txt','rb') as f: # read the url list (get from crawler1)
link_list = pickle.load(f)
step = 1
total = len(link_list)
for link in link_list:
step += 1
percent = round((step/total),3)*100
print ('-',percent,'%-',sep='',end='')
my_soup = create_soup(link)
my_dict = data_dict(my_soup)
        write_data('test.csv', my_dict)  # file name assumed (it is read back below); use mode 'a' in write_data to truly append
#data_append_csv()
test_list = test_read_data()
test_dict = {}
for inner_list in test_list:
test_dict[inner_list[0]] = inner_list
for inner_list in test_list:
if len(inner_list) == 7:
test_dict[inner_list[0]] = inner_list
write_data('test2.csv',test_dict)
```
# Introduction to Reservoir Computing with ReservoirPy
**by Inria - Mnemosyne, Bordeaux, France.**
## Summary
- <a href="#concepts">Concepts and key features</a>
- <a href="#chapitre1">Chapter 1 : A simple task</a>
- <a href="#chapitre2">Chapter 2 : Generative models</a>
- <a href="#chapitre3">Chapter 3 : Online learning</a>
- <a href="#bonus">Go further : Understand reservoir hyperparameters</a>
## Concepts and key features <a id="concepts"></a>
ReservoirPy project is about:
- Numpy, Scipy, and only Numpy and Scipy
- Efficient execution (distributed computations, optimized learning rules)
- *Online* and *offline* learning rules
- Convenient tools and tutorials for hyperparameters optimization
- Docs: https://reservoirpy.readthedocs.io/en/latest/
- GitHub: https://github.com/reservoirpy/reservoirpy
## Generic information
- All vectors and data arrays are NumPy arrays
- Time is always represented as the first axis of arrays (see the example below).
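For instance, a multivariate timeseries with 100 timesteps and 3 variables is stored as a `(100, 3)` array (dummy data, for illustration only):
```
import numpy as np
series = np.random.uniform(size=(100, 3))  # axis 0 = time, axis 1 = variables
print(series.shape)
```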
```
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import reservoirpy as rpy
from reservoirpy import mat_gen
# just a little tweak to center the plots, nothing to worry about
from IPython.core.display import HTML
HTML("""
<style>
.img-center {
display: block;
margin-left: auto;
margin-right: auto;
}
.output_png {
display: table-cell;
text-align: center;
vertical-align: middle;
}
</style>
""")
rpy.set_seed(42)
def plot_results(y_pred, y_test, sample=500):
fig = plt.figure(figsize=(15, 7))
plt.subplot(211)
plt.plot(np.arange(sample), y_pred[:sample], lw=3, label="ESN prediction")
plt.plot(np.arange(sample), y_test[:sample], linestyle="--", lw=2, label="True value")
plt.plot(np.abs(y_test[:sample] - y_pred[:sample]), label="Absolute deviation")
plt.legend()
plt.show()
```
## Chapter 1: Reservoir Computing for chaotic timeseries forecasting <span id="chapitre1"/>
**Mackey-Glass timeseries**
The Mackey-Glass equation is a delay differential equation describing the temporal behaviour of some physiological signals, for example the relative quantity of mature blood cells over time.
The equation is defined as:
$$
\frac{dP(t)}{dt} = \frac{a P(t - \tau)}{1 + P(t - \tau)^n} - bP(t)
$$
where $a = 0.2$, $b = 0.1$, $n = 10$, and the time delay $\tau = 17$.
$\tau$ controls the chaotic behaviour of the equation: the higher it is, the more chaotic the timeseries becomes. $\tau=17$ already gives good chaotic results.
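As an aside, here is a minimal explicit-Euler sketch of this delay equation, for illustration only (the `mackey_glass` helper used below relies on a more careful integration scheme, and the constant initial history is an arbitrary assumption):
```
import numpy as np
a, b, n, tau, dt = 0.2, 0.1, 10, 17, 1.0
steps, history = 2510, int(tau / dt)
P = np.zeros(steps + history)
P[:history] = 1.2                                # arbitrary constant initial history
for t in range(history, steps + history - 1):
    P_tau = P[t - history]                       # delayed value P(t - tau)
    P[t + 1] = P[t] + dt * (a * P_tau / (1 + P_tau**n) - b * P[t])
X_sketch = P[history:].reshape(-1, 1)            # time on the first axis
```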
```
from reservoirpy.datasets import mackey_glass
from reservoirpy.observables import nrmse, rsquare
timesteps = 2510
tau = 17
X = mackey_glass(timesteps, tau=tau)
# rescale between -1 and 1
X = 2 * (X - X.min()) / (X.max() - X.min()) - 1
def plot_mackey_glass(X, sample, tau):
fig = plt.figure(figsize=(13, 5))
N = sample
ax = plt.subplot((121))
t = np.linspace(0, N, N)
for i in range(N-1):
ax.plot(t[i:i+2], X[i:i+2], color=plt.cm.magma(255*i//N), lw=1.0)
plt.title(f"Timeseries - {N} timesteps")
plt.xlabel("$t$")
plt.ylabel("$P(t)$")
ax2 = plt.subplot((122))
ax2.margins(0.05)
for i in range(N-1):
ax2.plot(X[i:i+2], X[i+tau:i+tau+2], color=plt.cm.magma(255*i//N), lw=1.0)
plt.title(f"Phase diagram: $P(t) = f(P(t-\\tau))$")
plt.xlabel("$P(t-\\tau)$")
plt.ylabel("$P(t)$")
plt.tight_layout()
plt.show()
plot_mackey_glass(X, 500, tau)
```
- Not completely unpredictable... (not random)
- ...but not easily predictable (not periodic)
- Similar to ECG rhythms, stocks, weather...
### 1.1. Task 1: 10 timesteps ahead forecast
Predict $P(t + 10)$ given $P(t)$.
#### Data preprocessing
```
def plot_train_test(X_train, y_train, X_test, y_test):
sample = 500
test_len = X_test.shape[0]
fig = plt.figure(figsize=(15, 5))
plt.plot(np.arange(0, 500), X_train[-sample:], label="Données d'entraînement")
plt.plot(np.arange(0, 500), y_train[-sample:], label="Objectif d'entraînement")
plt.plot(np.arange(500, 500+test_len), X_test, label="Données de test")
plt.plot(np.arange(500, 500+test_len), y_test, label="Objectif de test")
plt.legend()
plt.show()
from reservoirpy.datasets import to_forecasting
x, y = to_forecasting(X, forecast=10)
X_train1, y_train1 = x[:2000], y[:2000]
X_test1, y_test1 = x[2000:], y[2000:]
plot_train_test(X_train1, y_train1, X_test1, y_test1)
```
### Build your first Echo State Network
```
units = 100
leak_rate = 0.3
spectral_radius = 1.25
input_scaling = 1.0
connectivity = 0.1
input_connectivity = 0.2
regularization = 1e-8
seed = 1234
def reset_esn():
from reservoirpy.nodes import Reservoir, Ridge
reservoir = Reservoir(units, input_scaling=input_scaling, sr=spectral_radius,
lr=leak_rate, rc_connectivity=connectivity,
input_connectivity=input_connectivity, seed=seed)
readout = Ridge(1, ridge=regularization)
return reservoir >> readout
```
<style>
.img-center {
display: block;
margin-left: auto;
margin-right: auto;
width: 50%;
}
</style>
<div class="img-center">
<img src="./static/task1.png" width="600">
</div>
```
from reservoirpy.nodes import Reservoir, Ridge
reservoir = Reservoir(units, input_scaling=input_scaling, sr=spectral_radius,
lr=leak_rate, rc_connectivity=connectivity,
input_connectivity=input_connectivity, seed=seed)
readout = Ridge(1, ridge=regularization)
esn = reservoir >> readout
```
<div class="img-center">
<img src="./static/matrices.png" width="900">
</div>
```
y = esn(X[0]) # initialisation
reservoir.Win is not None, reservoir.W is not None, readout.Wout is not None
np.all(readout.Wout == 0.0)
```
#### Training the ESN
Learning is *offline*: it takes place only once, on the whole training dataset.
```
esn = esn.fit(X_train1, y_train1)
def plot_readout(readout):
Wout = readout.Wout
bias = readout.bias
Wout = np.r_[bias, Wout]
fig = plt.figure(figsize=(15, 5))
ax = fig.add_subplot(111)
ax.grid(axis="y")
ax.set_ylabel("Coefs. de $W_{out}$")
ax.set_xlabel("Neurones du reservoir")
ax.bar(np.arange(Wout.size), Wout.ravel()[::-1])
plt.show()
plot_readout(readout)
```
#### Testing the ESN
```
y_pred1 = esn.run(X_test1)
plot_results(y_pred1, y_test1)
```
Determination coefficient $R^2$ and normalized root mean squared error (NRMSE):
```
rsquare(y_test1, y_pred1), nrmse(y_test1, y_pred1)
```
### 1.2 Making the task harder
Let us move from a 10-timestep forecasting horizon to a 100-timestep horizon.
```
x, y = to_forecasting(X, forecast=100)
X_train2, y_train2 = x[:2000], y[:2000]
X_test2, y_test2 = x[2000:], y[2000:]
plot_train_test(X_train2, y_train2, X_test2, y_test2)
y_pred2 = esn.fit(X_train2, y_train2).run(X_test2)
plot_results(y_pred2, y_test2, sample=400)
```
Determination coefficient $R^2$ and NRMSE:
```
rsquare(y_test2, y_pred2), nrmse(y_test2, y_pred2)
```
## Chapter 2: Unleashing the generative abilities of ESNs <span id="chapitre2"/>
- Train the ESN on a 1-timestep-ahead forecasting task.
- Test the ESN **on its own predictions** (generative mode).
<div>
<img src="./static/generative.png" width="900">
</div>
```
units = 500
leak_rate = 0.3 # - leaking rate
spectral_radius = 0.99 # - spectral radius
input_scaling = 1.0 # - input scaling factor
connectivity = 0.1 # - density of reservoir-to-reservoir connections
input_connectivity = 0.2 # and of input-to-reservoir connections
regularization = 1e-4 # - (L2) regularization coefficient
seed = 1234 # reproducibility
def plot_generation(X_gen, X_t, nb_generations, warming_out=None, warming_inputs=None, seed_timesteps=0):
plt.figure(figsize=(15, 5))
if warming_out is not None:
plt.plot(np.vstack([warming_out, X_gen]), label="Série générée")
else:
plt.plot(X_gen, label="Série générée")
plt.plot(np.arange(nb_generations)+seed_timesteps, X_t, linestyle="--", label="Série réelle")
if warming_inputs is not None:
plt.plot(np.arange(seed_timesteps), warming_inputs, linestyle="--", label="Données d'échauffement")
plt.plot(np.arange(nb_generations)+seed_timesteps, np.abs(X_t - X_gen),
label="Erreur absolue")
if seed_timesteps > 0:
plt.fill_between([0, seed_timesteps], *plt.ylim(), facecolor='lightgray', alpha=0.5, label="Echauffement")
plt.plot([], [], ' ', label=f"$R^2 = {round(rsquare(X_t, X_gen), 4)}$")
plt.plot([], [], ' ', label=f"$NRMSE = {round(nrmse(X_t, X_gen), 4)}$")
plt.legend(
)
plt.show()
```
#### Training on a short forecasting horizon
```
esn = reset_esn()
x, y = to_forecasting(X, forecast=1)
X_train3, y_train3 = x[:2000], y[:2000]
X_test3, y_test3 = x[2000:], y[2000:]
esn = esn.fit(X_train3, y_train3)
```
#### Generation
- 100 timesteps of the test series are used as a "warm-up";
- 400 timesteps are then generated "from scratch".
```
seed_timesteps = 100
warming_inputs = X_test3[:seed_timesteps]
warming_out = esn.run(warming_inputs, reset=True) # warm-up
nb_generations = 400
X_gen = np.zeros((nb_generations, 1))
y = warming_out[-1]
for t in range(nb_generations): # generation loop
y = esn(y)
X_gen[t, :] = y
X_t = X_test3[seed_timesteps: nb_generations+seed_timesteps]
plot_generation(X_gen, X_t, nb_generations, warming_out=warming_out,
warming_inputs=warming_inputs, seed_timesteps=seed_timesteps)
```
## Chapter 3: Online learning <span id="chapitre3"/>
Here, learning happens *incrementally*, one timestep at a time.
We use the **FORCE** algorithm *(Sussillo and Abbott, 2009)*, whose core update rule is recalled below.
<div>
<img src="./static/online.png" width="700">
</div>
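For reference, in one common formulation the recursive least squares update at the heart of FORCE reads, with $r(t)$ the reservoir state, $f(t)$ the target and $P$ a running estimate of the inverse correlation matrix of the states:
$$
e(t) = W_{out}(t-1) \, r(t) - f(t)
$$
$$
P(t) = P(t-1) - \frac{P(t-1) \, r(t) \, r(t)^T \, P(t-1)}{1 + r(t)^T \, P(t-1) \, r(t)}
$$
$$
W_{out}(t) = W_{out}(t-1) - e(t) \left( P(t) \, r(t) \right)^T
$$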
```
units = 100
leak_rate = 0.3
spectral_radius = 1.25
input_scaling = 1.0
connectivity = 0.1
input_connectivity = 0.2
seed = 1234
from reservoirpy.nodes import FORCE
reservoir = Reservoir(units, input_scaling=input_scaling, sr=spectral_radius,
lr=leak_rate, rc_connectivity=connectivity,
input_connectivity=input_connectivity, seed=seed)
readout = FORCE(1)
esn_online = reservoir >> readout
```
#### Step-by-step training
```
outputs_pre = np.zeros(X_train1.shape)
for t, (x, y) in enumerate(zip(X_train1, y_train1)): # for each timestep of the series:
outputs_pre[t, :] = esn_online.train(x, y)
plot_results(outputs_pre, y_train1, sample=100)
plot_results(outputs_pre, y_train1, sample=500)
```
#### Training on a full sequence
```
reservoir = Reservoir(units, input_scaling=input_scaling, sr=spectral_radius,
lr=leak_rate, rc_connectivity=connectivity,
input_connectivity=input_connectivity, seed=seed)
readout = FORCE(1)
esn_online = reservoir >> readout
esn_online.train(X_train1, y_train1)
pred_online = esn_online.run(X_test1) # Wout is now fixed
plot_results(pred_online, y_test1, sample=500)
```
Determination coefficient $R^2$ and NRMSE:
```
rsquare(y_test1, pred_online), nrmse(y_test1, pred_online)
```
## Chapter 4: A real-world application: robot fall detection <span id="chapitre4"/>
<div>
<img src="./static/sigmaban.png" width="500">
</div>
#### Loading and preparing the data
```
import glob
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from joblib import delayed, Parallel
from tqdm import tqdm
features = ['com_x', 'com_y', 'com_z', 'trunk_pitch', 'trunk_roll', 'left_x', 'left_y',
'right_x', 'right_y', 'left_ankle_pitch', 'left_ankle_roll', 'left_hip_pitch',
'left_hip_roll', 'left_hip_yaw', 'left_knee', 'right_ankle_pitch',
'right_ankle_roll', 'right_hip_pitch', 'right_hip_roll',
'right_hip_yaw', 'right_knee']
prediction = ['fallen']
force = ['force_orientation', 'force_magnitude']
files = glob.glob("r4-data/experiments/*")
dfs = []
with Parallel(n_jobs=-1) as parallel:
dfs = parallel(delayed(pd.read_csv)(f, compression="gzip", header=0, sep=",") for f in tqdm(files))
X = []
Y = []
F = []
for i, df in enumerate(dfs):
X.append(df[features].values)
Y.append(df[prediction].values)
F.append(df["force_magnitude"].values)
Y_train = []
for y in Y:
y_shift = np.roll(y, -500)
y_shift[-500:] = y[-500:]
Y_train.append(y_shift)
def plot_robot(Y, Y_train, F):
plt.figure(figsize=(10, 7))
plt.plot(Y_train[1], label="Objectif")
plt.plot(Y[1], label="Indicateur de chute")
plt.plot(F[1], label="Force appliquée")
plt.legend()
plt.show()
plot_robot(Y, Y_train, F)
```
#### Training the ESN
```
X_train, X_test, y_train, y_test = train_test_split(X, Y_train, test_size=0.2, random_state=42)
from reservoirpy.nodes import ESN
reservoir = Reservoir(300, lr=0.5, sr=0.99, input_bias=False)
readout = Ridge(1, ridge=1e-3)
esn = ESN(reservoir=reservoir, readout=readout, workers=-1) # distributed/parallelized version
esn = esn.fit(X_train, y_train)
res = esn.run(X_test)
from reservoirpy.observables import rmse
scores = []
for y_t, y_p in zip(y_test, res):
score = rmse(y_t, y_p)
scores.append(score)
filt_scores = []
for y_t, y_p in zip(y_test, res):
y_f = y_p.copy()
y_f[y_f > 0.5] = 1.0
y_f[y_f <= 0.5] = 0.0
score = rmse(y_t, y_f)
filt_scores.append(score)
def plot_robot_results(y_test, y_pred):
for y_t, y_p in zip(y_test, y_pred):
if y_t.max() > 0.5:
            y_shift = np.roll(y_t, 500)  # recover the unshifted fall indicator from the test target
            y_shift[:500] = 0.0
plt.figure(figsize=(7, 5))
plt.plot(y_t, label="Objectif")
plt.plot(y_shift, label="Chute")
plt.plot(y_p, label="Prediction")
plt.legend()
plt.show()
break
plot_robot_results(y_test, res)
print("RMSE moyenne :", f"{np.mean(scores):.4f}", "±", f"{np.std(scores):.5f}")
print("RMSE moyenne (avec seuil) :", f"{np.mean(filt_scores):.4f}", "±", f"{np.std(filt_scores):.5f}")
```
## Chapter 5: A real-world application: decoding domestic canary song <span id="chapitre5"/>
The data can be downloaded from Zenodo:
https://zenodo.org/record/4736597
```
im = plt.imread("./static/canary.png")
plt.figure(figsize=(5, 5)); plt.imshow(im); plt.axis('off'); plt.show()
from IPython.display import Audio
audio = Audio(filename="./static/song.wav")
display(audio)
```
Several different repetitive temporal motifs have to be decoded: the *phrases*.
- One label per phrase type, to be classified over time.
- A *SIL* label marks silence; it must also be detected so that the song can be segmented (see the sketch below).
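As an illustration of that segmentation step (not part of the original notebook; `to_segments` is a hypothetical helper), frame-wise predicted labels can be grouped into `(label, start, end)` segments, dropping silence frames:
```
def to_segments(frame_labels, sil="SIL"):
    # Group consecutive identical frame labels into (label, start_frame, end_frame) segments.
    segments, start = [], 0
    for i in range(1, len(frame_labels) + 1):
        if i == len(frame_labels) or frame_labels[i] != frame_labels[start]:
            if frame_labels[start] != sil:
                segments.append((frame_labels[start], start, i))
            start = i
    return segments
# e.g. to_segments(["SIL", "A", "A", "B", "SIL"]) returns [("A", 1, 3), ("B", 3, 4)]
```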
```
im = plt.imread("./static/canary_outputs.png")
plt.figure(figsize=(15, 15)); plt.imshow(im); plt.axis('off'); plt.show()
```
#### Loading and preparing the data
```
import os
import glob
import math
import pandas as pd
import librosa as lbr
from tqdm import tqdm
from sklearn.utils.multiclass import unique_labels
from sklearn.preprocessing import OneHotEncoder
win_length = 1024
n_fft = 2048
hop_length = 512
fmin = 500
fmax = 8000
lifter = 40
n_mfcc = 13
def load_data(directory, max_songs=100):
audios = sorted(glob.glob(directory + "/**/*.wav", recursive=True))
annotations = sorted(glob.glob(directory + "/**/*.csv", recursive=True))
X = []
Y = []
vocab = set()
max_songs = min(len(audios), max_songs)
for audio, annotation, _ in tqdm(zip(audios, annotations, range(max_songs)), total=max_songs):
df = pd.read_csv(annotation)
wav, rate = lbr.load(audio, sr=None)
x = lbr.feature.mfcc(wav, sr=rate,
win_length=win_length, hop_length=hop_length,
n_fft=n_fft, fmin=fmin, fmax=fmax, lifter=lifter,
n_mfcc=n_mfcc)
delta = lbr.feature.delta(x, mode="wrap")
delta2 = lbr.feature.delta(x, order=2, mode="wrap")
X.append(np.vstack([x, delta, delta2]).T)
y = [["SIL"]] * x.shape[1]
for annot in df.itertuples():
start = max(0, round(annot.start * rate / hop_length))
end = min(x.shape[1], round(annot.end * rate / hop_length))
y[start:end] = [[annot.syll]] * (end - start)
vocab.add(annot.syll)
Y.append(y)
return X, Y, list(vocab)
X, Y, vocab = load_data("./canary-data")
```
#### One-hot encoding of the phrase labels
```
one_hot = OneHotEncoder(categories=[vocab], sparse=False)
Y = [one_hot.fit_transform(np.array(y)) for y in Y]
```
To keep things short at first, we will only use 100 songs. Training is done on the first 90 songs, testing on the last 10.
There are 459 songs in total, so it is possible to vary how the dataset is built, or even to assess the robustness of the results with cross-validation, as sketched below.
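A minimal sketch of such a cross-validation over songs (hypothetical, using scikit-learn's `KFold`; the ESN fitting and evaluation part is left as a stub):
```
from sklearn.model_selection import KFold
kf = KFold(n_splits=5, shuffle=True, random_state=42)
for train_idx, test_idx in kf.split(X):
    X_tr, Y_tr = [X[i] for i in train_idx], [Y[i] for i in train_idx]
    X_te, Y_te = [X[i] for i in test_idx], [Y[i] for i in test_idx]
    # ... build a fresh ESN, fit it on (X_tr, Y_tr) and evaluate it on (X_te, Y_te)
```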
```
X_train, y_train = X[:-10], Y[:-10]
X_test, y_test = X[-10:], Y[-10:]
```
#### Training the ESN
```
from reservoirpy.nodes import ESN
units = 1000
leak_rate = 0.05
spectral_radius = 0.5
inputs_scaling = 0.001
connectivity = 0.1
input_connectivity = 0.1
regularization = 1e-5
seed = 1234
Win = mat_gen.generate_input_weights(units, n_mfcc*3, input_scaling=inputs_scaling,
proba=input_connectivity, input_bias=True,
seed=seed)
reservoir = Reservoir(units, sr=spectral_radius, Win=Win,
lr=leak_rate, rc_connectivity=connectivity,
seed=seed)
readout = Ridge(len(vocab), ridge=regularization)
esn = ESN(reservoir=reservoir, readout=readout, workers=-1, backend="loky")
esn = esn.fit(X_train, y_train)
outputs = esn.run(X_test)
from sklearn.metrics import accuracy_score
scores = []
for y_t, y_p in zip(y_test, outputs):
targets = np.vstack(one_hot.inverse_transform(y_t)).flatten()
top_1 = np.argmax(y_p, axis=1)
top_1 = np.array([vocab[t] for t in top_1])
accuracy = accuracy_score(targets, top_1)
scores.append(accuracy)
scores # accuracy for each tested song
print("Précision moyenne :", f"{np.mean(scores):.4f}", "±", f"{np.std(scores):.5f}")
```
#### Going further
- Try varying the number of training songs: how many songs are needed, at minimum, to reproduce the results obtained with 100 songs?
- Find the best set of hyperparameters, using tools based on the *hyperopt* library (https://github.com/hyperopt/hyperopt); a bare-bones example is sketched after this list.
- Try applying a different *input scaling* to each group of variables of interest.
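A minimal, hypothetical sketch of the plain hyperopt API mentioned above (it does not use ReservoirPy's own hyperopt helpers; the search-space bounds and the objective body are placeholders):
```
import numpy as np
from hyperopt import fmin, tpe, hp
space = {
    "sr": hp.loguniform("sr", np.log(1e-2), np.log(10)),
    "lr": hp.loguniform("lr", np.log(1e-3), np.log(1)),
    "ridge": hp.loguniform("ridge", np.log(1e-8), np.log(1e-2)),
}
def objective(params):
    # ... build an ESN with `params`, train it on the training songs and
    # return a loss, e.g. 1.0 - mean accuracy on a validation split
    return 1.0  # placeholder
best = fmin(objective, space, algo=tpe.suggest, max_evals=50)
```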
## Thank you for your attention.
**Nathan Trouvain <br>
Inria - Mnemosyne**
<br>
<br>
<br>
R4 - November 9, 2021
## Going further: Understanding the hyperparameters and their effects <span id="bonus"/>
```
units = 100 # - number of neurons in the reservoir
leak_rate = 0.3 # - leaking rate
spectral_radius = 1.25 # - spectral radius
input_scaling = 1.0 # - input scaling factor
connectivity = 0.1 # - density of reservoir-to-reservoir connections
input_connectivity = 0.2 # and of input-to-reservoir connections
regularization = 1e-8 # - (L2) regularization coefficient
seed = 1234 # reproducibility
```
### 1. The spectral radius
The spectral radius is **the largest absolute eigenvalue of the reservoir weight matrix ($W$)**. A quick way to compute it is sketched below.
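A quick NumPy check of this definition on a dummy dense matrix (illustration only; ReservoirPy performs this rescaling itself when you pass `sr`):
```
import numpy as np
rng = np.random.default_rng(42)
W_demo = rng.uniform(-1, 1, size=(100, 100))
rho = np.max(np.abs(np.linalg.eigvals(W_demo)))  # spectral radius
W_rescaled = W_demo * (1.25 / rho)               # now has spectral radius 1.25
print(rho, np.max(np.abs(np.linalg.eigvals(W_rescaled))))
```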
```
states = []
radii = [0.1, 1.25, 10.0]
for sr in radii:
reservoir = Reservoir(units, sr=sr, input_scaling=0.1, lr=leak_rate, rc_connectivity=connectivity,
input_connectivity=input_connectivity)
s = reservoir.run(X_test1[:500])
states.append(s)
units_nb = 20
plt.figure(figsize=(15, 8))
for i, s in enumerate(states):
plt.subplot(len(radii)*100+10+i+1)
plt.plot(s[:, :units_nb], alpha=0.6)
plt.ylabel(f"$sr={radii[i]}$")
plt.xlabel(f"Activations ({units_nb} neurons)")
plt.show()
```
- $-$ spectral radius $\rightarrow$ **stable** dynamics
- $+$ spectral radius $\rightarrow$ **chaotic** dynamics
Spectral radius and the *Echo State Property*: a spectral radius $\rightarrow$ 1 (ensures that the internal states are not affected by their initialization).
### 2. The input scaling factor
This is a **coefficient applied to $W_{in}$** that changes the scale of the input data.
```
states = []
scalings = [0.01, 0.1, 1.]
for iss in scalings:
reservoir = Reservoir(units, sr=spectral_radius, input_scaling=iss, lr=leak_rate,
rc_connectivity=connectivity, input_connectivity=input_connectivity)
s = reservoir.run(X_test1[:500])
states.append(s)
def correlation(states, inputs):
return np.mean([np.correlate(states[:, i].flatten(), inputs.flatten()) for i in range(states.shape[1])])
units_nb = 20
plt.figure(figsize=(15, 8))
for i, s in enumerate(states):
plt.subplot(len(scalings)*100+10+i+1)
plt.plot(s[:, :units_nb], alpha=0.6)
plt.ylabel(f"$iss={scalings[i]}$")
plt.xlabel(f"Activations ({units_nb} neurons)")
plt.show()
```
Average correlation of the reservoir neurons' activity with the inputs:
```
for i, s in enumerate(states):
corr = correlation(states[i], X_test1[:500])
print(f"ISS : {scalings[i]}, correlation moyenne : {corr}")
```
- $+$ input scaling $\rightarrow$ activities **correlated with the input data**
- $-$ input scaling $\rightarrow$ **free-running** activities
The *input scaling* can also be used to adjust the influence of each input variable, as in the sketch below.
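A minimal, hypothetical sketch of per-variable scaling, applied directly to the data before feeding a fresh reservoir (the two-variable series is dummy data and the variable names are illustrative):
```
import numpy as np
U = np.random.uniform(-1, 1, size=(500, 2))  # dummy 2-variable input series
scales = np.array([1.0, 0.1])                # emphasise the first variable 10x more
U_scaled = U * scales                        # broadcast over the feature axis
reservoir_2d = Reservoir(units, sr=spectral_radius, lr=leak_rate,
                         rc_connectivity=connectivity,
                         input_connectivity=input_connectivity)
states_2d = reservoir_2d.run(U_scaled)
```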
### 3. The leaking rate
$$
x(t+1) = \underbrace{\color{red}{(1 - \alpha)} x(t)}_{\text{current state}} + \underbrace{\color{red}\alpha f(u(t+1), x(t))}_{\text{new input}}
$$
with $\alpha \in [0, 1]$ and:
$$ f(u, x) = \tanh(W_{in} \cdotp u + W \cdotp x) $$
```
states = []
rates = [0.02, 0.2, 0.9]
for lr in rates:
reservoir = Reservoir(units, sr=spectral_radius, input_scaling=input_scaling, lr=lr,
rc_connectivity=connectivity, input_connectivity=input_connectivity)
s = reservoir.run(X_test1[:500])
states.append(s)
units_nb = 20
plt.figure(figsize=(15, 8))
for i, s in enumerate(states):
plt.subplot(len(rates)*100+10+i+1)
plt.plot(s[:, :units_nb] + 2*i)
plt.ylabel(f"$lr={rates[i]}$")
plt.xlabel(f"States ({units_nb} neurons)")
plt.show()
```
- $+$ leaking rate $\rightarrow$ **low inertia**, little memory of previous states
- $-$ leaking rate $\rightarrow$ **high inertia**, strong memory of previous states
The *leaking rate* controls the "memory" of the ESN. It can be seen as the inverse of its time constant.
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow.contrib.keras as keras
from tensorflow.contrib.keras import backend as K
from tensorflow.contrib.keras.python.keras.models import Sequential
from tensorflow.contrib.keras.python.keras.layers import Dense, Dropout, Flatten
import time
from enum import Enum
import math
class Dataset(Enum):
AND_GRID = 0 #Basic AND function
BASIC_GRID = 1 #Basic grid 4x4 with 8xO and 8xX
BOOL_FUNC = 2 #Boolean function of 5 variables
POLYGON = 3 #Polygon shape dividing 2D grid to 2 classes
MULTI_CLASS = 4 #Function dividing 2D grid to 8 classes
ONE_DIM = 5 #One dimensional function
ONE_DIM_MEMORY = 6 #One dimensional function with memory
def get_num_classes(dataset):
n = 2
if dataset == Dataset.MULTI_CLASS:
n = 8
elif dataset in [Dataset.ONE_DIM, Dataset.ONE_DIM_MEMORY]:
n = 1
return n
print("TensorFlow version =", tf.__version__)
print("Keras backend =", keras.backend.backend())
print("Default float type =", keras.backend.floatx())
print("Image data structure =", keras.backend.image_data_format())
def load_train_data(dataset, no_points):
x_all = np.loadtxt('data/' + dataset.name + "_" + str(no_points) + '_xs.txt')
print("Loaded",x_all.shape,"examples from", 'data/' + dataset.name + "_" + str(no_points) + '_xs.txt')
y_all = np.loadtxt('data/' + dataset.name + "_" + str(no_points) + '_ys.txt')
print("Loaded",y_all.shape,"labels from", 'data/' + dataset.name + "_" + str(no_points) + '_ys.txt')
return x_all, y_all
def prepare_train_data(dataset, no_points=10000, train_ratio=0.7):
print("Preparing training data for dataset", dataset.name)
x_all, y_all = load_train_data(dataset, no_points)
assert(x_all.shape[0] == y_all.shape[0])
if x_all.ndim == 1:
x_all = x_all.reshape((x_all.shape[0],1))
z_all = np.append(x_all, y_all.reshape((y_all.shape[0],1)), axis=1)
#z_all = z_all.astype('float32')
np.random.seed(0)
np.random.shuffle(z_all)
train_size = math.floor(x_all.shape[0] * train_ratio)
test_size = x_all.shape[0] - train_size
num_classes = get_num_classes(dataset)
print("Number of classes =", num_classes)
x_train = z_all[0:train_size, 0:x_all.shape[1]]
y_train = z_all[0:train_size, -1]
x_test = z_all[train_size:, 0:x_all.shape[1]]
y_test = z_all[train_size:, -1]
if num_classes > 1:
print("Changing labels to one-hot encoding...")
print('y_train[0] before changing to one-hot-encoding: ', y_train[0])
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print('y_train[0] after changing to one-hot-encoding: ', y_train[0])
elif num_classes == 1:
print("Normalizing outputs of a real function to be approximated...")
y_max = z_all[:,-1].max()
print("Previous y_max =", y_max,"y_min =",z_all[:,-1].min())
z_all[:,-1] += y_max
z_all[:,-1] /= 2.0 * y_max
y_train = z_all[0:train_size, -1]
y_test = z_all[train_size:, -1]
print("After normalization y_max =", z_all[:,-1].max(),"y_min =",z_all[:,-1].min())
print("\nReturning:")
print("x_train: shape =", x_train.shape, "dtype =", x_train.dtype)
print("y_train: shape =", y_train.shape, "dtype =", y_train.dtype)
print("x_test: shape =", x_test.shape, "dtype =", x_test.dtype)
print("y_test: shape =", y_test.shape, "dtype =", y_test.dtype)
return x_train, y_train, x_test, y_test, num_classes
def test_model(model, x_test, y_test):
if len(x_test) > 0:
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
else:
print("Cannot test model: No test data supplied.")
def train_model(model, batch_size, epochs, x_train, y_train, x_test, y_test, plot_outputs=True, plot_epochs=500):
print("==== Training ====")
start_time = time.time()
y_predicts = []
if plot_outputs:
epochs_done = 0
while epochs_done < epochs:
epoch_start_time = time.time()
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=plot_epochs,
verbose=0
#,validation_data=(x_test, y_test) #This calculates validation on test set after each epoch = too slow
)
epochs_done += plot_epochs
y_pred = model.predict(x_train)
y_predicts.append(y_pred)
print("After",epochs_done,"epochs:")
print(plot_epochs,"epochs time =", time.time() - epoch_start_time)
test_model(model, x_test, y_test)
plot_output(x_train, y_pred, y_train)
else:
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=0)
y_pred = model.predict(x_train)
y_predicts.append(y_pred)
test_model(model, x_test, y_test)
elapsed_time = time.time() - start_time
print("Total time =", elapsed_time)
return y_predicts
def plot_output(x_train, y_predicted, y_train):
if x_train.shape[1] > 1:
plt.scatter(x_train.T[0],x_train.T[1],c=np.argmax(y_predicted, axis=1))
plt.title("ANN output for training data")
else:
z_train = np.append(x_train, y_train.reshape((y_train.shape[0],1)), axis=1)
z_pred = np.append(x_train, y_predicted.reshape((y_predicted.shape[0],1)), axis=1)
z_train = z_train[z_train[:,0].argsort()] #Sort acc to 1st column = x values
z_pred = z_pred[z_pred[:,0].argsort()]
plt.plot(z_train[:,0], z_train[:,1], 'b-', z_pred[:,0], z_pred[:,1], 'r--')
plt.title("ANN output (red) VS training data (blue)")
plt.show()
def plot_train_data(x_train, y_train):
if y_train.ndim >= 2:
plt.scatter(x_train.T[0],x_train.T[1],c=np.argmax(y_train, axis=1))
else:
z = np.append(x_train, y_train.reshape((y_train.shape[0],1)), axis=1)
z = z[z[:,0].argsort()]
plt.plot(z[:,0], z[:,1], 'bo')
plt.title("Training data")
plt.show()
def get_new_model(dataset, x_train, num_classes, SGD_learn_rate=0.1):
model = Sequential()
if dataset in [Dataset.AND_GRID, Dataset.BASIC_GRID, Dataset.POLYGON, Dataset.MULTI_CLASS]:
if dataset == Dataset.AND_GRID:
model.add(Dense(5, input_shape=x_train[0].shape, activation='sigmoid'))
elif dataset in [Dataset.BASIC_GRID, Dataset.POLYGON, Dataset.MULTI_CLASS]:
model.add(Dense(10, input_shape=x_train[0].shape, activation='sigmoid'))
model.add(Dense(5, activation='sigmoid'))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.SGD(lr=SGD_learn_rate),
metrics=['accuracy'])
elif dataset == Dataset.ONE_DIM:
model.add(Dense(10, input_shape=x_train[0].shape, activation='sigmoid'))
#model.add(Dense(5, activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss=keras.losses.mean_squared_error,
optimizer=keras.optimizers.SGD(lr=SGD_learn_rate),
metrics=['mean_squared_error'])
else:
model = None
print("Not defined yet.")
return model
dataset = Dataset.ONE_DIM
x_train, y_train, x_test, y_test, num_classes = prepare_train_data(dataset, no_points=900, train_ratio=1.0)
plot_train_data(x_train, y_train)
model = get_new_model(dataset, x_train, num_classes, SGD_learn_rate=1.0)
batch_size = 8
epochs = 4000
#y_predicted = train_model(model, batch_size, epochs, x_train[0:20], y_train[0:20], x_test[0:20], y_test[0:20])
y_predicts = train_model(model, batch_size, epochs, x_train, y_train, x_test, y_test, plot_epochs=1000)
dataset = Dataset.POLYGON
x_train, y_train, x_test, y_test, num_classes = prepare_train_data(dataset, no_points=900, train_ratio=1.0)
plot_train_data(x_train, y_train)
model = get_new_model(dataset, x_train, num_classes, SGD_learn_rate=0.1)
batch_size = 8
epochs = 1000
y_predicts = train_model(model, batch_size, epochs, x_train, y_train, x_test, y_test, plot_epochs=250)
```
|
github_jupyter
|
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow.contrib.keras as keras
from tensorflow.contrib.keras import backend as K
from tensorflow.contrib.keras.python.keras.models import Sequential
from tensorflow.contrib.keras.python.keras.layers import Dense, Dropout, Flatten
import time
from enum import Enum
import math
class Dataset(Enum):
AND_GRID = 0 #Basic AND function
BASIC_GRID = 1 #Basic grid 4x4 with 8xO and 8xX
BOOL_FUNC = 2 #Boolean function of 5 variables
POLYGON = 3 #Polygon shape dividing 2D grid to 2 classes
MULTI_CLASS = 4 #Function dividing 2D grid to 8 classes
ONE_DIM = 5 #One dimensional function
ONE_DIM_MEMORY = 6 #One dimensional function with memory
def get_num_classes(dataset):
n = 2
if dataset == Dataset.MULTI_CLASS:
n = 8
elif dataset in [Dataset.ONE_DIM, Dataset.ONE_DIM_MEMORY]:
n = 1
return n
print("TensorFlow version =", tf.__version__)
print("Keras backend =", keras.backend.backend())
print("Default float type =", keras.backend.floatx())
print("Image data structure =", keras.backend.image_data_format())
def load_train_data(dataset, no_points):
x_all = np.loadtxt('data/' + dataset.name + "_" + str(no_points) + '_xs.txt')
print("Loaded",x_all.shape,"examples from", 'data/' + dataset.name + "_" + str(no_points) + '_xs.txt')
y_all = np.loadtxt('data/' + dataset.name + "_" + str(no_points) + '_ys.txt')
print("Loaded",y_all.shape,"labels from", 'data/' + dataset.name + "_" + str(no_points) + '_ys.txt')
return x_all, y_all
def prepare_train_data(dataset, no_points=10000, train_ratio=0.7):
print("Preparing training data for dataset", dataset.name)
x_all, y_all = load_train_data(dataset, no_points)
assert(x_all.shape[0] == y_all.shape[0])
if x_all.ndim == 1:
x_all = x_all.reshape((x_all.shape[0],1))
z_all = np.append(x_all, y_all.reshape((y_all.shape[0],1)), axis=1)
#z_all = z_all.astype('float32')
np.random.seed(0)
np.random.shuffle(z_all)
train_size = math.floor(x_all.shape[0] * train_ratio)
test_size = x_all.shape[0] - train_size
num_classes = get_num_classes(dataset)
print("Number of classes =", num_classes)
x_train = z_all[0:train_size, 0:x_all.shape[1]]
y_train = z_all[0:train_size, -1]
x_test = z_all[train_size:, 0:x_all.shape[1]]
y_test = z_all[train_size:, -1]
if num_classes > 1:
print("Changing labels to one-hot encoding...")
print('y_train[0] before changing to one-hot-encoding: ', y_train[0])
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print('y_train[0] after changing to one-hot-encoding: ', y_train[0])
elif num_classes == 1:
print("Normalizing outputs of a real function to be approximated...")
y_max = z_all[:,-1].max()
print("Previous y_max =", y_max,"y_min =",z_all[:,-1].min())
z_all[:,-1] += y_max
z_all[:,-1] /= 2.0 * y_max
y_train = z_all[0:train_size, -1]
y_test = z_all[train_size:, -1]
print("After normalization y_max =", z_all[:,-1].max(),"y_min =",z_all[:,-1].min())
print("\nReturning:")
print("x_train: shape =", x_train.shape, "dtype =", x_train.dtype)
print("y_train: shape =", y_train.shape, "dtype =", y_train.dtype)
print("x_test: shape =", x_test.shape, "dtype =", x_test.dtype)
print("y_test: shape =", y_test.shape, "dtype =", y_test.dtype)
return x_train, y_train, x_test, y_test, num_classes
def test_model(model, x_test, y_test):
if len(x_test) > 0:
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
else:
print("Cannot test model: No test data supplied.")
def train_model(model, batch_size, epochs, x_train, y_train, x_test, y_test, plot_outputs=True, plot_epochs=500):
print("==== Training ====")
start_time = time.time()
y_predicts = []
if plot_outputs:
epochs_done = 0
while epochs_done < epochs:
epoch_start_time = time.time()
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=plot_epochs,
verbose=0
#,validation_data=(x_test, y_test) #This calculates validation on test set after each epoch = too slow
)
epochs_done += plot_epochs
y_pred = model.predict(x_train)
y_predicts.append(y_pred)
print("After",epochs_done,"epochs:")
print(plot_epochs,"epochs time =", time.time() - epoch_start_time)
test_model(model, x_test, y_test)
plot_output(x_train, y_pred, y_train)
else:
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=0)
y_pred = model.predict(x_train)
y_predicts.append(y_pred)
test_model(model, x_test, y_test)
elapsed_time = time.time() - start_time
print("Total time =", elapsed_time)
return y_predicts
def plot_output(x_train, y_predicted, y_train):
if x_train.shape[1] > 1:
plt.scatter(x_train.T[0],x_train.T[1],c=np.argmax(y_predicted, axis=1))
plt.title("ANN output for training data")
else:
z_train = np.append(x_train, y_train.reshape((y_train.shape[0],1)), axis=1)
z_pred = np.append(x_train, y_predicted.reshape((y_predicted.shape[0],1)), axis=1)
z_train = z_train[z_train[:,0].argsort()] #Sort acc to 1st column = x values
z_pred = z_pred[z_pred[:,0].argsort()]
plt.plot(z_train[:,0], z_train[:,1], 'b-', z_pred[:,0], z_pred[:,1], 'r--')
plt.title("ANN output (red) VS training data (blue)")
plt.show()
def plot_train_data(x_train, y_train):
if y_train.ndim >= 2:
plt.scatter(x_train.T[0],x_train.T[1],c=np.argmax(y_train, axis=1))
else:
z = np.append(x_train, y_train.reshape((y_train.shape[0],1)), axis=1)
z = z[z[:,0].argsort()]
plt.plot(z[:,0], z[:,1], 'bo')
plt.title("Training data")
plt.show()
def get_new_model(dataset, x_train, num_classes, SGD_learn_rate=0.1):
model = Sequential()
if dataset in [Dataset.AND_GRID, Dataset.BASIC_GRID, Dataset.POLYGON, Dataset.MULTI_CLASS]:
if dataset == Dataset.AND_GRID:
model.add(Dense(5, input_shape=x_train[0].shape, activation='sigmoid'))
elif dataset in [Dataset.BASIC_GRID, Dataset.POLYGON, Dataset.MULTI_CLASS]:
model.add(Dense(10, input_shape=x_train[0].shape, activation='sigmoid'))
model.add(Dense(5, activation='sigmoid'))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.SGD(lr=SGD_learn_rate),
metrics=['accuracy'])
elif dataset == Dataset.ONE_DIM:
model.add(Dense(10, input_shape=x_train[0].shape, activation='sigmoid'))
#model.add(Dense(5, activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss=keras.losses.mean_squared_error,
optimizer=keras.optimizers.SGD(lr=SGD_learn_rate),
metrics=['mean_squared_error'])
else:
model = None
print("Not defined yet.")
return model
dataset = Dataset.ONE_DIM
x_train, y_train, x_test, y_test, num_classes = prepare_train_data(dataset, no_points=900, train_ratio=1.0)
plot_train_data(x_train, y_train)
model = get_new_model(dataset, x_train, num_classes, SGD_learn_rate=1.0)
batch_size = 8
epochs = 4000
#y_predicted = train_model(model, batch_size, epochs, x_train[0:20], y_train[0:20], x_test[0:20], y_test[0:20])
y_predicts = train_model(model, batch_size, epochs, x_train, y_train, x_test, y_test, plot_epochs=1000)
dataset = Dataset.POLYGON
x_train, y_train, x_test, y_test, num_classes = prepare_train_data(dataset, no_points=900, train_ratio=1.0)
plot_train_data(x_train, y_train)
model = get_new_model(dataset, x_train, num_classes, SGD_learn_rate=0.1)
batch_size = 8
epochs = 1000
y_predicts = train_model(model, batch_size, epochs, x_train, y_train, x_test, y_test, plot_epochs=250)
<center><h1><strong>tau-data Indonesia</strong></h1></center>
<center><h2><strong><font color="blue">Exploratory Data Analysis-02: Data Visualizations</font></strong></h2></center>
<img alt="" src="images/Cover.jpg"/>
<b><center>(C) Taufik Sutanto</center></b>
<center><h3><font color="blue">https://tau-data.id/eda-02/ ~ taufik@tau-data.id </font></h3></center>
```
import warnings; warnings.simplefilter('ignore')
import pandas as pd, matplotlib.pyplot as plt, seaborn as sns, numpy as np
import matplotlib.cm as cm
from collections import Counter
plt.style.use('bmh'); sns.set()
# Importing CSV data https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html
try:
# Running Locally
price = pd.read_csv('data/price.csv')
except:
# Running in Google Colab
!mkdir data
!wget -P data/ https://raw.githubusercontent.com/taudata-indonesia/eLearning/master/data/price.csv
price = pd.read_csv('data/price.csv')
# From EDA-01 - the preprocessed data could also be loaded here
price.drop("Observation", axis=1, inplace=True)
price.drop_duplicates(inplace=True)
price['Parking'] = price['Parking'].astype('category')
price['City_Category'] = price['City_Category'].astype('category')
price2 = price[np.abs(price.House_Price - price.House_Price.mean())<=(2*price.House_Price.std())]
price2.info()
```
# Descriptive Statistics
```
# Simple statistics for the "numeric" columns
price2.describe(include='all').transpose()
```
## Is there a tendency for house prices to differ by parking type?
```
p= sns.catplot(x="Parking", y="House_Price", data=price2)
# What can we see from this result?
```
# Add a dimension to the visualization for clearer/better insight
```
# We can also plot information from 3 variables at once
# (to check for possible interaction effects)
p= sns.catplot(x="Parking", y="House_Price", hue="City_Category", kind="swarm", data=price2)
```
# What information does the result above reveal?
```
plt.figure(figsize=(8,6)) # https://matplotlib.org/api/_as_gen/matplotlib.pyplot.figure.html#matplotlib.pyplot.figure
p = sns.countplot(x="Parking", data=price2)
```
# Adding labels? ... Hhhmmm...
```
def groupedbarplot(df, width=0.8, annotate="values", ax=None, **kw):
ax = ax or plt.gca()
n = len(df.columns)
w = 1./n
pos = (np.linspace(w/2., 1-w/2., n)-0.5)*width
w *= width
bars = []
for col, x in zip(df.columns, pos):
bars.append(ax.bar(np.arange(len(df))+x, df[col].values, width=w, **kw))
# matplotlib.pyplot.bar(x, height, width=0.8, bottom=None, *, align='center', data=None, **kwargs)
for val, xi in zip(df[col].values, np.arange(len(df))+x):
if annotate:
txt = val if annotate == "values" else col
ax.annotate(txt, xy=(xi, val), xytext=(0,2),
textcoords="offset points",
ha="center", va="bottom")
ax.set_xticks(np.arange(len(df)))
ax.set_xticklabels(df.index)
return bars
counts = price2.groupby(["Parking", "City_Category"]).size().unstack()
plt.figure(figsize=(12,8))
groupedbarplot(counts)
plt.show()
price2.groupby(["Parking", "City_Category"]).size().unstack()
```
# Horizontal? Why?
```
ax = sns.countplot(y = 'Parking', hue = 'City_Category', palette = 'muted', data=price2)
tips=sns.load_dataset('tips')
categorical = tips.select_dtypes(include = ['category']).columns
fig, ax = plt.subplots(2, 2, figsize=(20, 10))
for variable, subplot in zip(categorical, ax.flatten()):
sns.countplot(tips[variable], ax=subplot)
```
# Stacked/Segmented Chart
```
CT = pd.crosstab(index=price2["City_Category"], columns=price2["Parking"])
p = CT.plot(kind="bar", figsize=(8,8), stacked=True)
# do this if we want to save the plot to a file
p.figure.savefig('barChart.png')
# a new file will appear in the notebook's folder.
```
# Mosaic Plot for multiple categorical data analysis
```
from statsmodels.graphics.mosaicplot import mosaic
p = mosaic(tips, ['sex','smoker','time'])
# PieChart
plot = price2.City_Category.value_counts().plot(kind='pie')
```
# Show Values?
```
data = price2['Parking']
proporsion = Counter(data)
values = [float(v) for v in proporsion.values()]
colors = ['r', 'g', 'b', 'y']
labels = proporsion.keys()
explode = (0.2, 0, 0, 0)
plt.pie(values, colors=colors, labels= values, explode=explode, shadow=True)
plt.title('Proporsi Tipe Parkir')
plt.legend(labels,loc='best')
plt.show()
# If there are outliers the plot becomes unclear (data = price, not price2)
p = sns.boxplot(x="House_Price", y="Parking", data=price)
# BoxPlots
p = sns.boxplot(x="House_Price", y="Parking", data=price2)
# What does the pattern revealed by this boxplot mean?
```
# Boxplots can also be split by a category
```
p = sns.catplot(x="Parking", y="House_Price", hue="City_Category", kind="box", data=price2)
```
* What (new) conjectures/interpretations can you draw from the box plot above?
* What are some weaknesses (pitfalls) of box plots?
# Swarm Plot & Violin Plot
```
p= sns.catplot(x="day", y="total_bill", hue="sex", kind="swarm", data=tips)
p = sns.violinplot(x="day", y="total_bill", data=tips,palette='rainbow')
col = 'House_Price'
plot = sns.displot(data=price2, x=col, kde=True)
plot = sns.displot(data=price2, x=col, hue='Parking', kind="kde")
numerical = price2.select_dtypes(include = ['int64','float64']).columns
price2[numerical].hist(figsize=(15, 6), layout=(2, 4));
p = sns.scatterplot(x=price2['House_Price'], y=price2['Dist_Market'], hue = price2['Parking'])
```
# Joint Plot
```
p = sns.jointplot(x=price2['House_Price'], y=price2['Rainfall'])
```
# Conditional Plot
```
cond_plot = sns.FacetGrid(data=price2, col='Parking', hue='City_Category')#, hue_order=["Yes", "No"]
p = cond_plot.map(sns.scatterplot, 'Dist_Hospital', 'House_Price').add_legend()
```
# Pairwise Plot
```
# Let's look at just a subset first, grouped by "Parking"
p = sns.pairplot(price2[['House_Price','Builtup','Dist_Hospital','Parking']], hue="Parking")
# Any interesting patterns?
```
# 3D Visualization: 3D Scatter Plot
https://pythonprogramming.net/matplotlib-3d-scatterplot-tutorial/
```
%matplotlib inline
fig = plt.figure(figsize=(12, 10))
ax = fig.add_subplot(111, projection='3d')
x = price2['House_Price']
y = price2['Dist_Hospital']
z = price2['Rainfall']
warna = cm.rainbow(np.linspace(0, 1, len(y)))
ax.scatter(x, y, z, s=50, c=warna, marker='o')
ax.set_xlabel('Harga')
ax.set_ylabel('Jarak ke RS')
ax.set_zlabel('Curah Hujan')
plt.show()
```
# 3D Visualization: 3D Bar Plots
Bar plots are used quite frequently in data visualisation projects since they’re able to convey information, usually some type of comparison, in a simple and intuitive way. The beauty of 3D bar plots is that they maintain the simplicity of 2D bar plots while extending their capacity to represent comparative information.
https://towardsdatascience.com/an-easy-introduction-to-3d-plotting-with-matplotlib-801561999725
```
import random
fig = plt.figure(figsize=(12, 10))
ax = plt.axes(projection="3d")
num_bars = 15
x_pos = random.sample(range(20), num_bars)
y_pos = random.sample(range(20), num_bars)
z_pos = [0] * num_bars
x_size = np.ones(num_bars)
y_size = np.ones(num_bars)
z_size = random.sample(range(20), num_bars)
ax.bar3d(x_pos, y_pos, z_pos, x_size, y_size, z_size, color='aqua')
plt.show()
```
# Checking Correlations
```
price2.corr()
# Heatmap to investigate correlations
corr2 = price2.corr() # we already examined the House_Price correlations
plt.figure(figsize=(12, 10))
sns.heatmap(corr2[(corr2 >= 0.5) | (corr2 <= -0.4)],
cmap='viridis', vmax=1.0, vmin=-1.0, linewidths=0.1,
annot=True, annot_kws={"size": 14}, square=True);
iris = sns.load_dataset("iris")
g = sns.pairplot(iris, hue="species")
pd.plotting.parallel_coordinates(iris, 'species', color=('r', 'g', 'b'))
plt.show()
```
# Time Series Plot
```
# Load an example dataset with long-form data
fmri = sns.load_dataset("fmri")
fmri.sample(10)
# Plot the responses for different events and regions
plot = sns.lineplot(x="timepoint", y="signal", data=fmri)
plot = sns.lineplot(x="timepoint", y="signal", hue="region", style="event", data=fmri)
```
# Spatial Visualization
```
def generateBaseMap(default_location=[-0.789275, 113.921], default_zoom_start=5):
base_map = folium.Map(location=default_location, control_scale=True, zoom_start=default_zoom_start)
return base_map
# Load Data
try:
    # Running locally; make sure the folium module is installed
df_loc = pd.read_csv('data/df_loc.csv')
except:
    # Running in Google Colab; make sure the "data" folder exists
!wget -P data/ https://raw.githubusercontent.com/taudata-indonesia/eLearning/master/data/df_loc.csv
df_loc = pd.read_csv('data/df_loc.csv')
!pip install folium
df_loc.head()
import folium
from folium.plugins import HeatMap
base_map = generateBaseMap()
HeatMap(data=df_loc[['lat', 'lon', 'count']].groupby(['lat', 'lon']).sum().reset_index().values.tolist(), radius=8, max_zoom=13).add_to(base_map)
base_map
```
# Case Study Exercise: Restaurant Tips Data
A dataset from a restaurant contains the following variables:
* total_bill: Total bill (cost of the meal), including tax, in US dollars
* tip: Tip (gratuity) in US dollars
* sex: Sex of person paying for the meal (0=male, 1=female)
* smoker: Smoker in party? (0=No, 1=Yes)
* day: 3=Thur, 4=Fri, 5=Sat, 6=Sun
* time: 0=Day, 1=Night
* size: Size of the party
https://www.kaggle.com/ranjeetjain3/seaborn-tips-dataset
```
# Load the sample data for the first case study above
tips = sns.load_dataset('tips') # load from the seaborn library's built-in datasets
# Data dimensions
N, P = tips.shape
print('baris = ', N, ', Kolom = ', P)
tips.head()
```
# Exercises:
## Try answering the following questions (a starter sketch follows the list below):
1. Are any of the variable types in this data inappropriate?
2. Do the numeric variables appear to be roughly normally distributed?
3. Are there outliers, noise, missing values, and/or duplicated rows?
4. Are male and female customers roughly balanced (proportional)?
5. Based on the data, do men or women tend to give larger tips?
6. Based on the data, do tips tend to be larger on particular days?
7. Do customers who smoke tend to give larger tips?
8. Are the patterns from questions 5 and 7 affected by the day?
9. What other patterns can you find? (For example, could you suggest a seating/table layout for the restaurant based on this data?)
10. Final question: based on your EDA, what recommendations would you give the restaurant owner?
* Which skills/competencies felt most necessary for this exercise?
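A minimal starter sketch for these exercises, using only the `tips` DataFrame loaded above (the specific checks are illustrative, not a full answer key):

```python
import seaborn as sns

tips = sns.load_dataset('tips')   # same dataset as above, reloaded so the sketch is self-contained

# Q1: variable types
print(tips.dtypes)

# Q2: rough normality check for the numeric columns
print(tips[['total_bill', 'tip', 'size']].skew())

# Q3: missing values and duplicated rows
print(tips.isnull().sum())
print('duplicated rows:', tips.duplicated().sum())

# Q4: gender balance
print(tips['sex'].value_counts(normalize=True))

# Q5-Q7: average tip by sex, day, and smoker status
print(tips.groupby('sex')['tip'].mean())
print(tips.groupby('day')['tip'].mean())
print(tips.groupby('smoker')['tip'].mean())

# Q8: interaction of sex/smoker with day
print(tips.pivot_table(values='tip', index='day', columns=['sex', 'smoker'], aggfunc='mean'))
```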
# End of Module
<hr>
# 1 - Beginning Workflows
In this lesson, we'll cover the basics of using atomate to run calculations. This will be a hands-on lesson where we dive into running a full workflow and break it down into components to understand how the various moving parts let us scale from 1 calculation to tens of thousands.
```
import mp_workshop.atomate
```
# Building a workflow
To begin, we'll grab a structure from Materials Project using pymatgen and the MPRester interface we learned about in a previous course.
```
from pymatgen import MPRester
mpr = MPRester()
struc = mpr.get_structure_by_material_id("mp-27")
print(struc)
```
Now, let's construct a workflow using atomate to optimize this structure in DFT
```
from atomate.vasp.workflows.presets.core import wf_structure_optimization
wf = wf_structure_optimization(struc,{"DB_FILE": None})
print(wf)
```
Get some more information on the workflow
```
wf.as_dict()
```
# Running with Fake VASP to simulate a DFT calculation
Due to a combination of licensing issues and not being able to run VASP quickly on the Jupyter server, we're going to simulate VASP with a magic function. You will later learn about powerups, which let you modify a workflow. For this exercise we're going to use a powerup that replaces the normal VASP-running functionality with something that just copies output files we've prepared for you.
```
from atomate.vasp.powerups import use_fake_vasp
```
## Let's do some work to get the path to the fake VASP files
```
from mp_workshop.atomate import si_struct_opt_path
print(si_struct_opt_path)
wf = use_fake_vasp(wf, ref_dirs={"Si-structure optimization": si_struct_opt_path})
wf.as_dict()
```
## Now we have to get ourselves a LaunchPad so that we can submit this workflow to our database
Atomate uses Fireworks as its workflow engine. Fireworks hides the database behind an object called a LaunchPad. This allows you to submit and query workflows from anywhere you have database access. We need to get ourselves a LaunchPad object so we can submit our workflow.
```
from fireworks.core.launchpad import LaunchPad
lp = LaunchPad.auto_load()
```
Just this one time, we have to initialize the database. In everyday use you would only ever do this once; in this lesson we'll reset it a few times:
```
lp.reset(None,require_password=False)
```
We can use the LaunchPad to add a workflow to our database:
```
lp.add_wf(wf)
```
# Monitoring Workflows
Fireworks lets you monitor the status of workflows and fireworks using both Python and the command line. Let's start off by looking at the status of our workflow. For each bit of Python code, I'll include a cell with the equivalent command-line command using Jupyter's '!' functionality. In practice, we use the command-line tools quite a bit, and they are emphasized in this notebook.
**Command Line Access in Jupyter**: Jupyter lets you run command-line commands by prefacing them with an exclamation mark:
```
# Lets get workflows
def get_wflows():
for wf_id in lp.get_wf_ids():
for key,value in lp.get_wf_summary_dict(wf_id).items():
print(key, ": ",value)
print("\n")
get_wflows()
```
This is how you get workflow information on the command line
```
!lpad get_wflows
def get_fws():
for fw_id in lp.get_fw_ids():
fw = lp.get_fw_dict_by_id(fw_id)
for prop in ["fw_id","updated_on","state","name"]:
print(prop, ": ",fw[prop])
print("\n")
get_fws()
```
This command-line call gets you the same information:
```
!lpad get_fws
!lpad --help
# Let's look at what this command can do:
!lpad get_fws --help
```
# Now let's run this workflow
There are a few different ways to run a workflow. The first is to just run it within this notebook directly.
```
from fireworks.core.rocket_launcher import launch_rocket
# Lets move into a temporary working directory
import os
os.mkdir("temp")
os.chdir("temp")
launch_rocket(lp)
```
Now, let's see how that changed our fireworks:
```
!lpad get_fws
```
That ran a single firework in the notebook. What if I wanted to run multiple fireworks? First let's mark the old firework for a rerun and add some more workflows to our database.
```
# We can do the same thing using the command line:
!lpad rerun_fws
!lpad get_fws
# Let's add the workflow a few more times to have multiple fireworks in database
lp.add_wf(wf)
lp.add_wf(wf)
```
We can run all of the available fireworks using two lines of Python and a single command:
```
from fireworks.core.rocket_launcher import rapidfire
rapidfire(lp)
```
This let us run fireworks until there were none left to run. But we're still running fireworks in our Jupyter notebook. If I want to run this on another machine, I need to do something else. Normally, we would want to submit these jobs to our supercomputing queue and let it run them as resources become available.
### Using the queue launcher
Setting up the queue launcher unfortunately takes some work. There are configuration files that tell atomate how to submit jobs, where the database is, and what special parameters to use on this supercomputer.
This has all been set up for you in this workshop. Once it is set up, using the queue is as simple as launching the fireworks to it.
Let's start off by clearing the database of fireworks:
```
lp.reset(None,require_password=False)
from atomate.vasp.workflows.presets.core import wf_bandstructure
wf = wf_bandstructure(struc,{"DB_FILE": None})
wf.as_dict()
from mp_workshop.atomate import si_static_path,si_nscf_line_path,si_nscf_uniform_path
wf = use_fake_vasp(wf,{"Si-structure optimization":si_struct_opt_path,
"Si-static": si_static_path,
"Si-nscf uniform" : si_nscf_uniform_path,
"Si-nscf line": si_nscf_line_path})
lp.add_wf(wf)
!lpad get_fws
```
Fireworks has a command-line method to submit jobs to the SLURM queue:
```
!qlaunch -r rapidfire --nlaunches 1
```
Now the supercomputer will take care of running the jobs, and eventually we can check that they are progressing:
```
!lpad get_fws
```
Now let's have qlaunch submit fireworks until all are done (a Python polling alternative is sketched after the next cell).
```
!qlaunch -r rapidfire
!lpad get_fws
```
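Re-running `lpad get_fws` by hand works, but the same check can be automated from Python with the LaunchPad queries we used earlier in this notebook. This is just a sketch: `wait_for_workflows` is a helper name introduced here, the poll/timeout values are arbitrary, and the state names are the ones `lpad get_fws` reports.

```python
import time

def wait_for_workflows(lp, poll_seconds=30, timeout=3600):
    """Poll the LaunchPad until every firework reaches a terminal state."""
    terminal = {"COMPLETED", "FIZZLED", "DEFUSED", "ARCHIVED"}
    waited = 0
    while waited <= timeout:
        states = [lp.get_fw_dict_by_id(fw_id)["state"] for fw_id in lp.get_fw_ids()]
        print("Firework states:", states)
        if all(state in terminal for state in states):
            return states
        time.sleep(poll_seconds)
        waited += poll_seconds
    raise TimeoutError("Fireworks did not finish within the timeout.")

# wait_for_workflows(lp)
```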
### Now, we have a completed workflow
# Constructing Agents
## An agent is a thing inside your environment
The initial **Thing** class is really just a shell. We'll want to define whether the thing is *alive*, how to display the thing's state, and perhaps how to display a picture of the thing (for example, if the thing were a function we could plot it). The **Agent** class is a subclass that performs actions based on what it perceives in the environment. To keep things general, the agent takes a user-defined FUNCTION (its *program*) that turns percepts into actions.
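A minimal sketch of that idea (illustrative only, not the library's exact classes; the lambda program and the percept layout below are assumptions):

```python
class Thing:
    """A bare-bones object that can live in an environment."""
    def is_alive(self):
        return hasattr(self, 'alive') and self.alive

class Agent(Thing):
    """A Thing whose behavior is driven by a program mapping percepts to actions."""
    def __init__(self, program):
        self.alive = True
        self.performance = 0
        self.program = program   # user-defined function: percept -> action

# Example: an agent that sucks when it sees dirt, otherwise moves right.
reflex = Agent(lambda percept: 'Suck' if percept[1] == 'Dirty' else 'Right')
print(reflex.program(('loc_A', 'Dirty')))   # -> 'Suck'
```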
## The Environment class
This is a shell for all environments. It owns **things** and **agents**. It specifies the following (a sketch of the core step loop follows this list):
* The thing classes it can hold. These things can be just things (like dirt) or agents (like vacuums that can do stuff)
* What it can perceive (percept classes -- like what sensors are on my robot?)
* What it can do. Like if a vacuum sucks up dirt, then it can change the amount of dirt in its environment.
* Specify a default location for new things, like where more dirt might go.
* Specify changes we won't allow ("exogenous_change")
* Tell us if all of the agents are dead
* Perform one time step in our environmental "game" definition.
* Each agent gets to perceive its state
* Each agent gets to perform an action
* Perform a bunch of steps
* List all the things at a location
* List some things at a location?
* Add a thing at a location (or default location)
* Delete a specified thing
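The heart of that list is the per-step loop: every agent perceives, its program chooses an action, and the environment executes it. A minimal sketch of that loop, assuming `percept` and `execute_action` are overridden by concrete environments (an illustration, not the library's exact code):

```python
class Environment:
    def __init__(self):
        self.things = []
        self.agents = []

    def percept(self, agent):
        raise NotImplementedError    # concrete environments define what agents sense

    def execute_action(self, agent, action):
        raise NotImplementedError    # concrete environments define what actions do

    def is_done(self):
        """All of the agents are dead (or there are none)."""
        return not any(agent.is_alive() for agent in self.agents)

    def step(self):
        """One time step: each agent perceives, then each agent acts."""
        if not self.is_done():
            actions = [agent.program(self.percept(agent)) for agent in self.agents]
            for agent, action in zip(self.agents, actions):
                self.execute_action(agent, action)

    def run(self, steps=1000):
        """Perform a bunch of steps, stopping early if everyone is dead."""
        for _ in range(steps):
            if self.is_done():
                return
            self.step()
```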
## The Direction Class
* Specify a heading (**R**ight, **L**eft, **U**p, **D**own)
* Move Forward (see the sketch below)
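A sketch of what such a class might look like on a grid where y grows downward (the turning and offset conventions here are assumptions, not the library's exact code):

```python
class Direction:
    R, L, U, D = 'right', 'left', 'up', 'down'

    def __init__(self, heading):
        self.heading = heading

    def turn(self, side):
        """Return the new Direction after turning left or right."""
        order = [self.U, self.R, self.D, self.L]   # clockwise
        step = 1 if side == self.R else -1
        return Direction(order[(order.index(self.heading) + step) % 4])

    def move_forward(self, location):
        """Return the square one step ahead of an (x, y) location."""
        x, y = location
        dx, dy = {self.R: (1, 0), self.L: (-1, 0), self.U: (0, -1), self.D: (0, 1)}[self.heading]
        return (x + dx, y + dy)

d = Direction(Direction.U)
print(d.turn(Direction.R).heading)   # -> 'right'
print(d.move_forward((2, 2)))        # -> (2, 1)
```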
## Environments on a plane XYEnvironment(Environment)
* Rectangle with width and height
* also initialize a list of observers -- this might be a list provided to the GUI that tells us when things change
* Things near a location (based on perceptible_distance = 1 or specified radius)
* Return what things I can see
* Execute an action
* bump against an edge
* turn left or right
* move forward
* grab a thing
* release a thing
* Add observers who get to find out what's happened
* Move to where I say to move (sketched after this list):
* If there's an *Obstacle* at my destination I bump. Obstacles are their own trivial class that can be extended into more
complicated obstacles that are sets of coordinates.
* Otherwise tell all observers the thing moved and remove the thing from the old destination and put it in the new one
* Return True/False for whether or not I moved the thing
* Add a thing to a location. Say what to do if there's a thing there.
* Check to see if the location some jerk specified is actually in my rectangle.
* Randomly choose a location in my rectangle, and maybe I'll list some patches that aren't allowed.
* Delete a thing from the environment. If that thing is an agent drop everything it's holding.
* Add walls so the vacuum doesn't fall down the stairs. A *Wall* is its own trivial class.
* Describe the new heading after a turn happens
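Continuing the sketch style from above, the "move to where I say to move" behavior might look like this (the `Obstacle` marker class, the `location` attribute, and the `thing_moved` observer hook are all assumptions):

```python
class Obstacle(Thing):
    """Trivial marker class: anything that blocks movement."""
    pass

def move_to(env, thing, destination):
    """Try to move `thing`; bump on obstacles, notify observers, report success."""
    thing.bump = any(isinstance(t, Obstacle) and getattr(t, 'location', None) == destination
                     for t in env.things)
    if thing.bump:
        return False                      # blocked: nothing moved
    for observer in getattr(env, 'observers', []):
        observer.thing_moved(thing)       # assumed observer callback name
    thing.location = destination
    return True
```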
## GraphicEnvironment(XYEnvironment)
Handles the GUI
# Finally let's make a vacuum environment
<p align="center">
<img src="images/vacuum.svg">
</p>
* Initialize Dirt as a Trivial thing
* Extend the XYEnvironment to **VacuumEnvironment**:
* The things are Wall, Dirt, and four agents (reflex, random, table-driven, and model-based) for vacuum behavior. We'll get to those.
* The environment knows if an agent (a.k.a. a vacuum) is standing in dirt and if it will bump into something if it moves forward.
* An agent can execute an action:
* If the action is Suck, the agent gets 100 points (**performance**) and the dirt is deleted; otherwise the performance is -1 and the action is executed according to the agent's logic.
The only fully defined environment is the trivial two-location world:
### The ReflexVacuumAgent
This is actually only for the trivial world with two locations.
```python
class TrivialVacuumEnvironment(Environment):
"""This environment has two locations, A and B. Each can be Dirty
or Clean. The agent perceives its location and the location's
status. This serves as an example of how to implement a simple
Environment."""
def __init__(self):
super().__init__()
self.status = {loc_A: random.choice(['Clean', 'Dirty']),
loc_B: random.choice(['Clean', 'Dirty'])}
def thing_classes(self):
return [Wall, Dirt, ReflexVacuumAgent, RandomVacuumAgent, TableDrivenVacuumAgent, ModelBasedVacuumAgent]
def percept(self, agent):
"""Returns the agent's location, and the location status (Dirty/Clean)."""
return agent.location, self.status[agent.location]
def execute_action(self, agent, action):
"""Change agent's location and/or location's status; track performance.
Score 10 for each dirt cleaned; -1 for each move."""
if action == 'Right':
agent.location = loc_B
agent.performance -= 1
elif action == 'Left':
agent.location = loc_A
agent.performance -= 1
elif action == 'Suck':
if self.status[agent.location] == 'Dirty':
agent.performance += 10
self.status[agent.location] = 'Clean'
def default_location(self, thing):
"""Agents start in either location at random."""
return random.choice([loc_A, loc_B])
```
<p align="center">
<img src="images/simple_reflex_agent.jpg">
</p>
Perceive the location and status
* If the status is dirty, suck
* Otherwise move to the other location
But more generally it is
```python
def SimpleReflexAgentProgram(rules, interpret_input):
"""
[Figure 2.10]
This agent takes action based solely on the percept.
"""
def program(percept):
state = interpret_input(percept)
rule = rule_match(state, rules)
action = rule.action
return action
return program
```
So for the XY vacuum world we might say (see the sketch below):
* If the location is dirty, suck (performance +100)
* Otherwise, if there was no bump, move forward (performance -1)
* Otherwise, turn the heading left or right -- probably at random (performance -1)

state = (location, heading, status)

Note that the examples don't include a heading as part of the state, but the environment would need to track it to know what happens when we say to turn right or left. (We don't need the heading to choose an action, though; we only need to change it when there's a bump.)
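A minimal sketch of those rules as an agent program (a hedged illustration, not the library's `ReflexVacuumAgent`; the `(status, bump)` percept format is an assumption):

```python
import random

def xy_reflex_vacuum_program(percept):
    """percept is assumed to be a (status, bump) pair supplied by the environment."""
    status, bump = percept
    if status == 'Dirty':
        return 'Suck'                                    # +100 performance
    if not bump:
        return 'Forward'                                 # -1 performance
    return random.choice(['TurnLeft', 'TurnRight'])      # -1 performance

print(xy_reflex_vacuum_program(('Dirty', False)))   # -> 'Suck'
print(xy_reflex_vacuum_program(('Clean', True)))    # -> 'TurnLeft' or 'TurnRight'
```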
```
from agents import ReflexVacuumAgent, TrivialVacuumEnvironment
agent = ReflexVacuumAgent()
environment = TrivialVacuumEnvironment()
environment.add_thing(agent)
environment.status
environment.run()
environment.status
from agents import Agent
```
The TraceAgent function reports on the agent's perception-action behavior.
```
from agents import TraceAgent
```
# Install Libraries
```
!pip install Fortuna #randomness
!pip install names
!pip install MonkeyScope
```
# Import Libraries
```
import pandas as pd
from Fortuna import random_int, percent_true, FlexCat, RelativeWeightedChoice
from MonkeyScope import distribution
```
# Mock Dictionary To Take Items From
```
# Took names from other notebooks, thanks January/February cohort
mock_dict = {
"male_first_names": (
"Liam", "Noah", "Oliver", "Elijah", "William", "James", "Benjamin", "Lucas",
"Henry", "Alexander", "Mason", "Michael", "Ethan", "Daniel", "Jacob",
"Logan", "Jackson", "Levi", "Sebastian", "Mateo", "Jack", "Owen",
"Theodore", "Aiden", "Samuel", "Joseph", "John", "David", "Wyatt",
"Matthew", "Luke", "Asher", "Carter", "Julian", "Grayson", "Leo", "Jayden",
"Gabriel", "Isaac", "Lincoln", "Anthony", "Hudson", "Dylan", "Ezra",
"Thomas", "Charles", "Christopher", "Jaxon", "Maverick", "Josiah", "Isaiah",
"Andrew", "Elias", "Joshua", "Nathan", "Caleb", "Ryan", "Adrian", "Miles",
"Eli", "Nolan", "Christian", "Aaron", "Cameron", "Ezekiel", "Colton",
"Luca", "Landon", "Hunter", "Jonathan", "Santiago", "Axel", "Easton",
"Cooper", "Jeremiah", "Angel", "Roman", "Connor", "Jameson", "Robert",
"Greyson", "Jordan", "Ian", "Carson", "Jaxson", "Leonardo", "Nicholas",
"Dominic", "Austin", "Everett", "Brooks", "Xavier", "Kai", "Jose", "Parker",
"Adam", "Jace", "Wesley", "Kayden", "Silas", "Bennett", "Declan", "Waylon",
"Weston", "Evan", "Emmett", "Micah", "Ryder", "Beau", "Damian", "Brayden",
"Gael", "Rowan", "Harrison", "Bryson", "Sawyer", "Amir", "Kingston",
"Jason", "Giovanni", "Vincent", "Ayden", "Chase", "Myles", "Diego",
"Nathaniel", "Legend", "Jonah", "River", "Tyler", "Cole", "Braxton",
"George", "Milo", "Zachary", "Ashton", "Luis", "Jasper", "Kaiden", "Adriel",
"Gavin", "Bentley", "Calvin", "Zion", "Juan", "Maxwell", "Max", "Ryker",
"Carlos", "Emmanuel", "Jayce", "Lorenzo", "Ivan", "Jude", "August", "Kevin",
"Malachi", "Elliott", "Rhett", "Archer", "Karter", "Arthur", "Luka",
"Elliot", "Thiago", "Brandon", "Camden", "Justin", "Jesus", "Maddox",
"King", "Theo", "Enzo", "Matteo", "Emiliano", "Dean", "Hayden", "Finn",
"Brody", "Antonio", "Abel", "Alex", "Tristan", "Graham", "Zayden", "Judah",
"Xander", "Miguel", "Atlas", "Messiah", "Barrett", "Tucker", "Timothy",
"Alan", "Edward", "Leon", "Dawson", "Eric", "Ace", "Victor", "Abraham",
"Nicolas", "Jesse", "Charlie", "Patrick", "Walker", "Joel", "Richard",
"Beckett", "Blake", "Alejandro", "Avery", "Grant", "Peter", "Oscar",
"Matias", "Amari", "Lukas", "Andres", "Arlo", "Colt", "Adonis", "Kyrie",
"Steven", "Felix", "Preston", "Marcus", "Holden", "Emilio", "Remington",
"Jeremy", "Kaleb", "Brantley", "Bryce", "Mark", "Knox", "Israel", "Phoenix",
"Kobe", "Nash", "Griffin", "Caden", "Kenneth", "Kyler", "Hayes", "Jax",
"Rafael", "Beckham", "Javier", "Maximus", "Simon", "Paul", "Omar", "Kaden",
"Kash", "Lane", "Bryan", "Riley", "Zane", "Louis", "Aidan", "Paxton",
"Maximiliano", "Karson", "Cash", "Cayden", "Emerson", "Tobias", "Ronan",
"Brian", "Dallas", "Bradley", "Jorge", "Walter", "Josue", "Khalil",
"Damien", "Jett", "Kairo", "Zander", "Andre", "Cohen", "Crew", "Hendrix",
"Colin", "Chance", "Malakai", "Clayton", "Daxton", "Malcolm", "Lennox",
"Martin", "Jaden", "Kayson", "Bodhi", "Francisco", "Cody", "Erick",
"Kameron", "Atticus", "Dante", "Jensen", "Cruz", "Finley", "Brady"
),
"female_first_names": (
"Olivia", "Emma", "Ava", "Charlotte", "Sophia", "Amelia", "Isabella", "Mia",
"Evelyn", "Harper", "Camila", "Gianna", "Abigail", "Luna", "Ella",
"Elizabeth", "Sofia", "Emily", "Avery", "Mila", "Scarlett", "Eleanor",
"Madison", "Layla", "Penelope", "Aria", "Chloe", "Grace", "Ellie", "Nora",
"Hazel", "Zoey", "Riley", "Victoria", "Lily", "Aurora", "Violet", "Nova",
"Hannah", "Emilia", "Zoe", "Stella", "Everly", "Isla", "Leah", "Lillian",
"Addison", "Willow", "Lucy", "Paisley", "Natalie", "Naomi", "Eliana",
"Brooklyn", "Elena", "Aubrey", "Claire", "Ivy", "Kinsley", "Audrey", "Maya",
"Genesis", "Skylar", "Bella", "Aaliyah", "Madelyn", "Savannah", "Anna",
"Delilah", "Serenity", "Caroline", "Kennedy", "Valentina", "Ruby", "Sophie",
"Alice", "Gabriella", "Sadie", "Ariana", "Allison", "Hailey", "Autumn",
"Nevaeh", "Natalia", "Quinn", "Josephine", "Sarah", "Cora", "Emery",
"Samantha", "Piper", "Leilani", "Eva", "Everleigh", "Madeline", "Lydia",
"Jade", "Peyton", "Brielle", "Adeline", "Vivian", "Rylee", "Clara",
"Raelynn", "Melanie", "Melody", "Julia", "Athena", "Maria", "Liliana",
"Hadley", "Arya", "Rose", "Reagan", "Eliza", "Adalynn", "Kaylee", "Lyla",
"Mackenzie", "Alaia", "Isabelle", "Charlie", "Arianna", "Mary", "Remi",
"Margaret", "Iris", "Parker", "Ximena", "Eden", "Ayla", "Kylie", "Elliana",
"Josie", "Katherine", "Faith", "Alexandra", "Eloise", "Adalyn", "Amaya",
"Jasmine", "Amara", "Daisy", "Reese", "Valerie", "Brianna", "Cecilia",
"Andrea", "Summer", "Valeria", "Norah", "Ariella", "Esther", "Ashley",
"Emerson", "Aubree", "Isabel", "Anastasia", "Ryleigh", "Khloe", "Taylor",
"Londyn", "Lucia", "Emersyn", "Callie", "Sienna", "Blakely", "Kehlani",
"Genevieve", "Alina", "Bailey", "Juniper", "Maeve", "Molly", "Harmony",
"Georgia", "Magnolia", "Catalina", "Freya", "Juliette", "Sloane", "June",
"Sara", "Ada", "Kimberly", "River", "Ember", "Juliana", "Aliyah", "Millie",
"Brynlee", "Teagan", "Morgan", "Jordyn", "London", "Alaina", "Olive",
"Rosalie", "Alyssa", "Ariel", "Finley", "Arabella", "Journee", "Hope",
"Leila", "Alana", "Gemma", "Vanessa", "Gracie", "Noelle", "Marley", "Elise",
"Presley", "Kamila", "Zara", "Amy", "Kayla", "Payton", "Blake", "Ruth",
"Alani", "Annabelle", "Sage", "Aspen", "Laila", "Lila", "Rachel", "Trinity",
"Daniela", "Alexa", "Lilly", "Lauren", "Elsie", "Margot", "Adelyn", "Zuri",
"Brooke", "Sawyer", "Lilah", "Lola", "Selena", "Mya", "Sydney", "Diana",
"Ana", "Vera", "Alayna", "Nyla", "Elaina", "Rebecca", "Angela", "Kali",
"Alivia", "Raegan", "Rowan", "Phoebe", "Camilla", "Joanna", "Malia"),
"last_names": (
"Smith", "Johnson", "Williams", "Brown", "Jones", "Garcia", "Miller",
"Davis", "Rodriguez", "Martinez", "Hernandez", "Lopez", "Gonzales",
"Wilson", "Anderson", "Thomas", "Taylor", "Moore", "Jackson", "Martin",
"Lee", "Perez", "Thompson", "White", "Harris", "Sanchez", "Clark",
"Ramirez", "Lewis", "Robinson", "Walker", "Young", "Allen", "King",
"Wright", "Scott", "Torres", "Nguyen", "Hill", "Flores", "Green", "Adams",
"Nelson", "Baker", "Hall", "Rivera", "Campbell", "Mitchell", "Carter",
"Roberts", "Gomez", "Phillips", "Evans", "Turner", "Diaz", "Parker", "Cruz",
"Edwards", "Collins", "Reyes", "Stewart", "Morris", "Morales", "Murphy",
"Cook", "Rogers", "Gutierrez", "Ortiz", "Morgan", "Cooper", "Peterson",
"Bailey", "Reed", "Kelly", "Howard", "Ramos", "Kim", "Cox", "Ward",
"Richardson", "Watson", "Brooks", "Chavez", "Wood", "James", "Bennet",
"Gray", "Mendoza", "Ruiz", "Hughes", "Price", "Alvarez", "Castillo",
"Sanders", "Patel", "Myers", "Long", "Ross", "Foster", "Jimenez"),
"city": (
"Nova", "Amali", "Fernanda", "Alia", "Angeli", "Elliot", "Justice",
"Maeyor", "Ceceli", "Glori", "Ariya", "Virginia", "Cheyenne", "Aleah",
"Jemma", "Henley", "Meredith", "Leyla", "Lennox", "Ensley", "Zahra",
"Reina", "Frankie", "Lylah", "Nalani", "Reyna", "Saige", "Ivanna", "Aleena",
"Emerie", "Ivory", "Leslie", "Alora", "Ashlyn", "Bethany", "Bonnie",
"Sasha", "Xiomara", "Salem", "Adrianna", "Dayana", "Clementine", "Karina",
"Karsyn", "Emmie", "Julie", "Julieta", "Briana", "Carly", "Macy", "Marie",
"Oaklee", "Christina", "Malaysia", "Ellis", "Irene", "Anne", "Anahi",
"Mara", "Rhea", "Davina", "Dallas", "Jayda", "Mariam", "Skyla", "Siena",
"Elora", "Marilyn", "Jazmin", "Megan", "Rosa", "Savanna", "Allyson",
"Milan", "Coraline", "Johanna", "Melany", "Chelsea", "Michaela", "Melina",
),
"append": (
"st.", "pl.", "rd.", "ln.", "ave.", "blvd.", "ct.", "plaza", "terrace",
"run", "trail"),
"state": (
"Alaska", "Alabama", "Arkansas", "American Samoa", "Arizona", "California",
"Colorado", "Connecticut", "D.C.", "Delaware", "Florida",
"Georgia", "Guam", "Hawaii", "Iowa", "Idaho", "Illinois", "Indiana",
"Kansas", "Kentucky", "Louisiana", "Massachusetts", "Maryland", "Maine",
"Minnesota", "Missouri", "Mississippi", "Montana", "North Carolina",
"North Dakota", "Nebraska", "New Hampshire", "New Jersey", "New Mexico",
"Nevada", "New York", "Ohio", "Oklahoma", "Oregon", "Pennsylvania",
"Puerto Rico", "Rhode Island", "South Carolina", "South Dakota",
"Tennessee", "Texas", "Utah", "Virginia", "Virgin Islands", "Vermont",
"Washington", "Wisconsin", "West Virginia", "Wyoming", "Michigan"
),
"industry":
("unsure", "health and wellness", "data storage and security", "customer relationship management", "travel", "accounting and finance",
"application and data integration", "human resources and workforce management", "supply chain and logistics", "food and grocery",
"web development", "lighting and LED", "infrastructure and hosting", "collaboration and project management", "data and broadband",
"music", "real estate"),
"time_of_day":
("Morning: 6am-10am",
"noon: 10am-2pm",
"afternoon: 2pm-6pm",
"Evening: 6pm-9pm"),
}
gend = {"gender":
("female", "male", "transgender", "non-binary/non-conforming", "prefer not to say"),
}
lang = {"language_preference":
("english", "spanish", "chinese", "other")
}
exp = {"experience":
("none", "beginner", "intermediate", "advanced"),
}
tech = {"tech_stack":
("JavaScript", "HTML", "CSS", "Python", "SQL", "React", "Redux", "Java", "Node.js", "Typescript",
"C#", "Bash/Shell", "C++", "PHP", "C", "Powershell", "Go", "Kotlin", "Rust", "Ruby",
"Dart", "Assembly", "Swift", "R", "Redux"), "new_stack": "unsure"
}
#From searchlight
soft_skills = { "career" :
("Coachability", "Attention_to_Detail", "Hardworking", "Creativity",
"Dependability", "Strategic Thinking", "Collaboration", "Trustworthiness",
"Enthusiasm", "Persuasion", "Empathy", "Charisma", "Active Listening", "Humility",
"Critical Thinking", "Adaptability", "Fast Learner", "Managing Stress",
"Being a Self-Starter","Personable", "Curiosity", "Emotional Intelligence",
"Poise", "Ambition", "Handling Ambiguity", "Competitiveness", "Methodical",
"Customer-orientation", "Decisiveness", "Conscientiousness", "Teaching Others",
"Independence", "Intelligence", "Intuition", "Good Judgement", "Optimism",
"Persistence", "Problem Solving", "Results-driven", "Risk-taking", "Resourcefulness")
}
ind = {"industry":
("unsure", "health and wellness", "data storage and security", "customer relationship management", "travel", "accounting and finance",
"application and data integration", "human resources and workforce management", "supply chain and logistics", "food and grocery",
"web development", "lighting and LED", "infrastructure and hosting", "collaboration and project management", "data and broadband",
"music", "real estate"),
}
tizo = {"time_zone":
("EST(New York)", "CST(Chicago)", "MST(Denver)", "PST(Los Angeles)",
"GMT(London)", "AST(Saudi Arabia)", "JST(Japan)"),
}
tod = {"time_of_day":
("Morning: 6am-10am",
"noon: 10am-2pm",
"afternoon: 2pm-6pm",
"Evening: 6pm-9pm"),
}
#some disclaimer here: your application will not be denied on the basis of stating that you are not comfortable sharing
conv = {"convictions":
("not comfortable sharing", "misdemeanor", "felony", "none"),
}
#some disclaimer here: your application will not be denied on the basis of stating that you are not comfortable sharing
toc = {"type_of_crime":
("not comfortable sharing","crime against a person", "crime against property", "inchoate", "statutory", "financial", "cyber", "other"),
"other":("N/A", "not comfrotable sharing")
}
# Are Underdog Devs advertising? Will they be? This should be populated with wherever they are investing in marketing
refer = {"referred":
("friend", "mentor", "google", "facebook", "linkedin"),
}
```
# Combined Application Features
- Personal Information
  - Profile ID
  - First Name / Last Name
  - Email / Phone Number
  - Gender
  - Date of birth
  - Street Address / zipcode / city / state
  - Time Zone
  - Veteran Status
  - Multilingual? Language preference
  - Convictions / type of crime
  - Financial assistance needed / material resources needed
    (more stringent check-ins and address may be needed if this is a yes)
- Career Information
  - Tech/Career/Life/All
  - Tech Stack
  - Years of Experience per stack
  - Industry
  - Hobbies / Interests
  - Number of Mentees
  - Time of day
- Other:
  - How did you hear about us?
  - Anything you want us to know
  - date_submitted
# Functions to call in classes
### Accurately gendered names
```
# FlexCat picks random values from the mock dictionary by category key
random_name = FlexCat(mock_dict, key_bias = "flat_uniform")
def gender():
if percent_true(75):
return "male"
if percent_true(85):
return "female"
if percent_true(20):
return "transgender"
if percent_true(75):
return "non_binary"
else:
return "prefer_not_to_say"
def firstname(gender_value):
    """Pick a first name consistent with the supplied gender value."""
    if gender_value == "male":
        return random_name("male_first_names")
    if gender_value == "female":
        return random_name("female_first_names")
    # For other gender values, draw from either first-name list.
    category = "male_first_names" if percent_true(50) else "female_first_names"
    return random_name(category)
```
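To sanity-check the weighting, it can help to sample the generator and look at the empirical shares (a quick sketch; the MonkeyScope `distribution` helper imported above can produce a similar report):

```python
from collections import Counter

# Empirical share of each gender value over 10,000 draws.
sample = Counter(gender() for _ in range(10_000))
total = sum(sample.values())
for value, count in sample.most_common():
    print(f'{value:20s} {count / total:6.2%}')

# MonkeyScope alternative (imported at the top of this notebook):
# distribution(gender)
```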
## Weights and languages from Statista_Data Notebook from Dan Kositzke
```
mentor_stack = ["JavaScript", "HTML", "CSS", "Python", "SQL", "React", "Redux", "Java", "Node.js", "Typescript",
"C#", "Bash/Shell", "C++", "PHP", "C", "Powershell", "Go", "Kotlin", "Rust", "Ruby",
"Dart", "Assembly", "Swift", "R", "Redux"]
# Note that "Unsure" appears in this list but not in the mentor list.
mentee_stack = ["JavaScript", "HTML", "CSS", "Python", "SQL", "Unsure", "React", "Redux", "Java", "Node.js", "Typescript",
"C#", "Bash/Shell", "C++", "PHP", "C", "Powershell", "Go", "Redux", "Rust", "Ruby",
"Dart", "Assembly", "Swift", "R"]
weights = [64.96, 56.07, 48.24, 47.08, 35.35, 33.91, 30.19, 27.86,
27.13, 24.31, 21.98, 21.01, 10.75, 9.55, 8.32, 7.03, 6.75, 6.02, 5.61, 5.1,
5.07, 4.66, 4.66, 3.01, 2.8]
mentor_weighted = RelativeWeightedChoice(zip(weights, mentor_stack))
mentee_weighted = RelativeWeightedChoice(zip(weights, mentee_stack))
```
### statistically accurate weighted techstacks
```
def techStack():
techstack = []
for i in range(5):
techstack += [mentor_weighted()]
return techstack
techStack()
def newStack():
newstack = []
for i in range(5):
newstack += [mentee_weighted()]
return newstack
```
### Simple Functions
```
timezone = FlexCat(tizo, key_bias= "front_linear", val_bias = "front_poisson")
language_preference = FlexCat(lang, key_bias = "front_linear", val_bias = "front_gauss")
financialaid = percent_true(50.0)
```
### Convictions and type of crime related
```
convictions = FlexCat(conv, val_bias = "flat_uniform")
typecrime = FlexCat(toc, val_bias= "flat_uniform")
typecrime(cat_key="other")
def type_of_crime(conviction):
    """Return a crime type consistent with the supplied conviction value."""
    if conviction in ("none", "not comfortable sharing"):
        return "N/A"
    return typecrime(cat_key="type_of_crime")
```
# Classes
```
class Mentee:
def __init__(self):
self.profile_id = f"mentee{random_int(1111111,999999999)}"
        self.gender = gender()
        self.first_name = firstname(self.gender)  # first name now matches the gender field
        self.last_name = random_name("last_names")
        self.email = f"{self.first_name}_{self.last_name}{random_int(1,1000)}@fake.com"
        self.phone_number = f"({random_int(100, 999)})-{random_int(100, 999)}-{random_int(1000, 9999)}"
self.timezone = timezone()
self.street_address = f"{random_int(11,99999)} {random_name('last_names')} {random_name('append')}"
self.city = random_name("city")
self.state = random_name("state")
self.veteran_status = percent_true(10.0)
self.language_preference = language_preference()
self.convictions = convictions()
        self.crimes = type_of_crime(self.convictions)  # crime type stays consistent with the conviction field
self.financialaid = percent_true(50.0)
self.new_stack = newStack()
def __repr__(self):
output = (
f"Profile ID: {self.profile_id}",
f"First Name: {self.first_name}",
f"Last Name: {self.last_name}",
f"Gender: {self.gender}",
f"Language Preference: {self.language_preference}",
f"Veteran: {self.veteran_status}",
f"Email: {self.email}",
f"Phone Number: {self.phone_number}",
f"Street Address: {self.street_address}",
f"City: {self.city}",
f"State: {self.state}",
f"Time Zone: {self.timezone}",
f"Conviction: {self.convictions}",
f"Type of crime: {self.crimes}",
f"Financial Aid: {self.financialaid}",
f"Tech Stack: {self.new_stack}",
)
return "\n".join(output)
def to_dict(self):
return {
"Profile ID": self.profile_id,
"First Name": self.first_name,
"Last Name": self.last_name,
"Gender": self.gender,
"Language Preference": self.language_preference,
"Veteran": self.veteran_status,
"Email": self.email,
"Phone Number": self.phone_number,
"Street Address": self.street_address,
"City": self.city,
"State": self.state,
"Time Zone": self.timezone,
"Conviction": self.convictions,
"Type of crime": self.crimes,
"Financial Aid": self.financialaid,
"Tech Stack": self.new_stack,
}
class Mentor:
def __init__(self):
self.profile_id = f"mentor{random_int(1111111,999999999)}"
        self.gender = gender()
        self.first_name = firstname(self.gender)  # first name now matches the gender field
        self.last_name = random_name("last_names")
        self.email = f"{self.first_name}_{self.last_name}{random_int(1,1000)}@fake.com"
        self.phone_number = f"({random_int(100, 999)})-{random_int(100, 999)}-{random_int(1000, 9999)}"
self.street_address = f"{random_int(11,99999)} {random_name('last_names')} {random_name('append')}"
self.city = random_name("city")
self.state = random_name("state")
self.timezone = timezone()
self.veteran_status = percent_true(10.0)
self.language_preference = language_preference()
self.tech_stack = techStack()
def __repr__(self):
output = (
f"Profile ID: {self.profile_id}",
f"First Name: {self.first_name}",
f"Last Name: {self.last_name}",
f"Gender: {self.gender}",
f"Language Preference: {self.language_preference}",
f"Veteran: {self.veteran_status}",
f"Email: {self.email}",
f"Phone Number: {self.phone_number}",
f"Street Address: {self.street_address}",
f"City: {self.city}",
f"State: {self.state}",
f"Time Zone: {self.timezone}",
f"Tech Stack: {self.tech_stack}",
)
return "\n".join(output)
def to_dict(self):
return {
"Profile ID": self.profile_id,
"First Name": self.first_name,
"Last Name": self.last_name,
"Gender": self.gender,
"Language Preference": self.language_preference,
"Veteran": self.veteran_status,
"Email": self.email,
"Phone Number": self.phone_number,
"Street Address": self.street_address,
"City": self.city,
"State": self.state,
"Time Zone": self.timezone,
"Tech Stack": self.tech_stack,
}
mentee = Mentee()
mentee
mentor = Mentor()
mentor
```
# Populate a dataframe with mock data
```
mentee_df = pd.DataFrame(Mentee().to_dict() for i in range(1000))
mentee_df.head()
mentor_df = pd.DataFrame(Mentor().to_dict() for i in range(1000))
mentor_df.head(5)
```
#
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
```
from __future__ import print_function, division
import thinkstats2
import thinkplot
import pandas as pd
import numpy as np
from fractions import Fraction
%matplotlib inline
def scalar_product(x, y):
x = np.asarray(x)
y = np.asarray(y)
return np.sum(x * y)
scalar_product([1,2,3], (4,5,6))
scalar_product([1,2,3], 2)
scalar_product([1,2,3], [2])
try:
scalar_product([1,2,3], (4,5,6,7))
except ValueError as e:
print(e)
class ArrayWrapper:
def __init__(self, array):
self.array = np.asarray(array)
def __eq__(self, other):
return np.array_equal(self.array, other.array)
def __add__(self, other):
return self.__class__(self.array + other.array)
def __sub__(self, other):
return self.__class__(self.array - other.array)
def __str__(self):
return str(self.array)
def __repr__(self):
return '%s(\n%s)' % (self.__class__.__name__, str(self.array))
def __len__(self):
return len(self.array)
def __getitem__(self, index):
return self.array[index]
def __setitem__(self, index, elt):
self.array[index] = elt
@property
def t(self):
return self.__class__(self.array.transpose())
class Vector(ArrayWrapper):
def __mul__(self, other):
return scalar_product(self.array, other.array)
def random_array(*shape):
return np.random.randint(1, 10, shape)
x = Vector(random_array(3))
x
x[0], x[1], x[2]
x[1] += 1
for elt in x:
print(elt)
y = Vector(x.array)
y
x == y
x.t
x == x.t
y = Vector(random_array(3))
y
x == y
x+y
x-y
x*y
def mm_product(array1, array2):
dtype = np.result_type(array1, array2)
array = np.zeros((len(array1), len(array2)), dtype=dtype)
for i, row1 in enumerate(array1):
for j, row2 in enumerate(array2):
array[i][j] = scalar_product(row1, row2)
return array
class Matrix(ArrayWrapper):
def __mul__(self, other):
return self.__class__(mm_product(self.array, other.t.array))
def __truediv__(self, other):
return self.__class__(np.linalg.solve(self.array, other.array.flat))
A = Matrix(random_array(3, 3))
A
len(A)
for row in A:
print(row)
B = Matrix(random_array(3, 3))
B
A+B
A-B
A*B
A.array.dot(B.array)
x = Vector(random_array(3))
x
A*x
def mv_product(A, x):
dtype = np.result_type(A, x)
array = np.zeros(len(A), dtype=dtype)
for i, row in enumerate(A):
array[i] = scalar_product(row, x)
return Vector(array)
mv_product(A.array, x.array)
A.array.dot(x.array)
x = Matrix(random_array(3, 1))
x
x == x.t
x.t * x
x * x.t
x * x
A * x
A.array.dot(x.array)
scalar = Matrix([[2]])
scalar
scalar == scalar.t
scalar * scalar
x * scalar
A * scalar
b = A * x
b
b.array
np.linalg.solve(A.array, b.array)
print(A / b)
A.array.shape
b.array.shape
m = np.hstack([A.array, b.array]).astype(Fraction)
print(m)
m[1] -= m[0]
print(m)
m[:, :-1]
m[:, -1]
def solve_augmented(m):
    # Split the augmented matrix [A | b] and solve A x = b.
    m = m.astype(float)
    return np.linalg.solve(m[:, :-1], m[:,-1])
print(solve_augmented(m))
row1 = 0
row2 = 1
col = 0
pivot = m[row1, col]
victim = m[row2, col]
m[row1], pivot, victim, m[row1] * Fraction(victim, pivot)
m[row2] -= m[row1] * Fraction(victim, pivot)
print(m)
def clobber(m, row1, row2, col):
    # Eliminate the entry at (row2, col) by subtracting a multiple of row1.
    pivot = m[row1, col]
    victim = m[row2, col]
    m[row2] -= m[row1] * Fraction(victim, pivot)
clobber(m, 0, 2, 0)
print(m)
clobber(m, 1, 2, 1)
print(m)
m[2] /= m[2,2]
print(m)
```
```
!pip install eli5
import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import cross_val_score
import eli5
from eli5.sklearn import PermutationImportance
# To read the feature dictionaries stored as strings:
from ast import literal_eval
from tqdm import tqdm_notebook
cd "/content/drive/My Drive/Colab Notebooks/Matrix_DW"
ls data
df = pd.read_csv('data/men_shoes.csv', low_memory=False)
def run_model(feats, model = DecisionTreeRegressor(max_depth=5)):
X = df[ feats ].values
y = df['prices_amountmin'].values
scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error')
return np.mean(scores), np.std(scores)
df['brand_cat'] = df['brand'].map(lambda x: str(x).lower()).factorize()[0]
run_model(['brand_cat'])
model = RandomForestRegressor(max_depth=5, n_estimators=100, random_state=0)
run_model(['brand_cat'], model)
df.head()
df.features.head().values
def parse_features(x):
output_dict = {}
if str(x) == 'nan': return output_dict
features = literal_eval(x.replace('\\"', '"'))
for item in features:
key = item['key'].lower().strip()
value = item['value'][0].lower().strip()
output_dict[key] = value
return output_dict
df['features_parsed'] = df['features'].map(parse_features)
keys = set()
df['features_parsed'].map( lambda x: keys.update(x.keys()) )
len(keys)
def get_name_feat(key):
return 'feat_' + key
# tqdm_notebook -> progress bar
for key in tqdm_notebook(keys):
df[get_name_feat(key)] = df.features_parsed.map(lambda feats: feats[key] if key in feats else np.nan)
df.columns
keys_stat = {}
# Percentage of rows in which each parsed feature is present.
for key in keys:
    keys_stat[key] = df[ False == df[ get_name_feat(key) ].isnull() ].shape[0] / df.shape[0] * 100
{k:v for k,v in keys_stat.items() if v > 30 }
df['feat_brand_cat'] = df['feat_brand'].factorize()[0]
df['feat_color_cat'] = df['feat_color'].factorize()[0]
df['feat_gender_cat'] = df['feat_gender'].factorize()[0]
df['feat_manufacturer part number_cat'] = df['feat_manufacturer part number'].factorize()[0]
df['feat_material_cat'] = df['feat_material'].factorize()[0]
df['feat_sport_cat'] = df['feat_sport'].factorize()[0]
df['feat_style_cat'] = df['feat_style'].factorize()[0]
df['brand'] = df['brand'].map(lambda x: str(x).lower() )
df[ df.brand != df.feat_brand ][ ['brand', 'feat_brand'] ].head()
feats = ['brand_cat', 'feat_brand_cat', 'feat_gender_cat', 'feat_material_cat', 'feat_sport_cat', 'feat_style_cat']
model = RandomForestRegressor(max_depth=5, n_estimators=100)
run_model(feats , model)
X = df[ feats ].values
y = df['prices_amountmin'].values
m = RandomForestRegressor(max_depth=5, n_estimators=100, random_state=0)
m.fit(X, y)
perm = PermutationImportance(m, random_state=1).fit(X, y);
eli5.show_weights(perm, feature_names=feats)
df['brand'].value_counts()
df[ df['brand'] == 'nike' ].features_parsed.head().values
```
# Adding to the API Documentation
Documentation is an integral part of every collaborative software project. Good documentation not only encourages users of the package to try out different functionalities, but it also makes maintaining and expanding code significantly easier. Every code contribution to the package must come with appropriate documentation of the API. This guide details how to do this.
## Docstrings
The main form of documentation is docstrings: multi-line comments beneath a class or function definition, written in a specific syntax, which detail its functionality. This package uses the
[NumPy docstring format](https://numpydoc.readthedocs.io/en/latest/format.html#numpydoc-docstring-guide). As a rule, all functions which are exposed to the user *must* have appropriate docstrings. Below is an example of a docstring for a probabilistic numerical method.
```
# %load -r 1-162 ../../../src/probnum/linalg/linearsolvers/linearsolvers.py
"""Probabilistic numerical methods for solving linear systems.
This module provides routines to solve linear systems of equations in a
Bayesian framework. This means that a prior distribution over elements
of the linear system can be provided and is updated with information
collected by the solvers to return a posterior distribution.
"""
import warnings
from typing import Callable, Dict, Optional, Tuple, Union
import numpy as np
import scipy.sparse
from probnum import linops, randvars, utils
from probnum.linalg.linearsolvers.matrixbased import (
AsymmetricMatrixBasedSolver,
NoisySymmetricMatrixBasedSolver,
SymmetricMatrixBasedSolver,
)
from probnum.linalg.linearsolvers.solutionbased import SolutionBasedSolver
# Type aliases
SquareLinOp = Union[
np.ndarray, scipy.sparse.spmatrix, linops.LinearOperator, "randvars.RandomVariable"
]
RandomVecMat = Union[np.ndarray, "randvars.RandomVariable"]
def problinsolve(
A: SquareLinOp,
b: RandomVecMat,
A0: Optional[SquareLinOp] = None,
Ainv0: Optional[SquareLinOp] = None,
x0: Optional[RandomVecMat] = None,
assume_A: str = "sympos",
maxiter: Optional[int] = None,
atol: float = 10 ** -6,
rtol: float = 10 ** -6,
callback: Optional[Callable] = None,
**kwargs
) -> Tuple[
"randvars.RandomVariable",
"randvars.RandomVariable",
"randvars.RandomVariable",
Dict,
]:
"""Infer a solution to the linear system :math:`A x = b` in a Bayesian framework.
Probabilistic linear solvers infer solutions to problems of the form
.. math:: Ax=b,
where :math:`A \\in \\mathbb{R}^{n \\times n}` and :math:`b \\in \\mathbb{R}^{n}`.
They return a probability measure which quantifies uncertainty in the output arising
from finite computational resources. This solver can take prior information either
on the linear operator :math:`A` or its inverse :math:`H=A^{-1}` in the form of a
random variable ``A0`` or ``Ainv0`` and outputs a posterior belief over :math:`A` or
:math:`H`. This code implements the method described in Wenger et al. [1]_ based on
the work in Hennig et al. [2]_.
Parameters
----------
A :
*shape=(n, n)* -- A square linear operator (or matrix). Only matrix-vector
products :math:`v \\mapsto Av` are used internally.
b :
*shape=(n, ) or (n, nrhs)* -- Right-hand side vector, matrix or random
variable in :math:`A x = b`. For multiple right hand sides, ``nrhs`` problems
are solved sequentially with the posteriors over the matrices acting as priors
for subsequent solves. If the right-hand-side is assumed to be noisy, every
iteration of the solver samples a realization from ``b``.
A0 :
*shape=(n, n)* -- A square matrix, linear operator or random variable
representing the prior belief over the linear operator :math:`A`. If an array or
linear operator is given, a prior distribution is chosen automatically.
Ainv0 :
*shape=(n, n)* -- A square matrix, linear operator or random variable
representing the prior belief over the inverse :math:`H=A^{-1}`. This can be
viewed as taking the form of a pre-conditioner. If an array or linear operator
is given, a prior distribution is chosen automatically.
x0 :
*shape=(n, ) or (n, nrhs)* -- Prior belief for the solution of the linear
system. Will be ignored if ``Ainv0`` is given.
assume_A :
Assumptions on the linear operator which can influence solver choice and
behavior. The available options are (combinations of)
==================== =========
generic matrix ``gen``
symmetric ``sym``
positive definite ``pos``
(additive) noise ``noise``
==================== =========
maxiter :
Maximum number of iterations. Defaults to :math:`10n`, where :math:`n` is the
dimension of :math:`A`.
atol :
Absolute convergence tolerance.
rtol :
Relative convergence tolerance.
callback :
User-supplied function called after each iteration of the linear solver. It is
called as ``callback(xk, Ak, Ainvk, sk, yk, alphak, resid, **kwargs)`` and can
be used to return quantities from the iteration. Note that depending on the
function supplied, this can slow down the solver considerably.
kwargs : optional
Optional keyword arguments passed onto the solver iteration.
Returns
-------
x :
Approximate solution :math:`x` to the linear system. Shape of the return matches
the shape of ``b``.
A :
Posterior belief over the linear operator.
Ainv :
Posterior belief over the linear operator inverse :math:`H=A^{-1}`.
info :
Information on convergence of the solver.
Raises
------
ValueError
If size mismatches detected or input matrices are not square.
LinAlgError
If the matrix ``A`` is singular.
LinAlgWarning
If an ill-conditioned input ``A`` is detected.
Notes
-----
For a specific class of priors the posterior mean of :math:`x_k=Hb` coincides with
the iterates of the conjugate gradient method. The matrix-based view taken here
recovers the solution-based inference of :func:`bayescg` [3]_.
References
----------
.. [1] Wenger, J. and Hennig, P., Probabilistic Linear Solvers for Machine Learning,
*Advances in Neural Information Processing Systems (NeurIPS)*, 2020
.. [2] Hennig, P., Probabilistic Interpretation of Linear Solvers, *SIAM Journal on
Optimization*, 2015, 25, 234-260
.. [3] Bartels, S. et al., Probabilistic Linear Solvers: A Unifying View,
*Statistics and Computing*, 2019
See Also
--------
bayescg : Solve linear systems with prior information on the solution.
Examples
--------
>>> import numpy as np
>>> np.random.seed(1)
>>> n = 20
>>> A = np.random.rand(n, n)
>>> A = 0.5 * (A + A.T) + 5 * np.eye(n)
>>> b = np.random.rand(n)
>>> x, A, Ainv, info = problinsolve(A=A, b=b)
>>> print(info["iter"])
9
"""
```
**General Rules**
- Cover `Parameters`, `Returns`, `Raises` and `Examples` (in that order), if applicable, in every publicly visible docstring.
- Examples are tested via doctest. Ensure `doctest` does not fail by running the test suite; a command sketch follows this list.
- Include appropriate `References`, in particular for probabilistic numerical methods.
- Do not use docstrings as a crutch for spaghetti code!
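For orientation, here is a minimal sketch of how the doctest examples could be exercised locally, assuming a pytest-based setup (the project's tox environments remain the authoritative way to run the test suite):
```bash
# Collect and run the doctests embedded in the docstrings (illustrative path).
pytest --doctest-modules src/probnum/linalg
```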
**Parameters**
- Parameter types are automatically documented via type hints in the function signature.
- Always provide shape hints for objects with a `.shape` attribute in the following form:
```python
"""
Parameters
----------
arr :
*(shape=(m, ) or (m, n))* -- Parameter array of an example function.
"""
```
- Hyperparameters should have default values and explanations on how to choose them.
- For callables, provide the expected call signature as part of the docstring: `foobar(x, y, z, \*\*kwargs)`. The backslashes escape the asterisks so they lose their semantic meaning as markup. A short sketch follows this list.
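A minimal sketch of how a callable parameter might be documented, modeled on the ``callback`` argument in the example above (the parameter name and signature are purely illustrative):
```python
"""
Parameters
----------
callback :
    User-supplied function called after each iteration as
    ``callback(xk, resid, \*\*kwargs)``. Can be used to record
    quantities from the iteration.
"""
```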
**Style**
- Stick to the imperative style of writing in the docstring header (i.e. the first line); a short sketch follows this list.
  - Yes: "Compute the value".
  - No: "This function computes the value / Let's compute the value".
  The rest of the explanation talks about the function, e.g. "This function computes the value by computing another value".
- Use full sentences inside docstrings when describing something.
- Yes: "This value is irrelevant, because it is not being passed on"
- No: "Value irrelevant, not passed on".
- When in doubt, more explanation rather than less. A little text inside an example can be helpful, too.
- A little maths can go a long way, but too much usually adds confusion.
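A minimal sketch of a docstring that follows these style rules (the function is hypothetical and not part of the codebase):
```python
def estimate_error(samples):
    """Estimate the approximation error from posterior samples.

    This function computes a Monte Carlo estimate of the error by
    averaging the deviation over the given samples.
    """
```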
## Interface Documentation
Which functions and classes actually show up in the documentation is determined by an `__all__` statement in the corresponding `__init__.py` file inside a module. The order of this list is also reflected in the documentation. For example, `linalg` has the following `__init__.py`:
```
# %load ../../../src/probnum/linalg/__init__.py
"""Linear Algebra."""
from probnum.linalg.linearsolvers import *
# Public classes and functions. Order is reflected in documentation.
__all__ = [
"problinsolve",
"bayescg",
"ProbabilisticLinearSolver",
"MatrixBasedSolver",
"AsymmetricMatrixBasedSolver",
"SymmetricMatrixBasedSolver",
"SolutionBasedSolver",
]
# Set correct module paths. Corrects links and module paths in documentation.
ProbabilisticLinearSolver.__module__ = "probnum.linalg"
MatrixBasedSolver.__module__ = "probnum.linalg"
```
If you are documenting a subclass that has a different path in the file structure than its import path due to `__all__` statements, you can correct the links to superclasses in the documentation via the `.__module__` attribute.
## Sphinx
ProbNum uses [Sphinx](https://www.sphinx-doc.org/en/master/) to parse docstrings in the codebase automatically and to create its API documentation. You can configure Sphinx itself or its extensions in the `./docs/conf.py` file.
```
from IPython.display import Image
display(Image(filename="../img/developer_guides/sphinx_logo.png", embed=True))
```
ProbNum makes use of a number of Sphinx plugins to improve the API documentation, for example to parse this Jupyter notebook. The full list of used packages can be found in `./docs/sphinx-requirements.txt` and `./docs/notebook-requirements.txt`.
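To give an idea of the mechanism, such plugins are enabled by listing them in `./docs/conf.py`; the snippet below is only an illustration, and the authoritative extension list lives in the configuration and requirements files mentioned above.
```python
# docs/conf.py (illustrative excerpt, not the actual ProbNum configuration)
extensions = [
    "sphinx.ext.autodoc",   # pull API documentation from docstrings
    "sphinx.ext.napoleon",  # parse NumPy-style docstrings
    "nbsphinx",             # render Jupyter notebooks such as this one
]
```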
## Building and Viewing the Documentation
In order to build the documentation locally and view the HTML version of the API documentation, simply run:
```bash
tox -e docs
```
This creates a static web page under `./docs/_build/html/` which you can view in your browser by opening
`./docs/_build/html/intro.html`.
Alternatively, if you want to build the docs in your current environment, you can manually execute
```bash
cd docs
make clean
make html
```
For more information on `tox`, check out the [general development instructions](../development/contributing.rst).
<!-- ---
title: Machine Translation using PyTorch Ignite
weight: 2
date: 2021-10-27
downloads: true
tags:
- Machine Translation
- T5
- NLP
- Transformers
- Bleu Score
- seq2seq models
--- -->
# Machine Translation using PyTorch Ignite
This tutorial is a brief introduction to training a machine translation model (or any other seq2seq model) using PyTorch Ignite.
This notebook uses models, datasets and tokenizers from Hugging Face, so they can easily be replaced by other models from the 🤗 Hub.
<!--more-->
## Required Dependencies
```
%%capture
!pip install pytorch-ignite
!pip install transformers
!pip install datasets
!pip install sentencepiece
```
### For TPUs
```
# VERSION = !curl -s https://api.github.com/repos/pytorch/xla/releases/latest | grep -Po '"tag_name": "v\K.*?(?=")'
# VERSION = VERSION[0].rstrip('.0') # remove trailing zero
# !pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-{VERSION}-cp37-cp37m-linux_x86_64.whl
```
## Common Configuration
We maintain a config dictionary which can be extended or changed to store the parameters required during training. We can refer back to this config whenever we use these parameters later.
In this example we are using ``t5-small``, which has 60M parameters. The way T5 models work is that they take an input with a task-specific prefix. This prefix (like "Translate English to German") lets our model know which task it needs to perform. For more details refer to the original paper [here](https://arxiv.org/abs/1910.10683).
Here we train for a limited number of iterations per epoch and on a reduced dataset; this can be modified using the ``train_dataset_length`` and ``epoch_length`` config options.
```
config = {
"seed": 216,
"with_amp": False,
"num_epochs": 1,
"batch_size": 32,
"output_path_": "/content",
"model_name": "t5-small",
"tokenizer_name": "t5-small",
"freeze_encoder": False,
"num_workers": 4,
"weight_decay": 0.01,
"learning_rate": 1e-4,
"accumulation_steps": 1,
"epoch_length": 500,
"print_output_every": 50,
}
dataset_configs = {
"source_language":"English",
"source_text_id":"en",
"target_language":"German",
"target_text_id":"de",
"max_length": 80,
"train_dataset_length": -1,
"validation_dataset_length": 100,
"train_test_split": 0.3,
}
```
## Basic Setup
### Imports
```
import warnings
from datetime import datetime
from pathlib import Path
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from torch.cuda.amp import GradScaler, autocast
from torch.utils.data import random_split
import ignite
import ignite.distributed as idist
from ignite.contrib.engines import common
from ignite.engine import Engine, Events
from ignite.handlers import Checkpoint, global_step_from_engine
from ignite.metrics import Bleu
from ignite.utils import manual_seed, setup_logger
from datasets import load_dataset
from transformers import T5ForConditionalGeneration, AutoTokenizer
warnings.filterwarnings("ignore")
```
### Preparing data
We will be using the [news_commentary](https://github.com/huggingface/datasets/blob/master/datasets/news_commentary/news_commentary.py) dataset (English - German) from the 🤗 Hub for this example.
```
from datasets import load_dataset
dataset = load_dataset("news_commentary", "de-en")
dataset = dataset.shuffle(seed=config["seed"])
dataset = dataset["train"]
dataset = dataset.train_test_split(test_size=dataset_configs["train_test_split"])
train_dataset, validation_dataset = dataset["train"], dataset["test"]
print("Lengths")
print("\t Train Set - {}".format(len(train_dataset)))
print("\t Val Set - {}".format(len(validation_dataset)))
```
Having a look at a dataset sample.
```
print("Example of a Datapoint \n")
print(train_dataset[0], "\n")
```
### Tokenizer
The tokenizer needs to be defined to convert the input from strings to token ids. Machine translation tokenizers may need additional parameters about the source and target languages; refer [here](https://huggingface.co/transformers/model_doc/mbart.html#transformers.MBartTokenizer) for more info.
```
tokenizer = AutoTokenizer.from_pretrained(config["tokenizer_name"])
```
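As a quick optional check of what the tokenizer produces, we can encode a single sentence with the task prefix and decode it back. This is purely illustrative and not required for training.
```
sample = "translate English to German: The weather is nice today."
encoded = tokenizer(
    sample,
    max_length=dataset_configs["max_length"],
    padding="max_length",
    truncation=True,
    return_tensors="pt",
)
print(encoded["input_ids"].shape)  # (1, max_length)
print(tokenizer.decode(encoded["input_ids"][0], skip_special_tokens=True))
```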
## Dataset Class
The dataset class tokenizes the data and returns a dictionary with inputs and targets.
If you want to train on a subset of the data, modify ``train_dataset_length`` and ``validation_dataset_length`` in the dataset configs. Keep them at -1 to use the whole dataset.
```
class TransformerDataset(torch.utils.data.Dataset):
def __init__(
self, data, src_text_id, tgt_text_id, tokenizer, max_length, length_dataset
):
self.data = data
self.src_text_id = src_text_id
self.tgt_text_id = tgt_text_id
self.tokenizer = tokenizer
self.max_length = max_length
self.length_dataset = length_dataset if length_dataset != -1 else len(self.data)
def __getitem__(self, idx):
# t5 models require a prefix describing the task
task_prefix = "translate {} to {}: ".format(dataset_configs["source_language"], dataset_configs["target_language"])
src_text = [task_prefix + str(self.data[idx]["translation"][self.src_text_id])]
tgt_text = [str(self.data[idx]["translation"][self.tgt_text_id])]
input_txt_tokenized = self.tokenizer(
src_text, max_length=self.max_length, padding="max_length", truncation=True
)
with self.tokenizer.as_target_tokenizer():
tgt_text_tokenized = self.tokenizer(
tgt_text,
max_length=self.max_length,
padding="max_length",
truncation=True,
)
# The pad token in target is replaced with -100 so that it doesn't get added to loss.
tgt_text_tokenized = [
[(l if l != self.tokenizer.pad_token_id else -100) for l in label]
for label in tgt_text_tokenized.input_ids
]
input_txt_tokenized.update({"tgt": tgt_text_tokenized[0]})
batch = {
k: torch.tensor(v).squeeze(0) for (k, v) in input_txt_tokenized.items()
}
return batch
def __len__(self):
return self.length_dataset
train_data = TransformerDataset(
train_dataset,
dataset_configs["source_text_id"],
dataset_configs["target_text_id"],
tokenizer,
dataset_configs["max_length"],
dataset_configs["train_dataset_length"],
)
val_data = TransformerDataset(
validation_dataset,
dataset_configs["source_text_id"],
dataset_configs["target_text_id"],
tokenizer,
dataset_configs["max_length"],
dataset_configs["validation_dataset_length"],
)
```
## Trainer
The trainer takes a batch of inputs, passes it through the model (along with the targets in this case) and computes the loss.
#### Mixed Precision
The forward pass is wrapped in the autocast context manager for mixed precision training. It's turned off in this example as there won't be any memory advantages with ``batch_size`` 1 or 2. Change the ``with_amp`` flag in config to turn it on.
#### Gradient Accumulation
Gradient accumulation is implemented because a batch size of 1 would otherwise lead to noisy updates. The ``accumulation_steps`` value in the config defines the number of steps over which the gradient is accumulated.
#### Trainer Handlers
Handlers can be defined and attached directly to the trainer engine. Here we also make use of a special function : `setup_common_training_handlers` which has a lot of the commonly used, useful handlers (like `save_every_iters`, `clear_cuda_cache` etc) already defined. To know more about this function, refer to the docs [here](https://pytorch.org/ignite/contrib/engines.html#ignite.contrib.engines.common.setup_common_training_handlers).
```
# Create Trainer
def create_trainer(model, optimizer, with_amp, train_sampler, logger):
device = idist.device()
scaler = GradScaler(enabled=with_amp)
def train_step(engine, batch):
model.train()
if batch["tgt"].device != device:
batch = {
k: v.to(device, non_blocking=True, dtype=torch.long)
for (k, v) in batch.items()
}
src_ids = batch["input_ids"]
src_attention_mask = batch["attention_mask"]
tgt = batch["tgt"]
with autocast(enabled=with_amp):
y = model(input_ids=src_ids, attention_mask=src_attention_mask, labels=tgt)
loss = y["loss"]
loss /= config["accumulation_steps"]
scaler.scale(loss).backward()
if engine.state.iteration % config["accumulation_steps"] == 0:
scaler.step(optimizer)
scaler.update()
optimizer.zero_grad()
return {"batch loss": loss.item()}
trainer = Engine(train_step)
trainer.logger = logger
metric_names = ["batch loss"]
common.setup_common_training_handlers(
trainer=trainer,
train_sampler=train_sampler,
output_names=metric_names,
clear_cuda_cache=False,
with_pbars=True,
)
return trainer
```
## Evaluator
Similar to the trainer, we create an evaluator for the validation step. Here we calculate metrics (like the Bleu score). The Bleu score requires the decoded sentences rather than the logits; the ``ids_to_clean_text`` function is used for that conversion.
The ``print_output_every`` flag can be changed if you want to change the frequency of printing output sentences.
```
# Let's now setup evaluator engine to perform model's validation and compute metrics
def create_evaluator(model, tokenizer, metrics, logger, tag="val"):
device = idist.device()
def ids_to_clean_text(generated_ids):
gen_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
return list(map(str.strip, gen_text))
@torch.no_grad()
def evaluate_step(engine, batch):
model.eval()
if batch["tgt"].device != device:
batch = {
k: v.to(device, non_blocking=True, dtype=torch.long)
for (k, v) in batch.items()
}
src_ids = batch["input_ids"]
src_attention_mask = batch["attention_mask"]
tgt = batch["tgt"]
if idist.get_world_size() > 1:
y_pred = model.module.generate(input_ids=src_ids, attention_mask=src_attention_mask)
else:
y_pred = model.generate(input_ids=src_ids, attention_mask=src_attention_mask)
tgt = torch.where(tgt != -100, tgt, tokenizer.pad_token_id)
preds = ids_to_clean_text(y_pred)
tgt = ids_to_clean_text(tgt)
preds = [_preds.split() for _preds in preds]
tgt = [[_tgt.split()] for _tgt in tgt]
if engine.state.iteration % config["print_output_every"] == 0:
logger.info(f'\n Preds : {" ".join(preds[0])} \n')
logger.info(f'\n Target : {" ".join(tgt[0][0])} \n')
return preds, tgt
evaluator = Engine(evaluate_step)
for name, metric in metrics.items():
metric.attach(evaluator, name)
return evaluator
```
## Initializing Functions
Here we initialize the model and optimizer.
The ``get_dataloaders`` function returns dataloaders for training and validation.
```
def freeze_params(model):
for par in model.parameters():
par.requires_grad = False
def initialize():
model = T5ForConditionalGeneration.from_pretrained(config["model_name"])
lr = config["learning_rate"] * idist.get_world_size()
no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
{
"params": [
p
for n, p in model.named_parameters()
if not any(nd in n for nd in no_decay)
],
"weight_decay": config["weight_decay"],
},
{
"params": [
p
for n, p in model.named_parameters()
if any(nd in n for nd in no_decay)
],
"weight_decay": 0.0,
},
]
if config["freeze_encoder"]:
freeze_params(model.get_encoder())
model = idist.auto_model(model)
optimizer = optim.AdamW(optimizer_grouped_parameters, lr=lr)
optimizer = idist.auto_optim(optimizer)
return model, optimizer
def get_dataloaders(train_dataset, val_dataset):
# Setup data loader also adapted to distributed config: nccl, gloo, xla-tpu
train_loader = idist.auto_dataloader(
train_dataset,
batch_size=config["batch_size"],
num_workers=config["num_workers"],
shuffle=True,
drop_last=True,
)
val_loader = idist.auto_dataloader(
val_dataset,
batch_size=2 * config["batch_size"],
num_workers=config["num_workers"],
shuffle=False,
)
return train_loader, val_loader
```
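As a quick sanity check (an assumed helper, not part of the notebook), one can verify the effect of ``config["freeze_encoder"]`` by counting the parameters that remain trainable after ``initialize()``.
```
# Assumed helper: count trainable parameters to confirm the effect of freeze_params.
def count_trainable_params(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Example usage (sketch):
# model, optimizer = initialize()
# print(f"Trainable parameters: {count_trainable_params(model):,}")
```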
## Logging Handlers
This step is optional; however, we can pass a ``setup_logger()`` object to ``log_basic_info()`` and log all basic information, such as the library versions, the current configuration, the device and backend used by the current process (identified by its local rank), and the number of processes (the ``world size``). ``idist`` (``ignite.distributed``) provides several utility functions like ``get_local_rank()``, ``backend()``, ``get_world_size()``, etc. to make this possible.
``log_metrics_eval`` is used to log the metrics and the elapsed time of an evaluation run.
``get_save_handler`` returns the output path that the ``Checkpoint`` handler uses to save the model.
```
def log_metrics_eval(logger, epoch, elapsed, tag, metrics):
metrics_output = "\n".join([f"\t{k}: {v}" for k, v in metrics.items()])
logger.info(
f"\nEpoch {epoch} - Evaluation time (seconds): {elapsed:.2f} - {tag} metrics:\n {metrics_output}"
)
def log_basic_info(logger, config):
logger.info(f"Train on CIFAR10")
logger.info(f"- PyTorch version: {torch.__version__}")
logger.info(f"- Ignite version: {ignite.__version__}")
if torch.cuda.is_available():
# explicitly import cudnn as torch.backends.cudnn can not be pickled with hvd spawning procs
from torch.backends import cudnn
logger.info(
f"- GPU Device: {torch.cuda.get_device_name(idist.get_local_rank())}"
)
logger.info(f"- CUDA version: {torch.version.cuda}")
logger.info(f"- CUDNN version: {cudnn.version()}")
logger.info("\n")
logger.info("Configuration:")
for key, value in config.items():
logger.info(f"\t{key}: {value}")
logger.info("\n")
if idist.get_world_size() > 1:
logger.info("\nDistributed setting:")
logger.info(f"\tbackend: {idist.backend()}")
logger.info(f"\tworld size: {idist.get_world_size()}")
logger.info("\n")
def get_save_handler(config):
return config["output_path_"]
```
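For reference, the ``idist`` utilities mentioned above can also be queried directly; a minimal sketch:
```
# Minimal sketch: querying ignite.distributed utilities directly.
import ignite.distributed as idist

print(f"backend: {idist.backend()}")            # None when no distributed backend is initialized
print(f"world size: {idist.get_world_size()}")  # 1 in a single-process run
print(f"local rank: {idist.get_local_rank()}")
print(f"device: {idist.device()}")
```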
## Begin Training
This is where the main logic resides, i.e. we call all the functions defined above from within ``training()``:
1. Basic Setup
   1. We set a ``manual_seed()`` and ``setup_logger()``, then log all basic information.
   2. Initialize the dataloaders, model and optimizer.
2. We use the above objects to create a trainer.
3. Evaluator
   1. Define some relevant Ignite metrics like ``Bleu()``.
   2. Create the evaluator ``evaluator`` to compute metrics on ``val_loader``.
   3. Define ``run_validation()`` to run ``evaluator`` on the validation dataloader and log its metrics, and attach it to the trainer so that it runs at the start of training, after every epoch, and on completion.
4. Set up TensorBoard logging using ``setup_tb_logging()`` on the master process so that the validation metrics and the learning rate are logged.
5. Define a ``Checkpoint()`` object to store the two best models (``n_saved``) ranked by validation BLEU (the ``bleu`` metric defined above) and attach it to ``evaluator`` so that it is executed every time ``evaluator`` runs.
6. Run training on ``train_loader`` for ``num_epochs``.
7. Close the TensorBoard logger once training is completed.
```
def training(local_rank):
rank = idist.get_rank()
manual_seed(config["seed"] + rank)
device = idist.device()
logger = setup_logger(name="NMT", distributed_rank=local_rank)
log_basic_info(logger, config)
train_loader, val_loader = get_dataloaders(train_data, val_data)
model, optimizer = initialize()
trainer = create_trainer(
model, optimizer, config["with_amp"], train_loader.sampler, logger
)
metrics = {
"bleu": Bleu(ngram=4, smooth="smooth1", average="micro"),
"bleu_smooth_2": Bleu(ngram=4, smooth="smooth2", average="micro"),
}
evaluator = create_evaluator(
model, tokenizer, metrics, logger, tag="val"
)
@trainer.on(Events.EPOCH_COMPLETED(every=1) | Events.COMPLETED | Events.STARTED)
def run_validation(engine):
epoch = trainer.state.epoch
state = evaluator.run(val_loader)
log_metrics_eval(
logger, epoch, state.times["COMPLETED"], "Validation", state.metrics
)
if rank == 0:
now = datetime.now().strftime("%Y%m%d-%H%M%S")
folder_name = f"Translation_Model_backend-{idist.backend()}-{idist.get_world_size()}_{now}"
output_path = Path(config["output_path_"]) / folder_name
if not output_path.exists():
output_path.mkdir(parents=True)
logger.info(f"Output path: {output_path}")
evaluators = {"val": evaluator}
tb_logger = common.setup_tb_logging(
config["output_path_"], trainer, optimizer, evaluators=evaluators
)
best_model_handler = Checkpoint(
{"model": model},
get_save_handler(config),
filename_prefix="best",
n_saved=2,
global_step_transform=global_step_from_engine(trainer),
score_name="val_bleu",
score_function=Checkpoint.get_default_score_fn("bleu"),
)
evaluator.add_event_handler(Events.COMPLETED, best_model_handler)
try:
state = trainer.run(
train_loader,
max_epochs=config["num_epochs"],
epoch_length=config["epoch_length"],
)
except Exception as e:
logger.exception("")
raise e
if rank == 0:
tb_logger.close()
```
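After training, the best checkpoint saved by ``best_model_handler`` can be restored with ``Checkpoint.load_objects``. The snippet below is a sketch: the file name is hypothetical (``Checkpoint`` generates names of the form ``best_model_<step>_val_bleu=<score>.pt`` under the save path).
```
# Sketch with an assumed file name: restore the best model saved by best_model_handler.
import torch
from ignite.handlers import Checkpoint

model, _ = initialize()  # re-create the model architecture
checkpoint = torch.load("/content/best_model_1_val_bleu=0.1234.pt", map_location="cpu")  # hypothetical path
Checkpoint.load_objects(to_load={"model": model}, checkpoint=checkpoint)
```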
## Running
To run on TPU, change ``backend`` to ``"xla-tpu"`` and ``nproc_per_node`` to 1 or 8.
```
def run():
with idist.Parallel(backend=None, nproc_per_node=None) as parallel:
parallel.run(training)
if __name__ == '__main__':
run()
```
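For example (values assumed), the same training function could be launched on two GPUs with the NCCL backend as follows:
```
# Sketch with assumed values: a 2-GPU run using the NCCL backend.
def run_ddp():
    with idist.Parallel(backend="nccl", nproc_per_node=2) as parallel:
        parallel.run(training)
```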
### EXERCISE:
- ***Create a quiz game. Make an HTTP request to the Open Trivia API at each round of the game to get a new question and present it to the user to answer. At the end of each round, ask the user if they want to play again. Keep playing until the user types "quit".***
- ***At each round, don't forget to tell the user whether their answer is correct, and keep track of the user's score.***
```
import requests
import json
import pprint
import random
import html
score_correct = 0
score_incorrect = 0
url = 'https://opentdb.com/api.php?amount=1'
endGame = ""
while endGame != "quit" :
r = requests.get(url)
if (r.status_code != 200) :
endGame = input("Sorry, there was a problem retrieving the question. Press enter to try again or type 'quit' to quit the game.")
else :
valid_answer = False
answer_number = 1
data = json.loads(r.text)
question = data['results'][0]['question']
answers = data['results'][0]['incorrect_answers']
correct_answer = data['results'][0]['correct_answer']
answers.append(correct_answer)
random.shuffle(answers)
print(html.unescape(question + '\n'))
for answer in answers :
print(str(answer_number) + "- " + html.unescape(answer))
answer_number += 1
while valid_answer == False :
user_answer = input("\nType the number of the correct answer - ")
try :
user_answer = int(user_answer)
if user_answer > len(answers) or user_answer <= 0 :
print("Invalid answer.")
else :
valid_answer = True
except :
print("Invalid answer. Use only numbers")
user_answer = answers[int(user_answer)-1]
if user_answer == correct_answer :
print("\nCongratulations! You answered correctly! The correct answer was: ", html.unescape(correct_answer))
score_correct += 1
else :
print("Sorry, " + html.unescape(user_answer) + "is incorrect. The correct answer is ", html.unescape(correct_answer))
score_incorrect += 1
print("\n=====================================================")
print("Your score is:")
print("Correct answers : " + str(score_correct))
print("Incorrect answers : " +str(score_incorrect))
print("\n=====================================================")
endGame = input("\nPress enter to play again or type 'quit' to quit the game.").lower()
print('\nThanks for playing')
```
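One possible refactor (a sketch, not required by the exercise) is to wrap the API call in a helper so that each round only deals with already-decoded text; the field names below match the JSON structure used above.
```
# Sketch: fetch and decode one question from the Open Trivia API.
import html
import random
import requests

def fetch_question(url='https://opentdb.com/api.php?amount=1'):
    r = requests.get(url)
    if r.status_code != 200:
        return None
    result = r.json()['results'][0]
    question = html.unescape(result['question'])
    correct = html.unescape(result['correct_answer'])
    answers = [html.unescape(a) for a in result['incorrect_answers']] + [correct]
    random.shuffle(answers)
    return question, answers, correct
```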
# Equivalent layer technique for estimating total magnetization direction using an airborne survey
#### Importing libraries
```
% matplotlib inline
import sys
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
import cPickle as pickle
import datetime
import timeit
import string as st
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
from scipy.optimize import nnls
from fatiando.gridder import regular
from fatiando.utils import ang2vec, vec2ang
from fatiando.mesher import Sphere, PointGrid,Prism
from fatiando.gravmag import sphere,prism
from fatiando.constants import CM, T2NT, G, SI2MGAL
notebook_name = 'airborne_EQL_magdirection_RM.ipynb'
```
#### Importing auxiliary functions
```
dir_modules = '../../../mypackage'
sys.path.append(dir_modules)
import auxiliary_functions as fc
```
#### Loading properties of the model
```
with open('data/model_multi.pickle') as f:
model_multi = pickle.load(f)
```
#### Loading properties of the airborne survey grid
```
with open('data/airborne_survey.pickle') as f:
airborne = pickle.load(f)
```
#### Loading data
```
with open('data/data_set.pickle') as f:
data = pickle.load(f)
```
#### Loading results
```
with open('data/result_RM_airb.pickle') as f:
results = pickle.load(f)
```
### Saving files
```
saved_files = []
```
## Observation area
```
print 'Area limits: \n x_max = %.1f m \n x_min = %.1f m \n y_max = %.1f m \n y_min = %.1f m' % (airborne['area'][1],
airborne['area'][0],
airborne['area'][3],
airborne['area'][2])
```
### Airborne survey information
```
print 'Shape : (%.0f,%.0f)'% airborne['shape']
print 'Number of data: %.1f' % airborne['N']
print 'dx: %.1f m' % airborne['dx']
print 'dy: %.1f m ' % airborne['dy']
```
## Properties of the model
### Main field
```
inc_gf,dec_gf = model_multi['main_field']
print'Main field inclination: %.1f degree' % inc_gf
print'Main field declination: %.1f degree' % dec_gf
```
### Magnetization direction
```
print 'Inclination: %.1f degree' % model_multi['inc_R']
print 'Declination: %.1f degree' % model_multi['dec_R']
inc_R,dec_R = model_multi['inc_R'],model_multi['dec_R']
```
### Coordinates of the equivalent sources
```
h = results['layer_depth']
shape_layer = (airborne['shape'][0],airborne['shape'][1])
xs,ys,zs = regular(airborne['area'],shape_layer,h)
```
## Results after L-curve
```
m_LM = results['magnetic_moment'][5]
inc_est = results['inc_est'][5]
dec_est = results['dec_est'][5]
mu = results['reg_parameter'][5]
phi = results['phi'][5]
print mu
```
### Visualization of the convergence
```
phi = (np.array(phi)/airborne['x'].size)
title_font = 22
bottom_font = 20
saturation_factor = 1.
plt.close('all')
plt.figure(figsize=(10,10), tight_layout=True)
plt.style.use('ggplot')
plt.plot(phi,'b-',linewidth=1.5)
plt.title('Convergence', fontsize=title_font)
plt.xlabel('iteration', fontsize = title_font)
plt.ylabel('Goal function ', fontsize = title_font)
plt.tick_params(axis='both', which='major', labelsize=15)
file_name = 'figs/airborne/convergence_LM_NNLS_magRM'
plt.savefig(file_name+'.png',dpi=600)
saved_files.append(file_name+'.png')
plt.savefig(file_name+'.eps',dpi=600)
saved_files.append(file_name+'.eps')
plt.show()
```
### Estimated magnetization direction
```
print (inc_est,dec_est)
print (inc_R,dec_R)
```
### Comparison between observed data and predicted data
```
pred = fc.tfa_layer(airborne['x'],airborne['y'],airborne['z'],
xs,ys,zs,inc_gf,dec_gf,m_LM,inc_est,dec_est)
res = pred - data['tfa_obs_RM_airb']
r_norm,r_mean,r_std = fc.residual(data['tfa_obs_RM_airb'],pred)
title_font = 22
bottom_font = 20
plt.figure(figsize=(28,11), tight_layout=True)
plt.style.use('ggplot')
ranges = np.abs([data['tfa_obs_RM_airb'].max(),
data['tfa_obs_RM_airb'].min(),
pred.max(), pred.min()]).max()
ranges_r = np.abs([res.max(),res.min()]).max()
## Observed data plot
ax1=plt.subplot(1,4,1)
plt.title('Observed data', fontsize=title_font)
plt.xlabel('y (km)',fontsize = title_font)
plt.ylabel('x (km)',fontsize = title_font)
plt.contourf(1e-3*airborne['y'].reshape(airborne['shape']),
1e-3*airborne['x'].reshape(airborne['shape']),
data['tfa_obs_RM_airb'].reshape(airborne['shape']),
30, cmap='viridis',vmin=-ranges, vmax=ranges)
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
cb = plt.colorbar(pad=0.01, aspect=40, shrink=1.0)
cb.set_label('nT',size=bottom_font)
cb.ax.tick_params(labelsize=bottom_font)
## Predicted data plot
ax2=plt.subplot(1,4,2)
plt.title('Predicted data', fontsize=title_font)
plt.xlabel('y (km)',fontsize = title_font)
plt.ylabel('x (km)',fontsize = title_font)
plt.contourf(1e-3*airborne['y'].reshape(airborne['shape']),
1e-3*airborne['x'].reshape(airborne['shape']),
pred.reshape(airborne['shape']),
30, cmap='viridis', vmin=-ranges, vmax=ranges)
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
cb = plt.colorbar(pad=0.01, aspect=40, shrink=1.0)
cb.set_label('nT',size=bottom_font)
cb.ax.tick_params(labelsize=bottom_font)
## Residuals plot and histogram
ax3=plt.subplot(1,4,3)
plt.title('Residuals and histogram', fontsize=title_font)
plt.xlabel('y (km)',fontsize = title_font)
plt.ylabel('x (km)',fontsize = title_font)
plt.contourf(1e-3*airborne['y'].reshape(airborne['shape']),
1e-3*airborne['x'].reshape(airborne['shape']),
res.reshape(airborne['shape']),
30, cmap='viridis', vmin=-ranges_r, vmax=ranges_r)
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
cb = plt.colorbar(pad=0.01, aspect=40, shrink=1.0)
cb.set_label('nT',size=bottom_font)
cb.ax.tick_params(labelsize=bottom_font)
ax4=plt.subplot(1,4,4)
plt.title('Histogram of residuals', fontsize =title_font)
plt.xlabel('Residuals (nT)', fontsize = title_font)
plt.ylabel('Frequency', fontsize = title_font)
plt.text(0.02, 0.97, "mean = {:.2f}\nstd = {:.2f} ".format(np.mean(res), np.std(res)),
horizontalalignment='left',
verticalalignment='top',
transform = ax4.transAxes, fontsize=bottom_font)
n, bins, patches = plt.hist(res,bins=30, normed=True, facecolor='black')
gauss = mlab.normpdf(bins, 0., 10.)
plt.plot(bins, gauss, 'r-', linewidth=4.)
ax4.set_xticks([-100.0,-50.,0.0,50.,100.0])
ax4.set_yticks([.0,.010,.020,.030,.040,.05,.06])
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
##
file_name = 'figs/airborne/data_fitting_LM_NNLS_magRM'
plt.savefig(file_name+'.png',dpi=600)
saved_files.append(file_name+'.png')
plt.savefig(file_name+'.eps',dpi=600)
saved_files.append(file_name+'.eps')
plt.show()
title_font = 22
bottom_font = 20
saturation_factor = 1.
plt.close('all')
plt.figure(figsize=(10,10), tight_layout=True)
plt.title('Magnetic moment distribution', fontsize=title_font)
plt.contourf(1e-3*ys.reshape(shape_layer),1e-3*xs.reshape(shape_layer),
m_LM.reshape(shape_layer), 40, cmap='inferno')
cb = plt.colorbar(pad=0.01, aspect=40, shrink=1.0)
cb.set_label('$A.m^2$',size=bottom_font)
cb.ax.tick_params(labelsize=bottom_font)
plt.xlabel('y (km)', fontsize = title_font)
plt.ylabel('x (km)', fontsize = title_font)
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
file_name = 'figs/airborne/magnetic_moment_positive_LM_NNLS_magRM'
plt.savefig(file_name+'.png',dpi=600)
saved_files.append(file_name+'.png')
plt.savefig(file_name+'.eps',dpi=600)
saved_files.append(file_name+'.eps')
plt.show()
```
## Figure for paper
```
#title_font = 17
title_font = 5
#bottom_font = 14
bottom_font = 4
hist_font = 5
height_per_width = 17./15.
plt.figure(figsize=(4.33,4.33*height_per_width), tight_layout=True)
plt.style.use('ggplot')
ranges = np.abs([data['tfa_obs_RM_airb'].max(),
data['tfa_obs_RM_airb'].min(),
pred.max(), pred.min()]).max()
ranges_r = np.abs([res.max(),res.min()]).max()
## Observed data plot
ax1=plt.subplot(3,2,1)
plt.title('(a) Observed data', fontsize=title_font)
plt.xlabel('y (km)',fontsize = title_font)
plt.ylabel('x (km)',fontsize = title_font)
plt.contourf(1e-3*airborne['y'].reshape(airborne['shape']),
1e-3*airborne['x'].reshape(airborne['shape']),
data['tfa_obs_RM_airb'].reshape(airborne['shape']),
30, cmap='viridis',vmin=-ranges, vmax=ranges)
cbar = plt.colorbar(pad=0.01, aspect=20, shrink=1.0)
cbar.set_label('nT',size=title_font)
cbar.ax.tick_params(labelsize=bottom_font)
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
plt.title('(a) Observed data', fontsize=title_font)
plt.xlabel('y (km)',fontsize = title_font)
plt.ylabel('x (km)',fontsize = title_font)
## Predicted data plot
ax2=plt.subplot(3,2,2)
plt.contourf(1e-3*airborne['y'].reshape(airborne['shape']),
1e-3*airborne['x'].reshape(airborne['shape']),
pred.reshape(airborne['shape']),
30, cmap='viridis', vmin=-ranges, vmax=ranges)
cbar = plt.colorbar(pad=0.01, aspect=20, shrink=1.0)
cbar.set_label('nT',size=title_font)
cbar.ax.tick_params(labelsize=bottom_font)
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
plt.title('(b) Predicted data', fontsize=title_font)
plt.xlabel('y (km)',fontsize = title_font)
plt.ylabel('x (km)',fontsize = title_font)
## Residuals plot and histogram
ax3=plt.subplot(3,2,3)
plt.contourf(1e-3*airborne['y'].reshape(airborne['shape']),
1e-3*airborne['x'].reshape(airborne['shape']),
res.reshape(airborne['shape']),
30, cmap='viridis', vmin=-ranges_r, vmax=ranges_r)
cbar = plt.colorbar(pad=0.01, aspect=20, shrink=1.0)
cbar.set_label('nT',size=title_font)
cbar.ax.tick_params(labelsize=bottom_font)
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
plt.title('(c) Residuals', fontsize=title_font)
plt.xlabel('y (km)',fontsize = title_font)
plt.ylabel('x (km)',fontsize = title_font)
ax4= plt.subplot(3,2,4)
plt.text(0.02, 0.97, "mean = {:.2f}\nstd = {:.2f} ".format(np.mean(res), np.std(res)),
horizontalalignment='left',
verticalalignment='top',
transform = ax4.transAxes, fontsize=hist_font)
n, bins, patches = plt.hist(res,bins=20, normed=True, facecolor='black')
gauss = mlab.normpdf(bins, 0., 10.)
plt.plot(bins, gauss, 'r-', linewidth=1.)
ax4.set_xticks([-100.0,-50.,0.0,50.,100.0])
ax4.set_yticks([.0,.010,.020,.030,.040,.05,.06])
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
plt.title('(d) Histogram of residuals', fontsize =title_font)
plt.xlabel('Residuals (nT)', fontsize = title_font)
plt.ylabel('Frequency', fontsize = title_font)
ax5= plt.subplot(3,2,5)
plt.contourf(1e-3*ys.reshape(shape_layer),1e-3*xs.reshape(shape_layer),
m_LM.reshape(shape_layer)*1e-9, 30, cmap='inferno')
cbar = plt.colorbar(pad=0.01, aspect=20, shrink=1.0)
cbar.set_label('$10^{9}$ A$\cdot$m$^2$',size=title_font)
cbar.ax.tick_params(labelsize=bottom_font)
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
plt.title('(e) Magnetic moment distribution', fontsize=title_font)
plt.xlabel('y (km)', fontsize = title_font)
plt.ylabel('x (km)', fontsize = title_font)
ax6= plt.subplot(3,2,6)
plt.plot(phi, 'b-',linewidth=1.0)
plt.tick_params(axis='both', which='major', labelsize=bottom_font)
plt.title('(f) Convergence', fontsize=title_font)
plt.xlabel('iteration', fontsize = title_font)
plt.ylabel('Goal function ', fontsize = title_font)
###########################################################################
#file_name = 'figs/airborne/results_compiled_LM_NNLS_magRM'
file_name = 'figs/airborne/Fig4'
#plt.savefig(file_name+'.png',dpi=400)
plt.savefig(file_name+'.png',dpi=1200)
saved_files.append(file_name+'.png')
#plt.savefig(file_name+'.eps',dpi=400)
plt.savefig(file_name+'.eps',dpi=1200)
saved_files.append(file_name+'.eps')
plt.show()
```
## Imports
```
from statistics import mean
import numpy as np
import pandas as pd
import math
import os
from collections import Counter
from functools import reduce
import glob
import copy
```
## Opening the CSV files
```
dataframes = [pd.read_csv(file, sep=',', index_col=0) for file in sorted(glob.glob('../preprocessed_datasets' + "/*."+'csv'))]
cohorts = [os.path.splitext(file)[0] for file in sorted(os.listdir('../preprocessed_datasets'))]  # splitext drops the ".csv" extension; str.strip would remove characters, not a suffix
# reduce to BL visit only
all_cohorts = dict()
for name, df in zip(cohorts, dataframes):
all_cohorts[name] = df.loc[(df["Visit"] == 1) & (df["Diagnosis"].astype(str) == 'CU')]
```
## Functions to perform essential calculations
```
def cat_stat_df(dfs, result):
"""Counting different categories, calculate the % of categorical features, store results in a df"""
categorical = {'APOE4': [2.0, 1.0], 'Sex': ['Female'], 'Diagnosis': ['CU', 'MCI', 'AD']}
column_cat = ['Sex', 'Diagnosis', 'APOE4']
for cohort in dfs:
        if dfs[cohort].empty:
continue
else:
calc_dict = dict()
df = dfs[cohort]
for col in column_cat:
ca = Counter(df[col].dropna())
calc_dict[col] = ca
cohort_df = pd.DataFrame(calc_dict).transpose()
cohort_df = cohort_df.dropna(how='all')
cohort_df.loc[cohort] = cohort_df.sum()
for i in categorical:
if i == 'Diagnosis':
if i in cohort_df.index:
result.loc[cohort, categorical[i]] = cohort_df.loc[cohort, cohort_df.loc[i].notna()].astype(int)
result.loc[cohort, categorical[i]] = result.loc[cohort, categorical[i]].replace({np.nan: 0})
result.loc[cohort, 'n'] = int(sum(cohort_df.loc[cohort, cohort_df.loc[i].notna()]))
result.loc[cohort, 'Total'] = int(len(dfs[cohort].index))
else:
result.loc[cohort, i] = np.nan
result.loc[cohort, 'n'] = int(len(dfs[cohort].index))
elif i == 'APOE4':
if 'APOE4' in list(cohort_df.index.astype(str)):
if '2.0' not in list(cohort_df.columns.astype(str)) and '2' not in list(cohort_df.columns.astype(str)):
cohort_df[2.0] = np.nan
result.loc[cohort, i] = round(100 * sum([val for val in cohort_df.loc[i, categorical[i]]]) /
sum([val for val in cohort_df.loc[i].dropna()]), 1)
else:
result.loc[cohort, i] = np.nan
elif i == 'Sex':
if (i in cohort_df.index) & ("Female" in cohort_df.columns):
result.loc[cohort, i] = round(100 * sum([val for val in cohort_df.loc[i, categorical[i]]])
/ sum([val for val in cohort_df.loc[i].dropna()]), 1)
else:
result.loc[cohort, i] = 0
result.rename(columns={"Sex": "Female %", "APOE4": "APOE4 %"}, inplace=True)
return result
def num_stat_df(dfs, result_df):
"""Calculating std and mean and storing it in the result dataframe"""
column_names = ['Age', 'CDR', 'Education', 'MMSE', 'CDRSB', 'Hippocampus', 'A-beta', 'Ttau', 'Ptau']
for df in dfs:
dataset = dfs[df]
calc_dict = dict()
for col in column_names:
if (col in dataset.columns) and (dataset[col].notna().any()):
df_std = round(np.nanstd(dataset[col]), 1)
df_mean = round(np.nanmean(dataset[col]), 1)
dict_value = str(df_mean) + ' (' + str(df_std) + ')'
calc_dict[col] = dict_value
else:
calc_dict[col] = np.nan
for key in calc_dict:
result_df.loc[df, key] = calc_dict[key]
return result_df
```
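For clarity, both functions expect a dict mapping cohort names to DataFrames that contain (at least) the columns referenced above; the toy input below is a sketch with made-up values.
```
# Sketch with made-up values: the input shape expected by cat_stat_df / num_stat_df.
toy_cohorts = {
    "TOY": pd.DataFrame({
        "Sex": ["Female", "Male", "Female"],
        "Diagnosis": ["CU", "CU", "CU"],
        "APOE4": [1.0, 0.0, 2.0],
        "Age": [70.2, 68.5, 72.9],
        "MMSE": [29, 30, 28],
    })
}
# toy_cohorts would then be passed to cat_stat_df / num_stat_df together with a
# result dataframe indexed by cohort name, exactly as done with the real cohorts below.
```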
## Make an empty dataframe to fill in with the results
```
results = pd.DataFrame(index = all_cohorts.keys(), columns = [col for col in all_cohorts['AIBL'].columns])
results.index.name = 'Name of Dataset'
for i in ['CU', 'MCI', 'AD', 'Total']:
results[i] = np.nan
cat_stat_df(all_cohorts, results)
num_stat_df(all_cohorts, results)
results.drop(columns=['Diagnosis', 'Visit', 'Race', 'Months'], inplace=True)
results
```
## Final table
```
results[['n', 'Total', 'CU', 'MCI', 'AD', 'Female %', 'Age', 'Education', 'MMSE', 'CDR', 'CDRSB', 'APOE4 %', 'Hippocampus']]
```
This notebook is a supplement to the paper ["The Orthologic of Epistemic Modals"](https://escholarship.org/uc/item/0ss5z8g3) by [Wesley H. Holliday](mailto:wesholliday@berkeley.edu) and [Matthew Mandelkern](mandelkern@nyu.edu).
To view the notebook online, type the URL of this notebook (https://github.com/wesholliday/ortho-modals/blob/main/ortho-modals.ipynb) into the location field at https://nbviewer.org. GitHub's preview of the notebook does not show all the output that was generated.
The notebook uses the [Natural Language Toolkit](https://www.nltk.org)'s [interface](https://www.nltk.org/howto/inference.html) to [Prover9/Mace4](https://www.cs.unm.edu/~mccune/prover9/) to investigate the derivability of conclusions from the logical principles in the paper, understood as algebraic equations. For example, we treat the logical principle $\Box\varphi \vdash \varphi$, corresponding to the lattice inequality $\Box a\leq a$, as the equation $\Box a = \Box a\wedge a$.
## Outline
**1. [Lattice axioms](#1)**
**2. [Bounded lattice axioms](#2)**
**3. [Ortholattice axioms](#3)**
**4. [Boolean subalgebra axioms](#4)**
**5. [Modal axioms](#5)**
**6. [Epistemic axioms](#6)**
**7. [Conditional axioms](#7)**
**8. [Independence of axioms](#8)**
**9. [The conditional epistemic ortholattice from Figure 12](#9)**
**10. [Example proofs](#10)**
**11. [Avoiding collapse](#11)**
**12. [Modalized Import-Export](#12)**
**13. [Qualified Collapse](#13)**
**14. [Provable principles for which Prover9 does not find a proof](#14)**
**15. [A more economical axiomatization without $\vee$, $\bot$, or $\Diamond$](#15)**
```
from nltk.test.inference_fixt import setup_module
setup_module()
from nltk import *
from nltk.sem.drt import DrtParser
from nltk.sem import logic
logic._counter._value = 0
from nltk.sem import Expression
read_expr = Expression.fromstring
```
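As a concrete instance of the encoding described in the introduction, an entailment $\varphi\vdash\psi$ is written as the equation $\varphi = \varphi\wedge\psi$, and a validity $\vdash\varphi$ as $\varphi = \top$; the first expression below coincides with the factivity axiom used later.
```
#Example of the encoding: "Box p entails p" as an equation, and excluded middle as a validity.
example_entailment = read_expr('Box(x) = And(Box(x),x)')
example_validity = read_expr('Or(x,Not(x)) = Top')
```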
## 1. Lattice axioms<a id='1'></a>
```
or_id = read_expr('Or(x,x) = x')
and_id = read_expr('And(x,x) = x')
or_comm = read_expr('Or(x,y) = Or(y,x)')
and_comm = read_expr('And(x,y) = And(y,x)')
or_assoc = read_expr('Or(x,Or(y,z)) = Or(Or(x,y),z)')
and_assoc = read_expr('And(x,And(y,z)) = And(And(x,y),z)')
or_absorp = read_expr('Or(x,And(x,y)) = x')
and_absorp = read_expr('And(x,Or(x,y)) = x')
lattice = [or_id, and_id, or_comm, and_comm, or_assoc, and_assoc, or_absorp, and_absorp]
#By including a definition of the covering relation,
#one can quickly read off Hasse diagrams of lattices from the Mace4 output.
cover_def = read_expr('covered_by(x,y) <-> (-(x=y) & x = And(x,y) & -exists z.(-(z=x) & -(z=y) & x = And(x,z) & z = And(z,y)))')
```
## 2. Bounded lattice axioms<a id='2'></a>
```
bot = read_expr('Or(x,Bot) = x')
top = read_expr('And(x,Top) = x')
bounded_lattice = lattice + [bot,top]
```
## 3. Ortholattice axioms<a id='3'></a>
```
lem = read_expr('Or(x,Not(x)) = Top')
contra = read_expr('And(x,Not(x)) = Bot')
invol = read_expr('Not(Not(x)) = x')
de_morgan = read_expr('Not(And(x,y)) = Or(Not(x),Not(y))')
ortho_lattice = bounded_lattice + [lem,contra,invol,de_morgan]
#We do not want distributivity, but we consider it below
dist = read_expr('And(x,Or(y,z)) = Or(And(x,y),And(x,z))')
#Check that the De Morgan law dual to de_morgan above follows
goal = read_expr('Not(Or(x,y)) = And(Not(x),Not(y))')
prover = Prover9Command(goal, assumptions = ortho_lattice)
prover.prove()
print(prover.proof())
#Check that the distributive law does not follow from ortholattice axioms
goal = dist
mb = MaceCommand(goal, assumptions = ortho_lattice + [cover_def])
mb.build_model()
print(mb.model(format='cooked'))
#Check that adding the pseudocomplementation principle allows the derivability of distributivity
pseudo = read_expr('And(x,y) = Bot -> y = And(y,Not(x))')
goal = dist
prover = Prover9Command(goal, assumptions = ortho_lattice + [pseudo])
prover.prove()
print(prover.proof())
#Conversely, adding distributivity allows the derivation of the pseudocomplementation principle
goal = pseudo
prover = Prover9Command(goal, assumptions = ortho_lattice + [dist])
prover.prove()
print(prover.proof())
```
## 4. Boolean subalgebra axioms<a id='4'></a>
```
B_top = read_expr('B(Top)')
B_not = read_expr('B(x) -> B(Not(x))')
B_and = read_expr('(B(x) & B(y)) -> B(And(x,y))')
B_or = read_expr('(B(x) & B(y)) -> B(Or(x,y))')
B_dist = read_expr('((B(x) & B(y)) & B(z)) -> And(x,Or(y,z)) = Or(And(x,y),And(x,z))')
ortho_boolean_lattice = ortho_lattice + [B_top, B_not, B_and, B_or, B_dist]
#Check that the other distributive law dual to B_dist above follows
goal = read_expr('((B(x) & B(y)) & B(z)) -> Or(x,And(y,z)) = And(Or(x,y),Or(x,z))')
prover = Prover9Command(goal, assumptions = ortho_boolean_lattice)
prover.prove()
print(prover.proof())
```
## 5. Modal axioms<a id='5'></a>
```
box_and = read_expr('Box(And(x,y)) = And(Box(x),Box(y))')
box_top = read_expr('Box(Top) = Top')
diamond_dual = read_expr("Diamond(x) = Not(Box(Not(x)))")
modal_ortho_boolean_lattice = ortho_boolean_lattice + [box_and, box_top, diamond_dual]
#Check that Diamond distributes over disjunction
goal = read_expr('Diamond(Or(x,y)) = Or(Diamond(x),Diamond(y))')
prover = Prover9Command(goal, assumptions = modal_ortho_boolean_lattice)
prover.prove()
print(prover.proof())
```
## 6. Epistemic axioms<a id='6'></a>
```
factive = read_expr('Box(x) = And(Box(x),x)')
episteme = read_expr('And(Not(x),Diamond(x)) = Bot')
epistemic_ortho_boolean_lattice = modal_ortho_boolean_lattice + [factive, episteme]
#Check that another form of Wittgenstein sentence is contradictory
goal = read_expr('And(x,Diamond(Not(x))) = Bot')
prover = Prover9Command(goal, assumptions = epistemic_ortho_boolean_lattice)
prover.prove()
print(prover.proof())
#Check that "p or it must be that not p" is derivable
goal = read_expr('Or(x,Box(Not(x))) = Top')
prover = Prover9Command(goal, assumptions = epistemic_ortho_boolean_lattice)
prover.prove()
print(prover.proof())
#Not only "p but might not p" but also "p but might might not p" is inconsisent
goal = read_expr('And(x,Diamond(Diamond(Not(x))))=Bot')
prover = Prover9Command(goal, assumptions = epistemic_ortho_boolean_lattice)
prover.prove()
print(prover.proof())
#Another noteworthy example: "(must p or must q) & might not p & might not q" is inconsistent.
goal = read_expr('And(Or(Box(x),Box(y)),And(Diamond(Not(x)), Diamond(Not(y)))) = Bot')
prover = Prover9Command(goal, assumptions = epistemic_ortho_boolean_lattice)
prover.prove()
print(prover.proof())
#Check that Diamond p does not entail p
goal = read_expr('Diamond(x) = And(Diamond(x),x)')
mb = MaceCommand(goal, assumptions = epistemic_ortho_boolean_lattice + [cover_def])
mb.build_model()
print(mb.model(format='cooked'))
#Check that Diamond Diamond p does not entail Diamond p
goal = read_expr('Diamond(Diamond(x)) = Diamond(x)')
mb = MaceCommand(goal, assumptions = epistemic_ortho_boolean_lattice + [cover_def])
mb.build_model()
print(mb.model(format='cooked'))
#Although "p and might not p" is inconsistent, "might p and must might not p" is consistent
goal = read_expr('And(Diamond(x),Box(Diamond(Not(x)))) = Bot')
mb = MaceCommand(goal, assumptions = epistemic_ortho_boolean_lattice + [cover_def])
mb.build_model()
print(mb.model(format='cooked'))
```
## 7. Conditional axioms<a id='7'></a>
```
if_and = read_expr('If(x,And(y,z)) = And(If(x,y),If(x,z))')
if_top = read_expr('If(x,Top) = Top')
ident = read_expr('If(x,x) = Top')
simp_mp = read_expr('B(y) -> And(If(x,y),x) = And(And(If(x,y),x),y)')
simp_cs = read_expr('B(y) -> And(x,y) = And(And(x,y),If(x,y))')
simp_mt = read_expr('B(y) -> And(If(x,y),Not(y)) = And(And(If(x,y),Not(y)),Not(x))')
mod_mp = read_expr('And(Box(x),If(x,y))=And(And(Box(x),If(x,y)),y)')
mod_cs = read_expr('And(Box(x),y)=And(And(Box(x),y),If(x,y))')
mod_mt = read_expr('And(If(x,y),Not(y)) = And(And(If(x,y),Not(y)), Not(Box(x)))')
must_intro = read_expr('x = And(x,y) -> If(x,Box(y)) = Top')
simp_must_import = read_expr('B(y) -> Box(If(x,y)) = And(Box(If(x,y)),If(x,Box(y)))')
safe_must_export = read_expr('B(x) -> If(x,Box(y)) = And(If(x,Box(y)),Box(If(x,y)))')
must_preserve = read_expr('And(Diamond(And(x,y)),Box(y)) = And(And(Diamond(And(x,y)),Box(y)),If(x,Box(y)))')
flat = read_expr('If(x,If(And(x,y),z)) = If(And(x,y),z)')
weak_boethius = read_expr('And(Diamond(x),If(x,y)) = And(And(Diamond(x),If(x,y)),Not(If(x,Not(y))))')
must_if_combo = read_expr('If(x,y) = And(If(x,y),Or(Not(x),And(Box(x),If(x,y))))')
safe_ni = read_expr('B(x) -> Not(If(x,y)) = And(Not(If(x,y)),If(x,Not(y)))')
safe_cem_plus = read_expr('B(x) -> If(x,Or(y,z)) = And(If(x,Or(y,z)),Or(If(x,y),If(x,z)))')
cond_ax = [if_and, if_top, ident,
simp_mp, simp_cs, simp_mt, mod_mp, mod_cs, mod_mt,
must_intro, simp_must_import, safe_must_export, must_preserve,
flat, weak_boethius, must_if_combo,
safe_ni, safe_cem_plus]
cond_modal_ortho_boolean_lattice = modal_ortho_boolean_lattice + cond_ax
cond_epistemic_ortho_boolean_lattice = epistemic_ortho_boolean_lattice + cond_ax
#Here we collect the axioms that don't involve modalities:
cond_ortho_boolean_lattice = ortho_boolean_lattice + [if_and, if_top, ident, simp_mp, simp_cs, simp_mt, flat, safe_ni, safe_cem_plus]
#Check that "If Diamond p, then p" is not valid
goal = read_expr('If(Diamond(x),x) = Top')
mb = MaceCommand(goal, assumptions = cond_epistemic_ortho_boolean_lattice + [cover_def])
mb.build_model()
print(mb.model(format='cooked'))
```
## 8. Independence of axioms<a id='8'></a>
Not all of the axioms in cond_ax are independent relative to cond_modal_ortho_boolean_lattice.
```
#if_top is derivable from other axioms in cond_modal_ortho_boolean_lattice
goal = if_top
prover = Prover9Command(goal, assumptions = [ax for ax in cond_modal_ortho_boolean_lattice if not (ax == if_top)])
prover.prove()
print(prover.proof())
#simp_mp is derivable from other axioms in cond_modal_ortho_boolean_lattice
goal = simp_mp
prover = Prover9Command(goal, assumptions = [ax for ax in cond_modal_ortho_boolean_lattice if not (ax == simp_mp)], timeout=100000)
prover.prove()
print(prover.proof())
#ident is derivable from other axioms in cond_epistemic_ortho_boolean_lattice
#but not cond_modal_ortho_boolean_lattice
goal = ident
prover = Prover9Command(goal, assumptions = [ax for ax in cond_epistemic_ortho_boolean_lattice if not (ax == ident)])
prover.prove()
print(prover.proof())
mb = MaceCommand(goal, assumptions = [ax for ax in cond_modal_ortho_boolean_lattice if not (ax == ident)])
mb.build_model()
print(mb.model(format='cooked'))
```
Mace4 and Prover9 are unable to settle the independence or non-independence of a number of axioms, but Mace4 can show the independence of some.
```
for ax in cond_ax:
if not (ax == if_top or ax == simp_mp or ax == simp_mt or
ax == weak_boethius or ax == must_if_combo or
ax == safe_ni or ax == safe_cem_plus):
print(f"Is {ax} independent of the other axioms?")
mace = Mace()
print(mace.build_model(ax, assumptions = [axiom for axiom in cond_modal_ortho_boolean_lattice if not axiom == ax]))
print("\n")
```
Next we verify that the non-simple versions of the simple conditional axioms are not derivable.
```
mp = read_expr('And(If(x,y),x) = And(And(If(x,y),x),y)')
cs = read_expr('And(x,y) = And(And(x,y),If(x,y))')
mt = read_expr('And(If(x,y),Not(y)) = And(And(If(x,y),Not(y)),Not(x))')
caut_trans = read_expr('And(If(x,y),If(And(x,y),z)) = And(And(If(x,y),If(And(x,y),z)),If(x,z))')
caut_mon = read_expr('And(If(x,y),If(x,z)) = And(And(If(x,y),If(x,z)),If(And(x,y),z))')
must_import = read_expr('Box(If(x,y)) = And(Box(If(x,y)),If(x,Box(y)))')
must_export = read_expr('If(x,Box(y)) = And(If(x,Box(y)),Box(If(x,y)))')
ni = read_expr('Not(If(x,y)) = And(Not(If(x,y)),If(x,Not(y)))')
cem_plus = read_expr('If(x,Or(y,z)) = And(If(x,Or(y,z)),Or(If(x,y),If(x,z)))')
non_simp_ax = [mp, cs, mt, caut_trans, caut_mon, must_import, must_export, ni, cem_plus]
for ax in non_simp_ax:
print(f"Is {ax} a non-theorem?")
mace = Mace()
print(mace.build_model(ax, assumptions = cond_epistemic_ortho_boolean_lattice))
print("\n")
```
## 9. The conditional epistemic ortholattice from Figure 12<a id='9'></a>
Below we code up the conditional epistemic ortholattice implied by Figure 12. We name the elements of the lattice from Figure 11 as $a$, $b$, $c$, $d$, $e$, $f$, $g$, $h$, $i$, and $j$, starting from the top of the Hasse diagram and working down, moving from left to right at each level of the Hasse diagram. For example, the node labelled $\Box p\vee\Box\neg p$ in Figure 11 is node $c$, the node labelled $\neg p$ is node $f$, etc.
We then check that the specification of the lattice is consistent with cond_epistemic_ortho_boolean_lattice, i.e., the specification together with our axioms does not imply the contradiction $\neg(x=x)$: Mace4 searches for a model of the assumptions in which the goal fails, so any model it returns satisfies both our axioms and the lattice specification, witnessing consistency. Since the specification completely describes the lattice, this shows that the conditional epistemic ortholattice implied by Figure 12 obeys all the axioms.
```
a_distinct = read_expr("-(a=b) & -(a=c) & -(a=d) & -(a=e) & -(a=f) & -(a=g) & -(a=h) & -(a=i) & -(a=j)")
b_distinct = read_expr("-(b=c) & -(b=d) & -(b=e) & -(b=f) & -(b=g) & -(b=h) & -(b=i) & -(b=j)")
c_distinct = read_expr("-(c=d) & -(c=e) & -(c=f) & -(c=g) & -(c=h) & -(c=i) & -(c=j)")
d_distinct = read_expr("-(d=e) & -(d=f) & -(d=g) & -(d=h) & -(d=i) & -(d=j)")
e_distinct = read_expr("-(e=f) & -(e=g) & -(e=h) & -(e=i) & -(e=j)")
f_distinct = read_expr("-(f=g) & -(f=h) & -(f=i) & -(f=j)")
g_distinct = read_expr("-(g=h) & -(g=i) & -(g=j)")
h_distinct = read_expr("-(h=i) & -(h=j)")
i_distinct = read_expr("-(i=j)")
distinct = [a_distinct, b_distinct, c_distinct, d_distinct, e_distinct, f_distinct, g_distinct, h_distinct, i_distinct]
elements = read_expr("all x.(x=a | x=b | x=c | x=d | x=e | x=f | x=g | x=h | x=i | x=j)")
a_top = read_expr("Top = a")
j_bot = read_expr("Bot = j")
bi_negs = read_expr("b = Not(i)")
ch_negs = read_expr("c = Not(h)")
dg_negs = read_expr("d = Not(g)")
ef_negs = read_expr("e = Not(f)")
e_under_b = read_expr("And(e,b)=e")
g_under_e = read_expr("And(g,e)=g")
c_join_of_gi = read_expr("c=Or(g,i)")
h_meet_of_bd = read_expr("h=And(b,d)")
g_meet_of_bc = read_expr("g=And(b,c)")
i_meet_of_cd = read_expr("i=And(c,d)")
box_op = read_expr("Box(a)=a & Box(b)=b & Box(c)=c & Box(d)=d & Box(e)=g & Box(f)=i & Box(g)=g & Box(h)=h & Box(i)=i & Box(j)=j")
boolean_sub = read_expr("B(a) & B(e) & B(f) & B(j)")
a_to = read_expr("If(a,a)=a & If(a,b)=b & If(a,c)=c & If(a,d)=d & If(a,e)=e & If(a,f)=f & If(a,g)=g & If(a,h)=h & If(a,i)=i & If(a,j)=j")
b_to = read_expr("If(b,a)=a & If(b,b)=a & If(b,c)=g & If(b,d)=h & If(b,e)=e & If(b,f)=j & If(b,g)=g & If(b,h)=h & If(b,i)=j & If(b,j)=j")
c_to = read_expr("If(c,a)=a & If(c,b)=e & If(c,c)=a & If(c,d)=f & If(c,e)=e & If(c,f)=f & If(c,g)=e & If(c,h)=j & If(c,i)=f & If(c,j)=j")
d_to = read_expr("If(d,a)=a & If(d,b)=h & If(d,c)=i & If(d,d)=a & If(d,e)=j & If(d,f)=f & If(d,g)=j & If(d,h)=h & If(d,i)=i & If(d,j)=j")
e_to = read_expr("If(e,a)=a & If(e,b)=a & If(e,c)=a & If(e,d)=j & If(e,e)=a & If(e,f)=j & If(e,g)=a & If(e,h)=j & If(e,i)=j & If(e,j)=j")
f_to = read_expr("If(f,a)=a & If(f,b)=j & If(f,c)=a & If(f,d)=a & If(f,e)=j & If(f,f)=a & If(f,g)=j & If(f,h)=j & If(f,i)=a & If(f,j)=j")
g_to = read_expr("If(g,a)=a & If(g,b)=a & If(g,c)=a & If(g,d)=j & If(g,e)=a & If(g,f)=j & If(g,g)=a & If(g,h)=j & If(g,i)=j & If(g,j)=j")
h_to = read_expr("If(h,a)=a & If(h,b)=a & If(h,c)=j & If(h,d)=a & If(h,e)=j & If(h,f)=j & If(h,g)=j & If(h,h)=a & If(h,i)=j & If(h,j)=j")
i_to = read_expr("If(i,a)=a & If(i,b)=j & If(i,c)=a & If(i,d)=a & If(i,e)=j & If(i,f)=a & If(i,g)=j & If(i,h)=j & If(i,i)=a & If(i,j)=j")
j_to = read_expr("If(j,a)=a & If(j,b)=a & If(j,c)=a & If(j,d)=a & If(j,e)=a & If(j,f)=a & If(j,g)=a & If(j,h)=a & If(j,i)=a & If(j,j)=a")
lattice_spec = distinct + [elements, a_top, j_bot, bi_negs, ch_negs, dg_negs, ef_negs,
e_under_b, g_under_e, c_join_of_gi, g_meet_of_bc,
h_meet_of_bd, i_meet_of_cd, box_op, boolean_sub,
a_to, b_to, c_to, d_to, e_to, f_to, g_to, h_to, i_to, j_to]
goal = read_expr("-(x=x)")
mb = MaceCommand(goal, assumptions = cond_epistemic_ortho_boolean_lattice + lattice_spec + [cover_def])
mb.build_model()
print(mb.model(format='cooked'))
```
## 10. Example Proofs<a id='10'></a>
We show that the following principles are derivable:
$\varphi\to\psi\vdash\neg\varphi\vee\psi$ (If-to-Or);
$(\varphi\to\psi)\wedge \varphi\wedge\neg\psi\vdash\bot$ (If Contradiction);
if $\varphi\vdash\psi$, then $\psi\to\bot\vdash \varphi\to\bot$ (Falsum Reversal);
$\varphi\to\bot \vdash \neg\varphi$ (Ad Falsum);
$\varphi\vdash \top\to \varphi$ and $\top\to \varphi \vdash\varphi$ (Trivial Conditioning).
```
#Derivable principles
if_to_or = read_expr('If(x,y) = And(If(x,y),Or(Not(x),y))')
if_contra = read_expr('And(If(x,y),And(x,Not(y))) = Bot')
falsum_rev = read_expr('x = And(x,y) -> If(y,Bot) = And(If(y,Bot),If(x,Bot))')
ad_falsum = read_expr('If(x,Bot) = And(If(x,Bot),Not(x))')
cond_triv = read_expr('x = And(x,If(Top,x))')
goal = if_to_or
prover = Prover9Command(goal, assumptions = modal_ortho_boolean_lattice + [mod_mp, must_if_combo])
prover.prove()
print(prover.proof())
goal = if_contra
prover = Prover9Command(goal, assumptions = modal_ortho_boolean_lattice + [mod_mp, must_if_combo])
prover.prove()
print(prover.proof())
goal = falsum_rev
prover = Prover9Command(goal, assumptions = cond_modal_ortho_boolean_lattice)
prover.prove()
print(prover.proof())
goal = ad_falsum
prover = Prover9Command(goal, assumptions = cond_modal_ortho_boolean_lattice)
prover.prove()
print(prover.proof())
goal = cond_triv
prover = Prover9Command(goal, assumptions = cond_modal_ortho_boolean_lattice)
prover.prove()
print(prover.proof())
```
## 11. Avoiding collapse<a id='11'></a>
Next we show that $\alpha\to\beta \not\equiv \neg\alpha\vee\beta $ (No Simple Collapse).
```
goal = read_expr('B(x) & B(y) -> If(x,y) = Or(Not(x),y)')
mb = MaceCommand(goal, assumptions = cond_epistemic_ortho_boolean_lattice + [cover_def])
mb.build_model()
print(mb.model(format='cooked'))
```
## 12. Modalized Import-Export<a id='12'></a>
Consider $(\varphi\to \Diamond(\varphi\wedge\psi))\wedge (\varphi \to (\psi\to\chi)) \equiv (\varphi\to \Diamond(\varphi\wedge\psi)) \wedge ((\varphi\wedge\psi)\to\chi)$ (Modalized Import-Export), which is proved in the paper. Unfortunately, Prover9 does not find the proof before timing out.
```
mie = read_expr('And(If(x,Diamond(And(x,y))),If(x,If(y,z))) = And(If(x,Diamond(And(x,y))),If(And(x,y),z))')
#A simple version of mie with x, y, and z non-epistemic
simp_mie = read_expr('((B(x) & B(y)) & B(z)) -> And(If(x,Diamond(And(x,y))),If(x,If(y,z))) = And(If(x,Diamond(And(x,y))),If(And(x,y),z))')
#goal = mie
#prover = Prover9Command(goal, assumptions = cond_epistemic_ortho_boolean_lattice, timeout=10000)
#prover.prove()
#print(prover.proof())
#goal = mie
#mb = MaceCommand(goal, assumptions = cond_epistemic_ortho_boolean_lattice)
#mb.build_model()
#print(mb.model(format='cooked'))
```
Mace4 is able to show that without Must Preservation, Modalized Import-Export is not derivable.
```
goal = mie
mb = MaceCommand(goal, assumptions = [axiom for axiom in cond_epistemic_ortho_boolean_lattice
if not axiom == must_preserve])
mb.build_model()
print(mb.model(format='cooked'))
#The non-epistemic version of mie is also not derivable without Must Preservation
goal = simp_mie
mb = MaceCommand(goal, assumptions = [axiom for axiom in cond_epistemic_ortho_boolean_lattice
if not axiom == must_preserve])
mb.build_model()
print(mb.model(format='cooked'))
```
## 13. Qualified Collapse<a id='13'></a>
In Proposition 6.20 in the paper, we consider Qualified Collapse: $\psi\wedge (\psi\to \Diamond (\varphi\wedge\psi))\vdash \varphi\to\psi$.
```
q_collapse = read_expr('And(y,If(y,Diamond(And(x,y)))) = And(And(y,If(y,Diamond(And(x,y)))), If(x,y))')
#A simple version of q_collapse with x,y non-epistemic
simp_q_collapse = read_expr('(B(x) & B(y)) -> And(y,If(y,Diamond(And(x,y)))) = And(And(y,If(y,Diamond(And(x,y)))), If(x,y))')
#If we drop distributivity from the assumptions of Proposition 6.20,
#then Mace4 finds a counterexample to even simple q_collapse,
#which satisfies not only modal_ortho_boolean_lattice but even epistemic_ortho_boolean_lattice
goal = simp_q_collapse
mb = MaceCommand(goal, assumptions = epistemic_ortho_boolean_lattice +
[if_and,if_top,ident,if_to_or,mie])
mb.build_model()
print(mb.model(format='cooked'))
#In fact, Mace4 finds a model falsifying q_collapse while satisfying
#all of cond_epistemic_ortho_boolean_lattice.
goal = q_collapse
mb = MaceCommand(goal, assumptions = cond_epistemic_ortho_boolean_lattice)
mb.build_model()
print(mb.model(format='cooked'))
#It's also noteworthy that the assumptions of Proposition 6.20
#do not entail our episteme axiom
goal = episteme
mb = MaceCommand(goal, assumptions = modal_ortho_boolean_lattice +
[if_and,if_top,ident,if_to_or,mie] +
[cover_def])
mb.build_model()
print(mb.model(format='cooked'))
```
Unfortunately, to date Mace4 has not found a model falsifying simp_q_collapse while satisfying all of cond_epistemic_ortho_boolean_lattice, while Prover9 has not found a proof of simp_q_collapse from cond_epistemic_ortho_boolean_lattice.
```
#goal = simp_q_collapse
#mb = MaceCommand(goal, assumptions = cond_epistemic_ortho_boolean_lattice)
#mb.build_model()
#print(mb.model(format='cooked'))
#goal = simp_q_collapse
#prover = Prover9Command(goal, assumptions = cond_epistemic_ortho_boolean_lattice, timeout=1000000)
#prover.prove()
#print(prover.proof())
```
However, we have some partial results.
First, Mace4 can find a model falsifying simp_q_collapse while satisfying all of cond_epistemic_ortho_boolean_lattice except for mod_cs, simp_cs, and safe_ni.
```
goal = simp_q_collapse
mb = MaceCommand(goal, assumptions = [axiom for axiom in cond_epistemic_ortho_boolean_lattice
if not (axiom == mod_cs or axiom == simp_cs or axiom == safe_ni)])
mb.build_model()
print(mb.model(format='cooked'))
```
Second, Mace4 can find a model falsifying simp_q_collapse while satisfying all of cond_epistemic_ortho_boolean_lattice except for flat (although a restricted version of flat works), safe_must_export, and must_preserve.
```
simp_flat = read_expr('B(x) -> If(x,If(And(x,y),z)) = If(And(x,y),z)')
goal = simp_q_collapse
mb = MaceCommand(goal,
assumptions = [axiom for axiom in cond_epistemic_ortho_boolean_lattice
if not (axiom == flat
or axiom == safe_must_export
or axiom == must_preserve)]
+ [simp_flat])
mb.build_model()
print(mb.model(format='cooked'))
```
Third, we can find a counterexample to the last distributivity step in the proof of Proposition 6.24, namely $\psi\wedge (\neg\psi\vee (\varphi\to\psi))\vdash \psi\wedge (\varphi\to\psi)$, relative to all our axioms except for safe_ni and safe_cem_plus.
```
goal = read_expr("(B(x) & B(y)) -> And(y,Or(Not(y),If(x,y)))=And(y,If(x,y))")
mb = MaceCommand(goal, assumptions = [ax for ax in cond_epistemic_ortho_boolean_lattice
if not(ax == safe_ni or ax == safe_cem_plus)])
mb.build_model()
print(mb.model(format='cooked'))
```
Fourth, we can find a counterexample to the related principle $\psi\wedge(\psi\to(\varphi\to\psi))\vdash \varphi\to\psi$ for $\varphi,\psi$ Boolean, both relative to all our axioms except for simp_cs and mod_cs (though a very simple version of cs is okay) and relative to all our axioms except for flat (though simp_flat is okay).
```
very_simp_cs = read_expr("(B(x) & B(y)) -> And(x,y) = And(And(x,y),If(x,y))")
goal = read_expr("(B(x) & B(y)) -> And(x,If(x,If(y,x))) = And(And(x,If(x,If(y,x))),If(y,x)) ")
mb = MaceCommand(goal, assumptions = [ax for ax in cond_epistemic_ortho_boolean_lattice
if not(ax == simp_cs or ax == mod_cs)] + [very_simp_cs])
mb.build_model()
print(mb.model(format='cooked'))
mb = MaceCommand(goal, assumptions = [ax for ax in cond_epistemic_ortho_boolean_lattice
if not ax == flat] + [simp_flat])
mb.build_model()
print(mb.model(format='cooked'))
```
## 14. Provable principles for which Prover9 does not find a proof <a id='14'></a>
In the paper, we prove Modalized Cautious Transitivity and Modalized Cautious Monotonicity (resp. Simple Cautious Transitivity and Simple Cautious Monotonicity). Unfortunately Prover9/Mace4 does not find a proof/counterexample.
```
mod_caut_trans = read_expr('And(If(x,Box(y)),If(And(x,y),z)) = And(And(If(x,Box(y)),If(And(x,y),z)),If(x,z))')
simp_caut_trans = read_expr('B(z) -> And(If(x,y),If(And(x,y),z)) = And(And(If(x,y),If(And(x,y),z)),If(x,z))')
mod_caut_mon = read_expr('And(If(x,Box(y)),If(x,z)) = And(And(If(x,Box(y)),If(x,z)), If(And(x,y),z))')
simp_caut_mon = read_expr('B(z) -> And(If(x,y),If(x,z)) = And(And(If(x,y),If(x,z)),If(And(x,y),z))')
#goal = mod_caut_trans
#prover = Prover9Command(goal, assumptions = modal_ortho_boolean_lattice
# + [if_and,if_top,mod_mp,must_intro,flat],timeout=1000000)
#prover.prove()
#print(prover.proof())
#goal = mod_caut_trans
#mb = MaceCommand(goal, assumptions = epistemic_ortho_boolean_lattice
# + [if_and,if_top,mod_mp,must_intro,flat])
#mb.build_model()
#print(mb.model(format='cooked'))
#goal = mod_caut_mon
#prover = Prover9Command(goal, assumptions = modal_ortho_boolean_lattice
# + [if_and,if_top,mod_cs,must_intro,flat],timeout=1000000)
#prover.prove()
#print(prover.proof())
#goal = mod_caut_mon
#mb = MaceCommand(goal, assumptions = epistemic_ortho_boolean_lattice
# + [if_and,if_top,mod_cs,must_intro,flat])
#mb.build_model()
#print(mb.model(format='cooked'))
```
Consider $(\varphi\to\psi)\wedge (\psi\to\bot)\vdash \varphi\to\bot$ (Conditional Modus Tollens), which is provable as follows:
1. $\psi\to\bot \vdash \psi\to\varphi$ and $\psi\to\bot\vdash\psi\to\bot$, so $\psi\to\bot \vdash (\psi\to\varphi)\wedge (\psi\to\bot)$
2. By Simple Cautious Monotonicity, $(\psi\to\varphi)\wedge (\psi\to\bot)\vdash (\psi\wedge\varphi)\to\bot\vdash (\varphi\wedge\psi)\to\bot$.
3. By 1 and 2, $\psi\to\bot \vdash (\varphi\wedge\psi)\to\bot$.
4. By Simple Cautious Transitivity, $(\varphi\to\psi)\wedge ((\varphi\wedge\psi)\to\bot)\vdash \varphi\to\bot$.
5. By 3 and 4, $(\varphi\to\psi)\wedge (\psi\to\bot)\vdash \varphi\to\bot$.
Unfortunately Prover9/Mace4 does not find a proof/counterexample.
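Step 1 relies on the lemma that $\psi\to\bot\vdash\psi\to\varphi$, which needs only the bounded lattice axioms together with if_and (since $\bot=\bot\wedge\varphi$). As a sanity check, the sketch below, which is our addition with the hypothetical name step1_lemma, asks Prover9 for this lemma in isolation; being a short equational consequence, it should be a much easier target than cond_mt itself.
```
#Sanity check (sketch): the lemma behind step 1, that psi -> bot entails psi -> phi,
#formalized as If(y,Bot) lying below If(y,x)
step1_lemma = read_expr('If(y,Bot) = And(If(y,Bot),If(y,x))')
prover = Prover9Command(step1_lemma, assumptions = bounded_lattice + [if_and])
prover.prove()
print(prover.proof())
```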
```
cond_mt = read_expr('And(If(x,y),If(y,Bot)) = And(And(If(x,y),If(y,Bot)),If(x,Bot))')
#goal = cond_mt
#prover = Prover9Command(goal, assumptions = ortho_boolean_lattice + [if_and,if_top,simp_caut_trans,simp_caut_mon])
#prover.prove()
#print(prover.proof())
#goal = cond_mt
#mb = MaceCommand(goal, assumptions = ortho_boolean_lattice + [if_and,if_top,simp_caut_trans,simp_caut_mon])
#mb.build_model()
#print(mb.model(format='cooked'))
```
Consider, for non-epistemic $\beta$, $(\varphi\to\psi)\wedge (\psi\to\varphi)\wedge (\psi\to\beta)\vdash \varphi\to\beta$ (Simple Reciprocity), which is provable as follows:
1. By Simple Cautious Monotonicity, $(\psi\to\varphi)\wedge (\psi\to\beta)\vdash (\psi\wedge\varphi)\to\beta\vdash (\varphi\wedge\psi)\to\beta$.
2. By Simple Cautious Transitivity, $(\varphi\to\psi)\wedge ((\varphi\wedge\psi)\to\beta)\vdash \varphi\to\beta$.
3. By 1 and 2, $(\varphi\to\psi)\wedge (\psi\to\varphi)\wedge (\psi\to\beta)\vdash \varphi\to\beta$.
Unfortunately Prover9/Mace4 does not find a proof/counterexample.
```
simp_rep = read_expr('B(z) -> And(And(If(x,y),If(y,x)),If(y,z)) = And(And(And(If(x,y),If(y,x)),If(y,z)),If(x,z))')
#goal = simp_rep
#prover = Prover9Command(goal, assumptions = ortho_boolean_lattice + [if_and,if_top,simp_caut_trans,simp_caut_mon])
#prover.prove()
#print(prover.proof())
#goal = simp_rep
#mb = MaceCommand(goal, assumptions = ortho_boolean_lattice + [if_and,if_top,simp_caut_trans,simp_caut_mon])
#mb.build_model()
#print(mb.model(format='cooked'))
```
## 15. A more economical axiomatization without $\vee$, $\bot$, or $\Diamond$ <a id='15'></a>
```
econ_contra = read_expr('And(x,Not(x))= Not(Top)')
econ_de_morgan = read_expr('And(x,Not(And(Not(x),Not(y)))) = x')
econ_ortho_lattice = [and_id, and_comm, and_assoc, top, econ_contra, invol, econ_de_morgan]
econ_B_dist = read_expr('((B(x) & B(y)) & B(z)) -> And(x,Not(And(Not(y),Not(z)))) = Not(And(Not(And(x,y)),Not(And(x,z))))')
econ_ortho_boolean_lattice = econ_ortho_lattice + [B_top, B_not, B_and, econ_B_dist]
econ_modal_ortho_boolean_lattice = econ_ortho_boolean_lattice + [box_and, box_top]
econ_episteme = read_expr('And(x,Not(Box(x))) = Not(Top)')
econ_epistemic_ortho_boolean_lattice = econ_modal_ortho_boolean_lattice + [factive, econ_episteme]
econ_must_preserve = read_expr('And(Not(Box(Not(And(x,y)))),Box(y)) = And(And(Not(Box(Not(And(x,y)))),Box(y)),If(x,Box(y)))')
econ_weak_boethius = read_expr('And(Not(Box(Not(x))),If(x,y)) = And(And(Not(Box(Not(x))),If(x,y)),Not(If(x,Not(y))))')
econ_must_if_combo = read_expr('If(x,y) = And(If(x,y),Not(And(x,Not(And(Box(x),If(x,y))))))')
econ_safe_cem_plus = read_expr('B(x) -> If(x,Not(And(Not(y),Not(z)))) = And(If(x,Not(And(Not(y),Not(z)))),Not(And(Not(If(x,y)),Not(If(x,z)))))')
econ_cond_ax = [if_and, if_top, ident,
mod_mp, mod_cs, simp_cs, simp_mt,
must_intro, simp_must_import, safe_must_export, econ_must_preserve,
flat, econ_weak_boethius, econ_must_if_combo,
safe_ni, econ_safe_cem_plus]
econ_cond_modal_ortho_boolean_lattice = econ_modal_ortho_boolean_lattice + econ_cond_ax
econ_cond_epistemic_ortho_boolean_lattice = econ_epistemic_ortho_boolean_lattice + econ_cond_ax
```
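As a quick sanity check on the economical axiomatization, once disjunction is defined as $\neg(\neg x\wedge\neg y)$ and $\bot$ as $\neg\top$, excluded middle should be recoverable from the economical axioms alone. The sketch below is our addition (the goal is just $x\vee\neg x=\top$ with the defined disjunction unfolded by hand); it follows from invol, and_comm, and econ_contra.
```
#Sanity check (sketch): with Or(x,y) defined as Not(And(Not(x),Not(y))),
#excluded middle follows from the economical ortholattice axioms
econ_lem = read_expr('Not(And(Not(x),Not(Not(x)))) = Top')
prover = Prover9Command(econ_lem, assumptions = econ_ortho_lattice)
prover.prove()
print(prover.proof())
```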
# Car window
The rear window of an automobile is defogged by attaching a thin, transparent, film-type heating element to its inner surface. By electrically heating this element, a uniform heat flux may be established at the inner surface.
## Without radiation
For 4-mm-thick window glass ($k=0.8 \mathrm{W}/\mathrm{m}.\mathrm{K}$, https://www.saint-gobain-sekurit.com/glossary/automotive-glazing), determine the electrical power required per unit window area to maintain an inner surface temperature of $T_{s,i}=12^\circ \mathrm{C}$ when the interior air temperature and convection coefficient are $T_{\infty,i}= 22^\circ C$ and $h_i=10 \mathrm{W}/ \mathrm{m}^2. \mathrm{K}$, while the exterior (ambient) air temperature and convection coefficient are $T_{\infty,o}=-10^\circ \mathrm{C}$ and $h_o=65 \mathrm{W}/ \mathrm{m}^2. \mathrm{K}$.
## Assumptions
1D, steady state, constant thermodynamic properties and radiation is negligible.
## Sketch of the problem
<img src="carwindowheating.png" alt="my awesome sketch" width=50% >
## Equations
Conservation of energy on the interior surface of the windows dictates that
$$
q''_{conv,i} +\dot{q}=q''_{cond}
$$
with
$$
q''_{conv,i} = \frac{1}{R''_{conv,i}}(T_{\infty,i}-T_{s,i})=\frac{1}{1/10}(22-12)=100\mathrm{W}/\mathrm{m}^2
$$
and
$$
q''_{cond}=\frac{1}{R''_{cond}}(T_{s,i}-T_{s,o})
$$
Conservation of energy on the exterior surface is
$$
q''_{cond}=q''_{conv,o}=q''_{out}
$$
where
$$
q''_{conv,o} = \frac{1}{R''_{conv,o}}(T_{s,o}-T_{\infty,o})
$$
From the interior surface to the outside, the two resistances can be added to form an equivalent resistance
$$
R''_{out}=R''_{cond}+R''_{conv,o}
$$
and
$$
q''_{out}=\frac{1}{R''_{out}}(T_{s,i}-T_{\infty,o})
$$
The numerical value of $q''_{out}$ is given below, and leads to the solution
$$
\dot{q}=q''_{cond}-q''_{conv,i}=q''_{out}-q''_{conv,i}
$$
To calculate $T_{s,o}$,
$$
T_{s,o}=T_{s,i}-R''_{cond}q''_{cond}
$$
```
import schemdraw as schem
import schemdraw.elements as e
import matplotlib.pyplot as plt
import numpy as np
import math
import scipy.constants as csts
from Libraries import HT_thermal_resistance as res
# Parameters
L_glass = 4e-3 #m (4 mm)
k_glass = 0.4 #W/m.K (the problem statement above quotes 0.8 W/m.K; the results below use 0.4)
T_infty_i = 22 #C
h_i = 10 #W/m^2.K
T_infty_o = -10. #C
h_o = 65 #W/m^2.K
T_si = 12 #C
Rpp = []
Rpp.append(res.Resistance("$R''_{conv,i}$","W/m^2"))
Rpp[0].convection(h_i)
Rpp.append(res.Resistance("$R''_{cond}$","W/m^2"))
Rpp[1].cond_plane(k_glass,L_glass)
Rpp.append(res.Resistance("$R''_{conv,o}$","W/m^2"))
Rpp[2].convection(h_o)
d = schem.Drawing()
d.add(e.DOT, label = r"$T_{\infty,i}$")
d.add(e.RES, d = 'right', label = Rpp[0].name)
d.add(e.DOT, label = r"$T_{s,i}$")
R1 = d.add(e.RES, d = 'right', label = Rpp[1].name)
d.add(e.DOT, label = r"$T_{s,o}$")
d.add(e.RES, d='right', label = Rpp[2].name)
d.add(e.DOT, label="$T_{\infty,o}$")
L1 = d.add(e.LINE, toplabel = "$\dot{q}$", endpts = [[3, -2.25], [3, -.25]], color = 'orange')
a1 = d.labelI(L1, arrowofst = 0)
a1.color = 'orange'
L2 = d.add(e.LINE, botlabel = "$q''_{conv,i}$", endpts = [[0.5, -0.5], [2.5, -0.5]], color = 'red')
d.labelI(L2, arrowofst = 0)
L3 = d.add(e.LINE, botlabel = "$q''_{cond}$", endpts = [[3.5, -0.5], [5.5, -0.5]], color = 'black')
d.labelI(L3, arrowofst = 0)
L4 = d.add(e.LINE, botlabel = "$q''_{conv,o}$", endpts = [[6.5, -0.5], [8.5, -0.5]], color = 'blue')
d.labelI(L4, arrowofst = 0)
L5 = d.add(e.LINE, botlabel = "$q''_{out}$", endpts = [[9.25, 0], [11.25, 0]], color = 'blue')
d.labelI(L5, arrowofst = 0)
d.draw()
Rpp_out = Rpp[1].R +Rpp[2].R
qpp_out = (1./Rpp_out)*(T_si - T_infty_o)
qpp_conv_i = (1./Rpp[0].R)*(T_infty_i - T_si)
qdot = qpp_out - qpp_conv_i
print("The energy needed for the heating element is %.0f W/m^2 to maintain a temperature of %.0f C on the interior surface" %(qdot,T_si))
```
## With radiation
Now solve the same problem but with radiation using $\varepsilon=0.95$ and $T_{sur}=T_{\infty,o}$
## Assumptions
1D, steady state, constant thermodynamic properties and for radiation $T_{sur}=T_{\infty,o}$.
## Sketch of the problem
<img src="carwindowheatradiation.png" alt="my awesome sketch" width=50% >
## Equations
Conservation of energy on the interior surface of the windows dictates that
$$
q''_{conv,i} +\dot{q}=q''_{cond}
$$
with
$$
q''_{conv,i} = \frac{1}{R''_{conv,i}}(T_{\infty,i}-T_{s,i})=\frac{1}{1/10}(22-12)=100\mathrm{W}/\mathrm{m}^2
$$
and
$$
q''_{cond}=\frac{1}{R''_{cond}}(T_{s,i}-T_{s,o})
$$
Conservation of energy on the exterior surface is
$$
q''_{cond}=q''_{conv,o}+q''_{rad,o} = q''_{out}
$$
where
$$
q''_{conv,o} = \frac{1}{R''_{conv,o}}(T_{s,o}-T_{\infty,o})
$$
and
$$
q''_{rad,o}=\frac{1}{R''_{rad,o}}(T_{s,o}-T_{sur}),\; R''_{rad,o}=\left(\varepsilon\sigma(T_{s,o}+T_{sur})(T_{s,o}^2+T_{sur}^2)\right)^{-1}
$$
Since $R''_{rad,o}$ is a function of $T_{s,o}$, the problem is solved iteratively. First, the thermal circuit on the right hand side of the interior surface must be reduced to an equivalent resistance, which is a function of $T_{s,o}$.
The total equivalent resistance on the RHS of $T_{s,i}$ is
$$
R''_{out} = R''_{cond}+R''_{conv+rad,o}
$$
with
$$
R''_{conv+rad,o}=\left(\frac{1}{R''_{conv,o}}+\frac{1}{R''_{rad,o}}\right)^{-1}
$$
yielding
$$
q''_{out}=\frac{1}{R''_{out}}(T_{s,i}-T_{\infty,o})
$$
The temperature on the outer surface of the glass can then be computed (using $q''_{cond}=q''_{out}$ at steady state):
$$
T_{s,o}=T_{s,i}-R''_{cond}q''_{cond}
$$
The iterative method consists of:
* Step 0: choose an initial guess $T_{s,o}^{(n)}$
* Step 1: Calculate $h_r(T_{s,o}^{(n)})$, then $R''^{(n)}_{out}$ and finally $q''^{(n)}_{out}$
* Step 2: Calculate $T_{s,o}^{(n+1)}$ from $q''^{(n)}_{out}$ from the equation above.
* Step 3: Compute the error $e_n=\vert T_{s,o}^{(n)}- T_{s,o}^{(n+1)}\vert$. If $e_n>\epsilon$, $\epsilon$ being the accuracy desired on the temperature, repeat steps 1 to 3 using $T_{s,o}^{(n+1)}$ as the new guess.
Once $T_{s,o}$ is converged, $q''_{out}$ is converged and
$$
\dot{q}=q''_{cond}-q''_{conv,i}=q''_{out}-q''_{conv,i}
$$
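The cell below implements this iteration with the thermal-resistance helper library. For clarity, here is a minimal standalone sketch of the same fixed-point loop using only scipy.constants; the function name and default arguments are ours, not part of the notebook's library.
```
#Minimal standalone sketch of the fixed-point iteration (names and defaults are ours)
import scipy.constants as csts
def solve_T_so(T_si=12., T_inf_o=-10., T_sur=-10., k=0.4, L=4e-3,
               h_o=65., eps=0.95, tol=0.1, max_iter=10):
    R_cond = L/k #conduction resistance of the glass, m^2.K/W
    T_so = 5. #initial guess, deg C
    for _ in range(max_iter):
        Ts_K, Tsur_K = T_so + 273.15, T_sur + 273.15 #convert to K for radiation
        h_r = eps*csts.sigma*(Ts_K + Tsur_K)*(Ts_K**2 + Tsur_K**2) #radiation coefficient
        R_out = R_cond + 1./(h_o + h_r) #conduction in series with parallel conv/rad
        q_out = (T_si - T_inf_o)/R_out
        T_so_new = T_si - R_cond*q_out #updated outer surface temperature
        converged = abs(T_so_new - T_so) < tol
        T_so = T_so_new
        if converged:
            break
    return T_so, q_out

T_so_sketch, q_out_sketch = solve_T_so()
print("Sketch estimate: T_so = %.2f C, q''_out = %.0f W/m^2" % (T_so_sketch, q_out_sketch))
```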
```
# Parameters
L_glass = 4e-3 #m (4 mm)
k_glass = 0.4 #W/m.K
T_infty_i = 22 #C
h_i = 10 #W/m^2.K
T_infty_o = T_sur= -10. #C
h_o = 65 #W/m^2.K
T_si = 12 #C
eps = 0.95
Rpp = []
Rpp.append(res.Resistance("$R''_{conv,i}$","W/m^2"))
Rpp[0].convection(h_i)
Rpp.append(res.Resistance("$R''_{cond}$","W/m^2"))
Rpp[1].cond_plane(k_glass,L_glass)
Rpp.append(res.Resistance("$R''_{conv,o}$","W/m^2"))
Rpp[2].convection(h_o)
Rpp.append(res.Resistance("$R''_{rad,o}$","W/m^2"))
d = schem.Drawing()
d.add(e.DOT, label = r"$T_{\infty,i}$")
d.add(e.RES, d = 'right', label = Rpp[0].name)
d.add(e.DOT, label = r"$T_{s,i}$")
R1 = d.add(e.RES, d = 'right', label = Rpp[1].name)
d.add(e.DOT, rgtlabel = r"$T_{s,o}$")
d.add(e.LINE, d = 'up', l = 1.5)
d.add(e.RES, d='right', label = Rpp[2].name)
d.add(e.LINE, d = 'down', l = 1.5)
d.add(e.LINE, d = 'right', l = 1.5)
d.add(e.DOT, label="$T_{\infty,o}$")
d.add(e.LINE, d = 'down', l =1.5, xy = R1.end)
d.add(e.RES, d='right', label = Rpp[3].name)
d.add(e.LINE, d = 'up', l = 1.5)
L1 = d.add(e.LINE, toplabel = "$\dot{q}$", endpts = [[3, -2.25], [3, -.25]], color = 'orange')
a1 = d.labelI(L1, arrowofst = 0)
a1.color = 'orange'
L2 = d.add(e.LINE, botlabel = "$q''_{conv,i}$", endpts = [[0.5, -0.5], [2.5, -0.5]], color = 'red')
d.labelI(L2, arrowofst = 0)
L3 = d.add(e.LINE, botlabel = "$q''_{cond}$", endpts = [[3.5, -0.5], [5.5, -0.5]], color = 'black')
d.labelI(L3, arrowofst = 0)
L4 = d.add(e.LINE, botlabel = "$q''_{conv,o}$", endpts = [[6.5, 1.0], [8.5, 1.0]], color = 'blue')
d.labelI(L4, arrowofst = 0)
L41 = d.add(e.LINE, botlabel = "$q''_{rad,o}$", endpts = [[6.5, -2.0], [8.5, -2.0]], color = 'blue')
d.labelI(L41, arrowofst = 0)
L5 = d.add(e.LINE, botlabel = "$q''_{out}$", endpts = [[10.75, 0], [12.75, 0]], color = 'blue')
d.labelI(L5, arrowofst = 0)
d.draw()
from Libraries import thermodynamics as thermo
e_threshold = 0.1
e = np.inf
T_so = 5. #C
iteration = 0
while (e > e_threshold) and (iteration < 10):
T_so_ini = T_so
Rpp[3].radiation(eps,thermo.C2K(T_so),thermo.C2K(T_sur))
Rpp_convrad_o = 1./(1/Rpp[2].R + 1/Rpp[3].R)
Rpp_out = Rpp[1].R + Rpp_convrad_o
qpp_out = 1/Rpp_out*(T_si - T_infty_o)
T_so = T_si - Rpp[1].R*qpp_out
e = abs(T_so - T_so_ini)
iteration += 1
print("iteration: %i, T_so = %.10f C, error = %.4e" %(iteration, T_so, e))
qpp_conv_i = (1./Rpp[0].R)*(T_infty_i - T_si)
qdot = qpp_out - qpp_conv_i
print("The energy needed for the heating element is %.0f W/m^2 to maintain a temperature of %.0f C on the interior surface" %(qdot,T_si))
800/767 #ratio of required power with radiation (~800 W/m^2) to without (~767 W/m^2), i.e. about a 4.3% increase
```
## Conclusion
Note that including radiation increases the required electrical power by about 4.3%. As a first approximation, radiation is often neglected unless the heat transfer is governed by radiation. This assumption allows for the linearization of the thermal circuit and a straightforward, direct solution. However, always solve the problem with radiation if an emissivity is provided or you are explicitly asked to include radiation.
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt # standard graphics
import seaborn as sns # fancier graphics
# Read the dataset
import pandas as pd
data = pd.read_csv('E:/Data Science/Modules/Module 3/DataSets/EDA/Avocado/avocado.csv')
type(data)
# Show the first five rows
data.head()
# Show the last five rows
data.tail()
```
# Clean the data
```
data = data.drop(columns = 'Unnamed: 0')
data.head()
data.info()
```
### What are 4046, 4225 and 4770?
The guide to avocado varieties at https://producebrands.com/the-avocado/ explains these codes:
- 4046 = Hass – small
- 4225 = Hass – large
- 4770 = Hass – extra large
We rename the columns accordingly.
```
data = data.rename(columns = {'4046': 'small', '4225': 'large', '4770': 'xl'})
data.head()
```
# Descriptive Statistics
```
# rows, columns
data.shape
data.columns
# isnull() to check null values.
# sum() counts the True values per column (True=1, False=0)
data.isnull().sum()
# Show the descriptive statistics using describe()
data.describe()
# What is the mean AveragePrice? (notice that there is a column named 'AveragePrice')
data['AveragePrice'].mean()
# Print the mean with two decimals only
m = data['AveragePrice'].mean()
print('Mean value of AveragePrice = ', round(m,2))
# What is the standard deviation of Average Price?
s = data['AveragePrice'].std()
print('Standard Deviation of AveragePrice = ', round(s,2))
```
##### If we compare data.head() and data.describe(), we notice that the 'type' and 'region' columns were dropped from the descriptive statistics table because they contain text (categorical data). However, we can still study how many unique values these variables contain and how those values are distributed.
```
# What are the unique type values?
data['type'].unique()
# How many rows (i.e., observations) are there for each type?
data['type'].value_counts()
# Display the descriptive statistics grouped by 'type'
data.groupby('type').describe()
# Display the descriptive statistics for 'AveragePrice' grouped by 'type'
data_by_type = data.groupby('type')
data_by_type['AveragePrice'].describe()
# Compare the mean of AveragePrice between different types?
data['AveragePrice'].groupby(data['type']).mean()
# Show the distribution of the average prices using Histogram? (Hint: use bins = 30)
data.hist(column = 'AveragePrice', bins = 30, figsize = (8, 6))
plt.xlabel('Price')
plt.ylabel('Count')
plt.title('Distribution of avocado average prices')
plt.show()
# Use seaborn to create a distribution plot of 'AveragePrice'
plt.figure(figsize=(8,5))
plt.title("Distribution of avocado average price")
ax = sns.distplot(data["AveragePrice"], color = 'b')
plt.show()
#set figure size
plt.figure(figsize=(12,5))
# Plot the distributions for the conventional and organic types
sns.distplot(data["AveragePrice"][data['type'] == 'conventional'], color = 'r', label = 'conventional')
sns.distplot(data["AveragePrice"][data['type'] == 'organic'], color = 'g', label = 'organic')
# add legend, show the graphics
plt.legend()
plt.grid()
plt.title("Distribution of average price grouped by type")
plt.show()
# Make a boxplot using pandas or seaborn to compare 'AveragePrice' by 'type'
#boxplot using pandas
data.boxplot(column = 'AveragePrice', by = 'type', figsize = (8,6))
plt.show()
# boxplot with seaborn
plt.figure(figsize=(12,5))
sns.boxplot(y = "type", x = "AveragePrice", data = data)
plt.xlim([0, 4])
plt.show()
```
|
github_jupyter
|
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt # standard graphics
import seaborn as sns # fancier graphics
# Read the dataset
import pandas as pd
data = pd.read_csv('E:/Data Science/Modules/Module 3/DataSets/EDA/Avocado/avocado.csv')
type(data)
# Show the first five rows
data.head()
# Show the last five rows
data.tail()
data = data.drop(columns = 'Unnamed: 0')
data.head()
data.info()
data = data.rename(columns = {'4046': 'small', '4225': 'large', '4770': 'xl'})
data.head()
# rows, columns
data.shape
data.columns
# isnull() to check null values.
# sum() counts the True values per column (True=1, False=0)
data.isnull().sum()
# Show the descriptive statistics using describe()
data.describe()
# What is the mean AveragePrice? (notice that there is a column named 'AveragePrice')
data['AveragePrice'].mean()
# Print the mean with two decimals only
m = data['AveragePrice'].mean()
print('Mean value of AveragePrice = ', round(m,2))
# What is the standard deviation of Average Price?
s = data['AveragePrice'].std()
print('Standard Deviation of AveragePrice = ', round(s,2))
# What are the unique type values?
data['type'].unique()
# How many rows (i.e., observations) are there for each type?
data['type'].value_counts()
# Display the descriptive statistics grouped by 'type'
data.groupby('type').describe()
# Display the descriptive statistics for 'AveragePrice' grouped by 'type'
data_by_type = data.groupby('type')
data_by_type['AveragePrice'].describe()
# Compare the mean of AveragePrice between different types?
data['AveragePrice'].groupby(data['type']).mean()
# Show the distribution of the average prices using Histogram? (Hint: use bins = 30)
data.hist(column = 'AveragePrice', bins = 30, figsize = (8, 6))
plt.xlabel('Price')
plt.ylabel('Count')
plt.title('Distribution of avocado average prices')
plt.show()
# Use seaborn to create a distribution plot of 'AveragePrice'
plt.figure(figsize=(8,5))
plt.title("Distribution of avocado average price")
ax = sns.distplot(data["AveragePrice"], color = 'b')
plt.show()
#set figure size
plt.figure(figsize=(12,5))
# Plot the distributions for the conventional and organic types
sns.distplot(data["AveragePrice"][data['type'] == 'conventional'], color = 'r', label = 'conventional')
sns.distplot(data["AveragePrice"][data['type'] == 'organic'], color = 'g', label = 'organic')
# add legend, show the graphics
plt.legend()
plt.grid()
plt.title("Distribution of average price grouped by type")
plt.show()
# Make a boxplot using pandas or seaborn to compare 'AveragePrice' by 'type'
#boxplot using pandas
data.boxplot(column = 'AveragePrice', by = 'type', figsize = (8,6))
plt.show()
# boxplot with seaborn
plt.figure(figsize=(12,5))
sns.boxplot(y = "type", x = "AveragePrice", data = data)
plt.xlim([0, 4])
plt.show()
| 0.728265 | 0.840128 |
# 第3章: 正規表現
Wikipediaの記事を以下のフォーマットで書き出したファイルjawiki-country.json.gzがある.
* 1行に1記事の情報がJSON形式で格納される
* 各行には記事名が”title”キーに,記事本文が”text”キーの辞書オブジェクトに格納され,そのオブジェクトがJSON形式で書き出される
* ファイル全体はgzipで圧縮される
以下の処理を行うプログラムを作成せよ.
```
! wget https://nlp100.github.io/data/jawiki-country.json.gz
! gunzip jawiki-country.json.gz
```
## 20. JSONデータの読み込み
Wikipedia記事のJSONファイルを読み込み,「イギリス」に関する記事本文を表示せよ.問題21-29では,ここで抽出した記事本文に対して実行せよ.
```
import json
JSON_FILE = 'jawiki-country.json'
def get_country_text(country):
with open(JSON_FILE, encoding='utf-8') as f:
# このJSONファイルは特殊で、1行ごとにJSONデータが保存されている。
for line in f:
data = json.loads(line)
if data.get('title') == country:
return data.get('text')
return ''
desc_uk = get_country_text(u'イギリス')
print(desc_uk)
```
## 21. カテゴリ名を含む行を抽出
記事中でカテゴリ名を宣言している行を抽出せよ.
```
def get_categories(desc):
return [line for line in desc.split('\n') if '[[Category:' in line]
desc_uk = get_country_text(u'イギリス')
for cat in get_categories(desc_uk):
print(cat)
```
## 22. カテゴリ名の抽出
記事のカテゴリ名を(行単位ではなく名前で)抽出せよ.
```
import re
def get_category_names(desc):
res = []
for line in get_categories(desc):
cat_mo = re.search(r'\[{2}Category:(.*?)(\|.*)?]]', line)
res.append(cat_mo.group(1))
return res
desc_uk = get_country_text(u'イギリス')
for cat_name in get_category_names(desc_uk):
print(cat_name)
```
## 23. セクション構造
記事中に含まれるセクション名とそのレベル(例えば”== セクション名 ==”なら1)を表示せよ.
```
import re
def print_section_struct(desc):
for line in desc.split('\n'):
cat_mo = re.search(r'^(=+)\s*(.+?)\s*=+$', line)
if cat_mo is not None:
print(cat_mo.group(2), len(cat_mo.group(1)) - 1)
desc_uk = get_country_text(u'イギリス')
print_section_struct(desc_uk)
```
## 24. ファイル参照の抽出
記事から参照されているメディアファイルをすべて抜き出せ.
```
def get_media_files(desc):
res = []
for line in desc.split('\n'):
# 1行に複数のメディアファイルがある場合があるので、1行ずつループ
# Python 3.8からwhile条件文に代入が使えるようになった
# 「ファイル:」以降はなるべく少ない範囲でマッチした範囲
# 末尾は"|"か"]"でマッチさせる
while (file_mo := re.search(r'ファイル:(.*?)([\|\]])', line)):
res.append(file_mo.group(1))
line = line[file_mo.end():]
return res
print(u'メディアファイル一覧')
desc_uk = get_country_text(u'イギリス')
for idx, filename in enumerate(get_media_files(desc_uk), start=1):
print(idx, filename)
```
memo:<br>
`"[[ファイル:...]]"`で抽出しようとしたが、`[[..[[..]]..]]`のように入れ子になっているものもあり、対応するためにはコードが複雑になるため、ここではシンプルに実装した。
## 25. テンプレートの抽出
記事中に含まれる「基礎情報」テンプレートのフィールド名と値を抽出し,辞書オブジェクトとして格納せよ.
memo:<br>
* 「基礎情報」テンプレートは以下の範囲
```
{{基礎情報 国
...
}}
```
* フィールド名と値は以下のように記述されている
```
|国旗画像 = Flag of the United Kingdom.svg
|国章画像 = [[ファイル:Royal Coat of Arms of the United Kingdom.svg|85px|イギリスの国章]]
...
```
```
def get_basic_info_template(desc):
"""記事から基礎情報テンプレートを取り出す
"""
res = []
is_binfo = False
for line in desc.split('\n'):
if re.match(r'^\{{2}基礎情報.*$', line):
is_binfo = True
continue
elif line == '}}':
break
if is_binfo:
res.append(line)
return res
def get_field_value(temp):
"""テンプレートからフィールドと値を取り出す
"""
fv_mo = re.search(r'^\|(.*?)\s*?=\s*(.*)$', temp)
if fv_mo is not None:
return fv_mo.group(1, 2)
else:
return (None, None)
def create_dict_field_value(desc):
"""基礎情報テンプレートからフィールドと値の辞書を作る
"""
field_value = {}
    for temp in get_basic_info_template(desc):
field, value = get_field_value(temp)
if field is not None:
field_value[field] = value
return field_value
desc_uk = get_country_text(u'イギリス')
binfo_dic = create_dict_field_value(desc_uk)
for idx, field in enumerate(binfo_dic.keys()):
print(f'{idx}: {field} = "{binfo_dic.get(field)}"')
```
## 26. 強調マークアップの除去
25の処理時に,テンプレートの値からMediaWikiの強調マークアップ(弱い強調,強調,強い強調のすべて)を除去してテキストに変換せよ(参考: マークアップ早見表).
memo:<br>
48行目の「'''グレートブリテン及び北アイルランド連合王国'''」だけが対象。
```
import re
def remove_emphasis(line):
"""強調マークアップの除去
"""
return re.sub(r"'{2,5}", '', line)
desc_uk = get_country_text(u'イギリス')
binfo_dic = create_dict_field_value(desc_uk)
for idx, (field, val) in enumerate(binfo_dic.items(), start=1):
val = remove_emphasis(binfo_dic.get(field))
print(f'{idx}: {field} = "{val}"')
```
## 27. 内部リンクの除去
26の処理に加えて,テンプレートの値からMediaWikiの内部リンクマークアップを除去し,テキストに変換せよ(参考: マークアップ早見表).
memo:<br>
内部リンクは以下の書式。
```
[[記事名]]
[[記事名|表示文字]]
[[記事名#節名|表示文字]]
```
```
def split_internal_link(line):
"""内部リンクを分割
"""
if re.match(r'\[{2}ファイル:', line):
# ファイルリンクなら終了
return None, None, None
mo = re.search(r'\[{2}(.*?)(\#(.*?))?(\|(.*?))?\]{2}', line)
if mo is not None:
return mo.group(1, 3, 5)
else:
return None
def remove_emphasis_and_intrlink(val):
"""強調と内部リンクを除去
"""
val = remove_emphasis(val)
res = val
while (intrl_mo := re.search(r'\[{2}.*?\]{2}', val)):
#print('DEBUG:', intrl_mo.group(0))
topic, subt, prstr = split_internal_link(intrl_mo.group(0))
#print(topic)
#print(subt)
#print(prstr)
replace_str = ''
if prstr is not None:
# 表示名があれば、表示名で置換
replace_str = prstr
elif topic is not None:
# 表示名がなく、記事名があれば記事名で置換
replace_str = topic
if replace_str:
res = res.replace(intrl_mo.group(0), replace_str)
val = val[intrl_mo.end():]
return res
desc_uk = get_country_text(u'イギリス')
binfo_dic = create_dict_field_value(desc_uk)
for idx, (field, val) in enumerate(binfo_dic.items(), start=1):
#print(f'{idx}: {field} = "{val}"')
val = remove_emphasis_and_intrlink(binfo_dic.get(field))
print(f'{idx}: {field} = "{val}"')
```
## 28. MediaWikiマークアップの除去
27の処理に加えて,テンプレートの値からMediaWikiマークアップを可能な限り除去し,国の基本情報を整形せよ.
```
def remove_pattern(line, pattern, pos=0, repl=''):
res = line
while (mo := re.search(pattern, line)):
if mo is not None:
repl_str = ''
if repl:
repl_str = repl
elif pos != 0 and mo.group(pos) is not None:
repl_str = mo.group(pos)
res = res.replace(mo.group(0), repl_str)
line = line[mo.end():]
return res
def remove_markup(line):
"""マークアップの削除
"""
# 強調と内部リンクを削除
res = remove_emphasis_and_intrlink(line)
# langの削除
res = remove_pattern(res, r'\{{2}lang\|..\|(.*?)\}{2}', 1)
# ファイルの削除
res = remove_pattern(res, r'\[{2}ファイル:(.*?)(\|(.*?))?(\|(.*))?\]{2}', 5)
# 仮リンクの削除
res = remove_pattern(res, r'\{{2}仮リンク\|(.*?)(\|(.*?))(\|(.*?))\}{2}', 1)
# Cite webの削除
res = remove_pattern(res, r'\{{2}Cite web.*?\}{2}')
# centerの削除
res = remove_pattern(res, r'\{{2}center.*?\}{2}')
# <ref />の削除
res = remove_pattern(res, r'<ref .*?/>')
# {{0}}をスペースに置換
res = remove_pattern(res, r'\{{2}0\}{2}', repl=' ')
# <br />を改行に置換
res = remove_pattern(res, r'<br\s*/>', repl='\n')
# [http:... ](外部リンク)を削除
res = remove_pattern(res, r'\[http.*\]')
# {en icon}を削除
res = remove_pattern(res, r'\{en icon\}')
# <ref .. />を削除
res = remove_pattern(res, r'<ref\s+.*?/>')
# <ref></ref>(空の)refを削除
res = remove_pattern(res, r'<ref\s*.*?></ref>')
# <ref>..<ref/>を削除
res = remove_pattern(res, r'<ref\s*.*?>(.*?)</ref>', 1)
# <references/>を削除
res = remove_pattern(res, r'<references/>')
return res
desc_uk = get_country_text(u'イギリス')
binfo_dic = create_dict_field_value(desc_uk)
for idx, (field, val) in enumerate(binfo_dic.items(), start=1):
print(f'{idx}: {field} = "{val}"')
val = remove_markup(binfo_dic.get(field))
print(f'{idx}: {field} = "{val}"')
```
## 29. 国旗画像のURLを取得する
テンプレートの内容を利用し,国旗画像のURLを取得せよ.(ヒント: MediaWiki APIのimageinfoを呼び出して,ファイル参照をURLに変換すればよい)
memo:<br>
このページを参照。
https://www.mediawiki.org/wiki/API:Imageinfo/ja
```
import requests
MEDIAWIKI_URL = 'https://www.mediawiki.org/w/api.php'
MEDIAWIKI_PARAMS = {
'action': 'query',
'format': 'json',
'prop' : 'imageinfo',
'iiprop': 'url',
}
def get_flag_picture_url(desc):
# 国旗画像のテキストを取り出す
    binfo_dic = create_dict_field_value(desc)
flag_val = binfo_dic.get(u'国旗画像')
fname = re.sub('^.*?:', '', re.sub(r'\|.*$', '', flag_val))
# MediaWiki APIで国旗画像
params = MEDIAWIKI_PARAMS
params['titles'] = 'File:'+fname.replace(' ', ' ')
sess = requests.Session()
res = sess.get(url=MEDIAWIKI_URL, params=params)
res_json = res.json()
# 画像URLを返す
return res_json['query']['pages']['-1']['imageinfo'][0]['url']
desc_uk = get_country_text(u'イギリス')
flag_url = get_flag_picture_url(desc_uk)
print(flag_url)
```
|
github_jupyter
|
! wget https://nlp100.github.io/data/jawiki-country.json.gz
! gunzip jawiki-country.json.gz
import json
JSON_FILE = 'jawiki-country.json'
def get_country_text(country):
with open(JSON_FILE, encoding='utf-8') as f:
# このJSONファイルは特殊で、1行ごとにJSONデータが保存されている。
for line in f:
data = json.loads(line)
if data.get('title') == country:
return data.get('text')
return ''
desc_uk = get_country_text(u'イギリス')
print(desc_uk)
def get_categories(desc):
return [line for line in desc.split('\n') if '[[Category:' in line]
desc_uk = get_country_text(u'イギリス')
for cat in get_categories(desc_uk):
print(cat)
import re
def get_category_names(desc):
res = []
for line in get_categories(desc):
cat_mo = re.search(r'\[{2}Category:(.*?)(\|.*)?]]', line)
res.append(cat_mo.group(1))
return res
desc_uk = get_country_text(u'イギリス')
for cat_name in get_category_names(desc_uk):
print(cat_name)
import re
def print_section_struct(desc):
for line in desc.split('\n'):
cat_mo = re.search(r'^(=+)\s*(.+?)\s*=+$', line)
if cat_mo is not None:
print(cat_mo.group(2), len(cat_mo.group(1)) - 1)
desc_uk = get_country_text(u'イギリス')
print_section_struct(desc_uk)
def get_media_files(desc):
res = []
for line in desc.split('\n'):
# 1行に複数のメディアファイルがある場合があるので、1行ずつループ
# Python 3.8からwhile条件文に代入が使えるようになった
# 「ファイル:」以降はなるべく少ない範囲でマッチした範囲
# 末尾は"|"か"]"でマッチさせる
while (file_mo := re.search(r'ファイル:(.*?)([\|\]])', line)):
res.append(file_mo.group(1))
line = line[file_mo.end():]
return res
print(u'メディアファイル一覧')
desc_uk = get_country_text(u'イギリス')
for idx, filename in enumerate(get_media_files(desc_uk), start=1):
print(idx, filename)
{{基礎情報 国
...
}}
|国旗画像 = Flag of the United Kingdom.svg
|国章画像 = [[ファイル:Royal Coat of Arms of the United Kingdom.svg|85px|イギリスの国章]]
...
def get_basic_info_template(desc):
"""記事から基礎情報テンプレートを取り出す
"""
res = []
is_binfo = False
for line in desc.split('\n'):
if re.match(r'^\{{2}基礎情報.*$', line):
is_binfo = True
continue
elif line == '}}':
break
if is_binfo:
res.append(line)
return res
def get_field_value(temp):
"""テンプレートからフィールドと値を取り出す
"""
fv_mo = re.search(r'^\|(.*?)\s*?=\s*(.*)$', temp)
if fv_mo is not None:
return fv_mo.group(1, 2)
else:
return (None, None)
def create_dict_field_value(desc):
"""基礎情報テンプレートからフィールドと値の辞書を作る
"""
field_value = {}
    for temp in get_basic_info_template(desc):
field, value = get_field_value(temp)
if field is not None:
field_value[field] = value
return field_value
desc_uk = get_country_text(u'イギリス')
binfo_dic = create_dict_field_value(desc_uk)
for idx, field in enumerate(binfo_dic.keys()):
print(f'{idx}: {field} = "{binfo_dic.get(field)}"')
import re
def remove_emphasis(line):
"""強調マークアップの除去
"""
return re.sub(r"'{2,5}", '', line)
desc_uk = get_country_text(u'イギリス')
binfo_dic = create_dict_field_value(desc_uk)
for idx, (field, val) in enumerate(binfo_dic.items(), start=1):
val = remove_emphasis(binfo_dic.get(field))
print(f'{idx}: {field} = "{val}"')
[[記事名]]
[[記事名|表示文字]]
[[記事名#節名|表示文字]]
def split_internal_link(line):
"""内部リンクを分割
"""
if re.match(r'\[{2}ファイル:', line):
# ファイルリンクなら終了
return None, None, None
mo = re.search(r'\[{2}(.*?)(\#(.*?))?(\|(.*?))?\]{2}', line)
if mo is not None:
return mo.group(1, 3, 5)
else:
return None
def remove_emphasis_and_intrlink(val):
"""強調と内部リンクを除去
"""
val = remove_emphasis(val)
res = val
while (intrl_mo := re.search(r'\[{2}.*?\]{2}', val)):
#print('DEBUG:', intrl_mo.group(0))
topic, subt, prstr = split_internal_link(intrl_mo.group(0))
#print(topic)
#print(subt)
#print(prstr)
replace_str = ''
if prstr is not None:
# 表示名があれば、表示名で置換
replace_str = prstr
elif topic is not None:
# 表示名がなく、記事名があれば記事名で置換
replace_str = topic
if replace_str:
res = res.replace(intrl_mo.group(0), replace_str)
val = val[intrl_mo.end():]
return res
desc_uk = get_country_text(u'イギリス')
binfo_dic = create_dict_field_value(desc_uk)
for idx, (field, val) in enumerate(binfo_dic.items(), start=1):
#print(f'{idx}: {field} = "{val}"')
val = remove_emphasis_and_intrlink(binfo_dic.get(field))
print(f'{idx}: {field} = "{val}"')
def remove_pattern(line, pattern, pos=0, repl=''):
res = line
while (mo := re.search(pattern, line)):
if mo is not None:
repl_str = ''
if repl:
repl_str = repl
elif pos != 0 and mo.group(pos) is not None:
repl_str = mo.group(pos)
res = res.replace(mo.group(0), repl_str)
line = line[mo.end():]
return res
def remove_markup(line):
"""マークアップの削除
"""
# 強調と内部リンクを削除
res = remove_emphasis_and_intrlink(line)
# langの削除
res = remove_pattern(res, r'\{{2}lang\|..\|(.*?)\}{2}', 1)
# ファイルの削除
res = remove_pattern(res, r'\[{2}ファイル:(.*?)(\|(.*?))?(\|(.*))?\]{2}', 5)
# 仮リンクの削除
res = remove_pattern(res, r'\{{2}仮リンク\|(.*?)(\|(.*?))(\|(.*?))\}{2}', 1)
# Cite webの削除
res = remove_pattern(res, r'\{{2}Cite web.*?\}{2}')
# centerの削除
res = remove_pattern(res, r'\{{2}center.*?\}{2}')
# <ref />の削除
res = remove_pattern(res, r'<ref .*?/>')
# {{0}}をスペースに置換
res = remove_pattern(res, r'\{{2}0\}{2}', repl=' ')
# <br />を改行に置換
res = remove_pattern(res, r'<br\s*/>', repl='\n')
# [http:... ](外部リンク)を削除
res = remove_pattern(res, r'\[http.*\]')
# {en icon}を削除
res = remove_pattern(res, r'\{en icon\}')
# <ref .. />を削除
res = remove_pattern(res, r'<ref\s+.*?/>')
# <ref></ref>(空の)refを削除
res = remove_pattern(res, r'<ref\s*.*?></ref>')
# <ref>..<ref/>を削除
res = remove_pattern(res, r'<ref\s*.*?>(.*?)</ref>', 1)
# <references/>を削除
res = remove_pattern(res, r'<references/>')
return res
desc_uk = get_country_text(u'イギリス')
binfo_dic = create_dict_field_value(desc_uk)
for idx, (field, val) in enumerate(binfo_dic.items(), start=1):
print(f'{idx}: {field} = "{val}"')
val = remove_markup(binfo_dic.get(field))
print(f'{idx}: {field} = "{val}"')
import requests
MEDIAWIKI_URL = 'https://www.mediawiki.org/w/api.php'
MEDIAWIKI_PARAMS = {
'action': 'query',
'format': 'json',
'prop' : 'imageinfo',
'iiprop': 'url',
}
def get_flag_picture_url(desc):
# 国旗画像のテキストを取り出す
    binfo_dic = create_dict_field_value(desc)
flag_val = binfo_dic.get(u'国旗画像')
fname = re.sub('^.*?:', '', re.sub(r'\|.*$', '', flag_val))
# MediaWiki APIで国旗画像
params = MEDIAWIKI_PARAMS
params['titles'] = 'File:'+fname.replace(' ', ' ')
sess = requests.Session()
res = sess.get(url=MEDIAWIKI_URL, params=params)
res_json = res.json()
# 画像URLを返す
return res_json['query']['pages']['-1']['imageinfo'][0]['url']
desc_uk = get_country_text(u'イギリス')
flag_url = get_flag_picture_url(desc_uk)
print(flag_url)
| 0.165661 | 0.770119 |
```
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
import scipy.stats as stats
import sys
sys.path.append("../")
import vuong_tests
#generate the sample
def gen_data(beta=3):
nobs = 1000
x = np.random.uniform(low=-3., high=3., size=(nobs,3))
e = np.random.normal(loc=0.0, scale=1.0, size=nobs)
y = 1 + beta * x[:,1] + e
return y,x,nobs
yn,xn,nobs = gen_data()
def compute_loglike(resid):
sigma = np.sqrt(np.sum(resid**2)/resid.shape[0])
ll = np.log(stats.norm.pdf(resid,loc=0,scale=sigma))
return ll
def compute_llr(yn,xn):
x1n,x2n = xn[:,0:2],xn[:,1:3]
model1 = sm.OLS(yn,sm.add_constant(x1n))
model1_fit = model1.fit(disp=False)
ll1 = compute_loglike(model1_fit.resid)
model2 = sm.OLS(yn,sm.add_constant(x2n))
model2_fit = model2.fit(disp=False)
ll2 = compute_loglike(model2_fit.resid)
llr = ll1.sum() - ll2.sum()
omega2 = (ll1- ll2).var()
return llr,np.sqrt(omega2)
yn,xn,nobs = gen_data()
print(compute_llr(yn,xn))
yn,xn,nobs = gen_data()
print(vuong_tests.bootstrap_test(yn,xn,nobs,compute_llr,hist=True))
print(vuong_tests.regular_test(yn,xn,nobs,compute_llr,hist=True))
plt.title("Comparison with bootstrap")
plt.xlabel("Test Statistic")
plt.ylabel("Density")
plt.legend()
plt.savefig('../figs/bootstrap_compare10')
plt.show()
reg,boot1,boot2, llr, std, omega = vuong_tests.monte_carlo(100,gen_data,compute_llr,trials=200,use_boot2=True)
print("reg: %s, boot1: %s, boot2: %s, llr:%s, std: %s, omega:%s"%(reg,boot1,boot2,llr,std, omega))
def compute_score(yn,xn,params):
xn = sm.add_constant(xn)
resid = yn - np.matmul(xn,params)
k = len(params)
scale = (resid**2).mean()
tile_resid = np.tile( resid, k)
tile_resid = np.reshape(tile_resid, (k,xn.shape[0]) ).transpose()
grad = tile_resid*xn/scale
return grad
def compute_hess(yn,xn,params):
pass
def setup_shi(yn,xn):
x1n,x2n = xn[:,0:2],xn[:,1:3]
# model 1 grad, etc.
model1 = sm.OLS(yn,sm.add_constant(x1n))
model1_fit = model1.fit(disp=False)
k1 = len(model1_fit.params)
ll1 = compute_loglike(model1_fit.resid)
grad1 = compute_score(yn,x1n,model1_fit.params)
hess1 = model1.hessian(model1_fit.params)
#model 2 grad, etc.
model2 = sm.OLS(yn,sm.add_constant(x2n))
model2_fit = model2.fit(disp=False)
    k2 = len(model2_fit.params)
ll2 = compute_loglike(model2_fit.resid)
grad2 = compute_score(yn,x2n,model2_fit.params)
hess2 = model2.hessian(model2_fit.params)
return ll1,grad1,hess1,ll2,k1, grad2,hess2,k2
yn,xn,nobs = gen_data()
ll1,grad1,hess1,ll2,k1, grad2,hess2,k2 = setup_shi(yn,xn)
shi_result = vuong_tests.monte_carlo_shi(100,setup_shi,gen_data)
print(shi_result)
sys.path.append("../")
import vuong_tests
#generate the sample
def gen_data(beta=3):
nobs = 1000
#x = np.random.normal(low=-3., high=3., size=(nobs,3))
x = np.random.normal(scale=3., size=(nobs,3))
e = np.random.normal(loc=0.0, scale=1.0, size=nobs)
y = 1 + 1*x[:,2] + 1*x[:,0] + beta * x[:,1] + e
return y,x,nobs
reg,boot1,boot2, llr, std, omega = vuong_tests.monte_carlo(100,gen_data,compute_llr,trials=200,use_boot2=True)
print("reg: %s, boot1: %s, boot2: %s, llr:%s, std: %s, omega:%s"%(reg,boot1,boot2,llr,std, omega))
shi_result = vuong_tests.monte_carlo_shi(100,setup_shi,gen_data)
print(shi_result)
#generate the sample
def gen_data(beta=1):
nobs = 1000
#x = np.random.normal(low=-3., high=3., size=(nobs,3))
x = np.random.normal(scale=3., size=(nobs,3))
e = np.random.normal(loc=0.0, scale=1.0, size=nobs)
y = 1 + 2*x[:,2] + 2*x[:,0] + beta * x[:,1] + e
return y,x,nobs
reg,boot1,boot2, llr, std, omega = vuong_tests.monte_carlo(1000,gen_data,compute_llr,trials=200,use_boot2=True)
print("reg: %s, boot1: %s, boot2: %s, llr:%s, std: %s, omega:%s"%(reg,boot1,boot2,llr,std, omega))
shi_result = vuong_tests.monte_carlo_shi(100,setup_shi,gen_data)
print(shi_result)
#generate the sample
def gen_data(beta=1):
nobs = 1000
#x = np.random.normal(low=-3., high=3., size=(nobs,3))
x = np.random.normal(scale=3., size=(nobs,3))
e = np.random.normal(loc=0.0, scale=1.0, size=nobs)
y = 1 + beta * x[:,1] + e
return y,x,nobs
reg,boot1,boot2, llr, std, omega = vuong_tests.monte_carlo(1000,gen_data,compute_llr,trials=200,use_boot2=True)
print("reg: %s, boot1: %s, boot2: %s, llr:%s, std: %s, omega:%s"%(reg,boot1,boot2,llr,std, omega))
shi_result = vuong_tests.monte_carlo_shi(100,setup_shi,gen_data)
print(shi_result)
#generate the sample
def gen_data(beta=3):
nobs = 1000
#x = np.random.normal(low=-3., high=3., size=(nobs,3))
x = np.random.normal(scale=3., size=(nobs,3))
e = np.random.normal(loc=0.0, scale=1.0, size=nobs)
y = 1 + .1*x[:,0] + beta * x[:,1] + e
return y,x,nobs
reg,boot1,boot2, llr, std, omega = vuong_tests.monte_carlo(100,gen_data,compute_llr,trials=200,use_boot2=True)
print("reg: %s, boot1: %s, boot2: %s, llr:%s, std: %s, omega:%s"%(reg,boot1,boot2,llr,std, omega))
shi_result = vuong_tests.monte_carlo_shi(100,setup_shi,gen_data)
print(shi_result)
#generate the sample
def gen_data(beta=3):
nobs = 1000
#x = np.random.normal(low=-3., high=3., size=(nobs,3))
x = np.random.normal(scale=3., size=(nobs,3))
e = np.random.normal(loc=0.0, scale=1.0, size=nobs)
y = 1 + .01*x[:,0] + beta * x[:,1] + e
return y,x,nobs
reg,boot1,boot2, llr, std, omega = vuong_tests.monte_carlo(100,gen_data,compute_llr,trials=200,use_boot2=True)
print("reg: %s, boot1: %s, boot2: %s, llr:%s, std: %s, omega:%s"%(reg,boot1,boot2,llr,std, omega))
shi_result = vuong_tests.monte_carlo_shi(100,setup_shi,gen_data)
print(shi_result)
#generate the sample
def gen_data(beta=2):
nobs = 1000
#x = np.random.normal(low=-3., high=3., size=(nobs,3))
x = np.random.normal(scale=3., size=(nobs,3))
e = np.random.normal(loc=0.0, scale=1.0, size=nobs)
y = 1 + .01*x[:,0] + beta * x[:,1] + e
return y,x,nobs
reg,boot1,boot2, llr, std, omega = vuong_tests.monte_carlo(100,gen_data,compute_llr,trials=200,use_boot2=True)
print("reg: %s, boot1: %s, boot2: %s, llr:%s, std: %s, omega:%s"%(reg,boot1,boot2,llr,std, omega))
shi_result = vuong_tests.monte_carlo_shi(100,setup_shi,gen_data)
print(shi_result)
```
|
github_jupyter
|
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
import scipy.stats as stats
import sys
sys.path.append("../")
import vuong_tests
#generate the sample
def gen_data(beta=3):
nobs = 1000
x = np.random.uniform(low=-3., high=3., size=(nobs,3))
e = np.random.normal(loc=0.0, scale=1.0, size=nobs)
y = 1 + beta * x[:,1] + e
return y,x,nobs
yn,xn,nobs = gen_data()
def compute_loglike(resid):
sigma = np.sqrt(np.sum(resid**2)/resid.shape[0])
ll = np.log(stats.norm.pdf(resid,loc=0,scale=sigma))
return ll
def compute_llr(yn,xn):
x1n,x2n = xn[:,0:2],xn[:,1:3]
model1 = sm.OLS(yn,sm.add_constant(x1n))
model1_fit = model1.fit(disp=False)
ll1 = compute_loglike(model1_fit.resid)
model2 = sm.OLS(yn,sm.add_constant(x2n))
model2_fit = model2.fit(disp=False)
ll2 = compute_loglike(model2_fit.resid)
llr = ll1.sum() - ll2.sum()
omega2 = (ll1- ll2).var()
return llr,np.sqrt(omega2)
yn,xn,nobs = gen_data()
print(compute_llr(yn,xn))
yn,xn,nobs = gen_data()
print(vuong_tests.bootstrap_test(yn,xn,nobs,compute_llr,hist=True))
print(vuong_tests.regular_test(yn,xn,nobs,compute_llr,hist=True))
plt.title("Comparison with bootstrap")
plt.xlabel("Test Statistic")
plt.ylabel("Density")
plt.legend()
plt.savefig('../figs/bootstrap_compare10')
plt.show()
reg,boot1,boot2, llr, std, omega = vuong_tests.monte_carlo(100,gen_data,compute_llr,trials=200,use_boot2=True)
print("reg: %s, boot1: %s, boot2: %s, llr:%s, std: %s, omega:%s"%(reg,boot1,boot2,llr,std, omega))
def compute_score(yn,xn,params):
xn = sm.add_constant(xn)
resid = yn - np.matmul(xn,params)
k = len(params)
scale = (resid**2).mean()
tile_resid = np.tile( resid, k)
tile_resid = np.reshape(tile_resid, (k,xn.shape[0]) ).transpose()
grad = tile_resid*xn/scale
return grad
def compute_hess(yn,xn,params):
pass
def setup_shi(yn,xn):
x1n,x2n = xn[:,0:2],xn[:,1:3]
# model 1 grad, etc.
model1 = sm.OLS(yn,sm.add_constant(x1n))
model1_fit = model1.fit(disp=False)
k1 = len(model1_fit.params)
ll1 = compute_loglike(model1_fit.resid)
grad1 = compute_score(yn,x1n,model1_fit.params)
hess1 = model1.hessian(model1_fit.params)
#model 2 grad, etc.
model2 = sm.OLS(yn,sm.add_constant(x2n))
model2_fit = model2.fit(disp=False)
    k2 = len(model2_fit.params)
ll2 = compute_loglike(model2_fit.resid)
grad2 = compute_score(yn,x2n,model2_fit.params)
hess2 = model2.hessian(model2_fit.params)
return ll1,grad1,hess1,ll2,k1, grad2,hess2,k2
yn,xn,nobs = gen_data()
ll1,grad1,hess1,ll2,k1, grad2,hess2,k2 = setup_shi(yn,xn)
shi_result = vuong_tests.monte_carlo_shi(100,setup_shi,gen_data)
print(shi_result)
sys.path.append("../")
import vuong_tests
#generate the sample
def gen_data(beta=3):
nobs = 1000
#x = np.random.normal(low=-3., high=3., size=(nobs,3))
x = np.random.normal(scale=3., size=(nobs,3))
e = np.random.normal(loc=0.0, scale=1.0, size=nobs)
y = 1 + 1*x[:,2] + 1*x[:,0] + beta * x[:,1] + e
return y,x,nobs
reg,boot1,boot2, llr, std, omega = vuong_tests.monte_carlo(100,gen_data,compute_llr,trials=200,use_boot2=True)
print("reg: %s, boot1: %s, boot2: %s, llr:%s, std: %s, omega:%s"%(reg,boot1,boot2,llr,std, omega))
shi_result = vuong_tests.monte_carlo_shi(100,setup_shi,gen_data)
print(shi_result)
#generate the sample
def gen_data(beta=1):
nobs = 1000
#x = np.random.normal(low=-3., high=3., size=(nobs,3))
x = np.random.normal(scale=3., size=(nobs,3))
e = np.random.normal(loc=0.0, scale=1.0, size=nobs)
y = 1 + 2*x[:,2] + 2*x[:,0] + beta * x[:,1] + e
return y,x,nobs
reg,boot1,boot2, llr, std, omega = vuong_tests.monte_carlo(1000,gen_data,compute_llr,trials=200,use_boot2=True)
print("reg: %s, boot1: %s, boot2: %s, llr:%s, std: %s, omega:%s"%(reg,boot1,boot2,llr,std, omega))
shi_result = vuong_tests.monte_carlo_shi(100,setup_shi,gen_data)
print(shi_result)
#generate the sample
def gen_data(beta=1):
nobs = 1000
#x = np.random.normal(low=-3., high=3., size=(nobs,3))
x = np.random.normal(scale=3., size=(nobs,3))
e = np.random.normal(loc=0.0, scale=1.0, size=nobs)
y = 1 + beta * x[:,1] + e
return y,x,nobs
reg,boot1,boot2, llr, std, omega = vuong_tests.monte_carlo(1000,gen_data,compute_llr,trials=200,use_boot2=True)
print("reg: %s, boot1: %s, boot2: %s, llr:%s, std: %s, omega:%s"%(reg,boot1,boot2,llr,std, omega))
shi_result = vuong_tests.monte_carlo_shi(100,setup_shi,gen_data)
print(shi_result)
#generate the sample
def gen_data(beta=3):
nobs = 1000
#x = np.random.normal(low=-3., high=3., size=(nobs,3))
x = np.random.normal(scale=3., size=(nobs,3))
e = np.random.normal(loc=0.0, scale=1.0, size=nobs)
y = 1 + .1*x[:,0] + beta * x[:,1] + e
return y,x,nobs
reg,boot1,boot2, llr, std, omega = vuong_tests.monte_carlo(100,gen_data,compute_llr,trials=200,use_boot2=True)
print("reg: %s, boot1: %s, boot2: %s, llr:%s, std: %s, omega:%s"%(reg,boot1,boot2,llr,std, omega))
shi_result = vuong_tests.monte_carlo_shi(100,setup_shi,gen_data)
print(shi_result)
#generate the sample
def gen_data(beta=3):
nobs = 1000
#x = np.random.normal(low=-3., high=3., size=(nobs,3))
x = np.random.normal(scale=3., size=(nobs,3))
e = np.random.normal(loc=0.0, scale=1.0, size=nobs)
y = 1 + .01*x[:,0] + beta * x[:,1] + e
return y,x,nobs
reg,boot1,boot2, llr, std, omega = vuong_tests.monte_carlo(100,gen_data,compute_llr,trials=200,use_boot2=True)
print("reg: %s, boot1: %s, boot2: %s, llr:%s, std: %s, omega:%s"%(reg,boot1,boot2,llr,std, omega))
shi_result = vuong_tests.monte_carlo_shi(100,setup_shi,gen_data)
print(shi_result)
#generate the sample
def gen_data(beta=2):
nobs = 1000
#x = np.random.normal(low=-3., high=3., size=(nobs,3))
x = np.random.normal(scale=3., size=(nobs,3))
e = np.random.normal(loc=0.0, scale=1.0, size=nobs)
y = 1 + .01*x[:,0] + beta * x[:,1] + e
return y,x,nobs
reg,boot1,boot2, llr, std, omega = vuong_tests.monte_carlo(100,gen_data,compute_llr,trials=200,use_boot2=True)
print("reg: %s, boot1: %s, boot2: %s, llr:%s, std: %s, omega:%s"%(reg,boot1,boot2,llr,std, omega))
shi_result = vuong_tests.monte_carlo_shi(100,setup_shi,gen_data)
print(shi_result)
| 0.398992 | 0.552057 |
```
# Importing the libraries
import numpy as np
import pandas as pd
# Importing the dataset
dataset = pd.read_csv('Churn_Modelling.csv')
X = dataset.iloc[:, 3:13]
y = dataset.iloc[:, -1]
X = X.values
y = y.values
# Encoding categorical data
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
labelencoder_X_1 = LabelEncoder()
X[:, 1] = labelencoder_X_1.fit_transform(X[:, 1])
labelencoder_X_2 = LabelEncoder()
X[:, 2] = labelencoder_X_2.fit_transform(X[:, 2])
from sklearn.compose import ColumnTransformer
ct = ColumnTransformer([("Country", OneHotEncoder(),[1])], remainder="passthrough") # [1] is the list of column indices to one-hot encode in this step
X = ct.fit_transform(X)
X = X[:, 1:]
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# Part 2 - Now let's make the ANN!
# Importing the Keras libraries and packages
import tensorflow as tf
print ("TensorFlow version: " + tf.__version__)
# Tuning the ANN
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Dropout
from sklearn.model_selection import GridSearchCV
def build_classifier(optimizer):
classifier = Sequential()
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu', input_dim = 11))
classifier.add(Dropout(rate = 0.1))
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu'))
classifier.add(Dropout(rate = 0.1))
classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
classifier.compile(optimizer = optimizer, loss = 'binary_crossentropy', metrics = ['accuracy'])
return classifier
classifier = KerasClassifier(build_fn = build_classifier)
parameters = {'batch_size': [25, 32],
'epochs': [100, 500],
'optimizer': ['adam', 'rmsprop']}
grid_search = GridSearchCV(estimator = classifier,
param_grid = parameters,
scoring = 'accuracy',
cv = 10,
n_jobs=-1)
grid_search = grid_search.fit(X_train, y_train)
best_parameters = grid_search.best_params_
best_accuracy = grid_search.best_score_
# Part 3 - Making the predictions and evaluating the model
# Predicting the Test set results
y_pred = grid_search.predict(X_test)
y_pred = (y_pred > 0.5)
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
# Predicting a single new observation
"""Predict if the customer with the following informations will leave the bank:
Geography: France
Credit Score: 600
Gender: Male
Age: 40
Tenure: 3
Balance: 60000
Number of Products: 2
Has Credit Card: Yes
Is Active Member: Yes
Estimated Salary: 50000"""
new_prediction = grid_search.predict(sc.transform(np.array([[0.0, 0, 600, 1, 40, 3, 60000, 2, 1, 1, 50000]])))
new_prediction = (new_prediction > 0.5)
```
|
github_jupyter
|
# Importing the libraries
import numpy as np
import pandas as pd
# Importing the dataset
dataset = pd.read_csv('Churn_Modelling.csv')
X = dataset.iloc[:, 3:13]
y = dataset.iloc[:, -1]
X = X.values
y = y.values
# Encoding categorical data
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
labelencoder_X_1 = LabelEncoder()
X[:, 1] = labelencoder_X_1.fit_transform(X[:, 1])
labelencoder_X_2 = LabelEncoder()
X[:, 2] = labelencoder_X_2.fit_transform(X[:, 2])
from sklearn.compose import ColumnTransformer
ct = ColumnTransformer([("Country", OneHotEncoder(),[1])], remainder="passthrough") # [1] is the list of column indices to one-hot encode in this step
X = ct.fit_transform(X)
X = X[:, 1:]
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# Part 2 - Now let's make the ANN!
# Importing the Keras libraries and packages
import tensorflow as tf
print ("TensorFlow version: " + tf.__version__)
# Tuning the ANN
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Dropout
from sklearn.model_selection import GridSearchCV
def build_classifier(optimizer):
classifier = Sequential()
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu', input_dim = 11))
classifier.add(Dropout(rate = 0.1))
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu'))
classifier.add(Dropout(rate = 0.1))
classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
classifier.compile(optimizer = optimizer, loss = 'binary_crossentropy', metrics = ['accuracy'])
return classifier
classifier = KerasClassifier(build_fn = build_classifier)
parameters = {'batch_size': [25, 32],
'epochs': [100, 500],
'optimizer': ['adam', 'rmsprop']}
grid_search = GridSearchCV(estimator = classifier,
param_grid = parameters,
scoring = 'accuracy',
cv = 10,
n_jobs=-1)
grid_search = grid_search.fit(X_train, y_train)
best_parameters = grid_search.best_params_
best_accuracy = grid_search.best_score_
# Part 3 - Making the predictions and evaluating the model
# Predicting the Test set results
y_pred = grid_search.predict(X_test)
y_pred = (y_pred > 0.5)
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
# Predicting a single new observation
"""Predict if the customer with the following informations will leave the bank:
Geography: France
Credit Score: 600
Gender: Male
Age: 40
Tenure: 3
Balance: 60000
Number of Products: 2
Has Credit Card: Yes
Is Active Member: Yes
Estimated Salary: 50000"""
new_prediction = grid_search.predict(sc.transform(np.array([[0.0, 0, 600, 1, 40, 3, 60000, 2, 1, 1, 50000]])))
new_prediction = (new_prediction > 0.5)
| 0.84916 | 0.770767 |
<img src="img/logos.png" width="1500" align="center">
<h2 align="center"><code style="background-color:white">Paweł Święcki</code></h2>
<h1 align="center"><code style="background-color:white">Podstawy Pythona: Funkcje</code></h1>
<h3 align="center"><code style="background-color:white">PyLight #9</code></h3>
<a href="https://github.com/pawelswiecki/python_podstawy_funkcje"><h3 align="center">https://github.com/pawelswiecki/python_podstawy_funkcje</h3></a>
<h1><code style="background-color:white">O czym powiem</code></h1>
<h2><code style="background-color:white">1. Fundamentalia ;)</code></h2>
<h2><code style="background-color:white">2. Struktura funkcji</code></h2>
<h2><code style="background-color:white">3. Jak dobrze pisać funkcje</code></h2>
<h1 align="center"><code style="background-color:white">1. Fundamentalia</code></h1>
<div align="center">O co tu w ogóle chodzi?</div>
<h2 align="center"><code style="background-color:white">Czym jest funkcja?</code></h2>
### Funkcja rozumiana potocznie
> Funkcja to zadanie, które spełnia lub ma spełnić jakaś osoba lub rzecz.
(zob. https://sjp.pwn.pl/szukaj/funkcja.html)
### Funkcja matematyczna
Definicja funkcji jednoargumentowej:
> Dla danych dwóch zbiorów $X$ oraz $Y$ funkcja $f$ to przyporządkowanie każdemu elementowi zbioru $X$ dokładnie jednego elementu zbioru $Y$, co zapisujemy $f: X \rightarrow Y$.
(por. https://pl.wikipedia.org/wiki/Funkcja)
Można powiedzieć, że funkcja matematyczna to **mapowanie**:
<img src="img/function-wikipedia.png" width="350">
<div align="center">Funkcja mapująca figurę geometryczną na jej kolor [obrazek z <a href="https://en.wikipedia.org/wiki/Function_(mathematics)">wikipedii</a>]</div>
W Pythonie funkcja (jedno- lub wieloargumentowa) może być czystym mapowaniem, ale takie funkcje są tylko jednym z rodzajów funkcji.
### Funkcja w Pythonie
W Pythonie (i innych językach) funkcja rozumiana jest szerzej niż w matematyce – może być mapowaniem, ale nie musi.
Funkcja to wydzielony fragment programu, który:
- **dostaje jakieś dane** (input), po czym
- **coś robi**, a następnie
- **zwraca jakieś dane** (output)
Inne określenia o podobnym znaczeniu: _procedure_, _routine_, _subroutine_, _subprogram_.
Jest jeszcze _method_ (metoda). Jest to funkcja na stałe przywiązana do obiektu.
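A tiny illustrative example of the difference (values chosen arbitrarily): `len()` below is a regular function, while `.upper()` is a method, i.e. a function bound to the string object.
```
text = 'pylight'
print(len(text))     # len() is a regular function that receives the object as an argument
print(text.upper())  # upper() is a method: a function attached to the str object
```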
<h2 align="center"><code style="background-color:white">Po co nam funkcje?</code></h2>
#### 1. Aby podzielić program na mniejsze fragmenty.
Tak się go łatwiej rozwija, kod jest bardziej zrozumiały i łatwiejszy w debuggowaniu.
#### 2. Aby można było powtórnie używać części kodu.
Na przykład jeśli zdefiniuję funkcję liczenia średniej, to potem mogę używać jej wszędzie, gdzie tylko potrzebuję średniej.
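A minimal sketch of that reuse (the function name and numbers here are only an illustration): define the averaging logic once, then call it wherever a mean is needed.
```
def mean(values):
    # one definition, reusable anywhere a mean is needed
    return sum(values) / len(values)

print(mean([2, 4, 6]))    # 4.0
print(mean([1.5, 2.5]))   # 2.0, the same function reused
```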
<h2 align="center"><code style="background-color:white">Jak zdefiniować funkcję?</code></h2>
Funkcje definiowane są przy pomocy słówka **`def`**:
```
def nazwa_funkcji(parametry_funkcji):
# tutaj jest to, co funkcja robi
return dane_które_funkcja_zwraca
```
Na przykład:
```
def add_one(number):
result = number + 1
return result
```
Funkcja ta jest czystym mapowaniem i implementuje funkcję matematyczną $f(x) = x + 1$.
<h2 align="center"><code style="background-color:white">Jak używać funkcji?</code></h2>
Funkcji używamy w sposób następujący:
`nazwa_funkcji(argumenty_przekazane_do_funkcji)`
```
add_one(11)
add_one(41)
```
Aby zachować w zmiennej wartość zwróconą przez funkcję robimy tak:
`zmienna = nazwa_funkcji(argumenty)`
Na przykład:
```
result = add_one(255)
result
```
<h1 align="center"><code style="background-color:white">2. Struktura funkcji</code></h1>
Powtórzmy – funkcja to wydzielony fragment programu, który:
A. **dostaje jakieś dane**, po czym
B. **coś robi**, a następnie
C. **zwraca jakieś dane**.
Jak zaraz zobaczymy, każdy z tych punktów jest opcjonalny.
```
'''
Schemat funkcji
=============== A. dane przekazane (argumenty)
[INPUT]
|
|
v
def moja_funkcja(parametry):
+--------------------+
| |
| B. przetwarzanie |
| danych |
| . |
| . |
| . |
| . |
| |
|return ... |
+--------------------+
|
|
v
C. dane zwrócone
[OUTPUT]
'''[0]
```
Zacznę od (A) danych przekazanych, potem omówię (C) dane zwrócone, a na końcu (B) to, co w funkcja robi w środku.
<h2 align="center"><code style="background-color:white">A. To, co funkcja dostaje</code></h2>
<div align="center">O parametrach i argumentach.</div>
### "Parametr" a "argument"
Czasem używa się tych słów zamiennie, czasem się je myli, a nie chcemy ich mylić, bo głupio.
**Parametr** to zmienna w definicji funkcji.
```python
def add_one(number):
... ^
|
parametr
```
**Argument** to dana przekazana do funkcji.
```python
result = add_one(11)
^
|
argument
```
Wywołujemy tu funkcję `add_one` z _argumentem_ `11`. Argument ten zostaje przekazany do _parametru_ `number` tej funkcji. Od teraz w ramach tej funkcji `number` ma wartość `11`.
### Czy funkcja może nie mieć żadnych parametrów?
Tak:
```
def get_pythons_creator_name():
return 'Guido van Rossum'
name = get_pythons_creator_name()
name
```
### Czy funkcja może mieć wiele parametrów?
Tak:
```
def add(x, y):
return x + y
add(3, 8)
def save_user_data(ip, isp, country, language, device, browser, os):
print('User data saved.')
save_user_data('151.101.13.140', 'AT&T', 'us', 'en-US', 'pc', 'Mozilla/5.0', 'Ubuntu')
```
Sporo tych argumentów, aż można się pomylić...
### Wywoływanie funkcji wraz z nazwami parametrów
```
add_one(number=33)
add(x=1, y=2)
```
Czasem jest to bardzo przydatne, szczególnie gdy funkcja ma wiele parametrów:
```
save_user_data(
ip='151.101.13.140',
isp='AT&T',
country='us',
language='en-US',
device='pc',
browser='Mozilla/5.0',
os='Ubuntu',
)
# Dużo lepiej, niż:
save_user_data('151.101.13.140', 'AT&T', 'us', 'en-US', 'pc', 'Mozilla/5.0', 'Ubuntu')
```
### Parametry z wartościami domyślnymi (_default parameter values_)
Możemy też zdefiniować funkcję tak, by przyjmowała wprawdzie argument, ale jeśli nie zostanie on przekazany, to odpowiedni parametr przyjmie wartość domyślną (_default_):
```
def launch_missiles(missiles, drill=True):
if not drill:
print('📣 This is not a drill! 📣\n')
for missile_id in missiles:
if drill:
print(f'Pretending to launch missile #{missile_id}... 😌')
else:
print(f'Launching missile #{missile_id}! 😱')
```
W samej definicji parametrowi `drill` domyślnie przypisaliśmy `True`.
Wywołajmy funkcję `launch_missiles` tylko z jednym argumentem:
```
launch_missiles([1, 3, 7])
```
Czyli parametr `drill` otrzymał wartość `True`. Uff...
Dobra, wystrzelmy te pociski!
```
launch_missiles([1, 2, 3, 4], drill=False)
# można też `launch_missiles([1, 2, 3, 4], False)`
```
**UWAGA!** Wartościami domyślnymi powinny być wyłącznie obiekty niezmienne (immutable). Dlaczego? Zob. <a href="https://www.youtube.com/watch?v=Lb-t3TOBIQ0">prezentację</a> PyLight Marcina Jaroszewskiego oraz <a href="https://github.com/PyLightMeetup/Domyslne-niezmienne-czyli-o-argumentach-funkcji/blob/master/domyslne_niezmienne.ipynb">materiały</a> do niej.
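A short sketch of the pitfall referenced above (hypothetical function names): a mutable default is created once, when the function is defined, so it is shared between calls.
```
def add_item_bad(item, items=[]):    # anti-pattern: mutable default value
    items.append(item)
    return items

print(add_item_bad('a'))   # ['a']
print(add_item_bad('b'))   # ['a', 'b'] (the same list was reused!)

def add_item_ok(item, items=None):   # safe pattern: immutable default + check
    if items is None:
        items = []
    items.append(item)
    return items
```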
### Funkcje z dowolną liczbą argumentów
#### czyli `*args` i `**kwargs`
- "args" to skrót od "arguments"
- "kwargs" to skrót od "keyword arguments" (czyli argumenty nazwane)
`*args` "pakuje" w siebie wszystkie pozostałe nienazwane argumenty.
```
def save_personal_data(*args):
print(f'args = {args}')
print(type(args))
save_personal_data(445, 'Bob Smith', 'ALIVE')
```
Czyli `args` to krotka (tuple).
`**kwargs` "pakuje" w siebie wszystkie pozostałe nazwane argumenty.
```
def save_personal_data2(**kwargs):
print(f'kwargs = {kwargs}')
print(type(kwargs))
save_personal_data2(id_=44, name='Boba Fett', status='MIA')
```
Czyli `kwargs` to dict.
Możemy też łączyć `*args` z `**kwargs`:
```
def save_personal_data3(*args, **kwargs):
print(f'args = {args}')
print(f'kargs = {kwargs}')
save_personal_data3(10, 'R2-D2', droid_type='astromech')
```
Możemy również łączyć zwykłe przekazywanie parametrów z `*args` i `**kwargs`:
```
def save_personal_data4(id_, name, **extra_data):
print(f'id_ = {id_}')
print(f'name = {name}')
print(f'extra_data = {extra_data}')
save_personal_data4(
id_=10,
name='Luke Skywalker',
status='FORCE_GHOST',
last_occupation='Ahch-To',
)
```
Można używać innych nazw na oznaczenie obu, ale przyjęły się "args" i "kwargs". Istotą są tu gwiazdki: `*` oraz `**`.
#### Po co w ogóle `*args` i `**kwargs`
Przy ich pomocy nasze funkcje mogą być elastyczniejsze:
- Przydaje się to na przykład w kontekście OOP i dziedziczenia.
- Twórcy bibliotek często wykorzystują tę technikę, by uczynić funkcje w swoich bibliotekach bardziej uniwersalnymi. Widoczne jest to na przykład w bibliotece **requests**, w której funkcja `request` przyjmuje wiele opcjonalnych parametrów, które są "łapane" przy pomocy `**kwargs`, zob. http://docs.python-requests.org/en/master/api/#requests.request.
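A minimal sketch of that pass-through pattern (the wrapper name and URL below are made up): the wrapper does not list every possible option, it simply forwards whatever keyword arguments it receives to `requests.get`.
```
import requests

def get_json(url, **kwargs):
    # any extra options (timeout=..., headers=..., params=...) are forwarded unchanged
    response = requests.get(url, **kwargs)
    return response.json()

# example call (hypothetical URL):
# data = get_json('https://api.example.com/items', timeout=5)
```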
<h2 align="center"><code style="background-color:white">C. To, co funkcja zwraca</code></h2>
<div align="center">Czyli dane, które dostajemy od funkcji.</div>
```
'''
Schemat funkcji
=============== A. dane przekazane (argumenty)
[INPUT]
|
|
v
def moja_funkcja(parametry):
+--------------------+
| |
| B. przetwarzanie |
| danych |
| . |
| . |
| . |
| . |
| |
|return ... |
+--------------------+
|
|
v
C. dane zwrócone
[OUTPUT]
'''[0]
```
### Czy funkcja może zwracać wiele wartości?
I tak, i nie. Funkcja może zwracać tylko jeden obiekt, ale może to być obiekt złożony z wielu obiektów:
```
def get_two_numbers():
return 1, 2
# można też z nawiasem: `return (1, 2)`
two_numbers = get_two_numbers()
two_numbers
type(two_numbers)
```
Aby oddzielnie zapisać obie wartości ze zwróconej krotki, przy przypisywaniu używamy... drugiej krotki, po lewej stronie `=`:
```
number_one, number_two = get_two_numbers()
# również tu można z nawiasem: `(number_one, number_two)`
number_one
number_two
```
### Czy funkcja może nic nie zwracać?
I tak, i nie...
```
def useless_function(x):
y = x + 1
useless_function(100)
```
Tak naprawdę nawet ta funkcja coś zwraca, tylko w sposób niejawny. Tym czymś jest obiekt **`None`**.
Obiekt `None` reprezentuje **nic** lub **brak**. Więc funkcja zwracająca `None` w pewnym sensie nic nie zwraca.
```
what = useless_function(100)
type(what)
```
W trzech przypadkach funkcja może zwracać `None`:
- w funkcji w ogóle nie ma słówka `return`
- w funkcji jest `return` bez niczego po nim w tej samej linijce
- w funkcji jest `return None` (albo `return zmienna`, gdzie to zmiennej przypisany jest `None`)
```
def give_me_none():
return
x = give_me_none()
type(x)
```
<h2 align="center"><code style="background-color:white">B. To, co funkcja robi</code></h2>
<div align="center">Bebechy funkcji.</div>
```
'''
Schemat funkcji
=============== A. dane przekazane (argumenty)
[INPUT]
|
|
v
def moja_funkcja(parametry):
+--------------------+
| |
| B. przetwarzanie |
| danych |
| . |
| . |
| . |
| . |
| |
|return ... |
+--------------------+
|
|
v
C. dane zwrócone
[OUTPUT]
'''[0]
```
### Co funkcja _może_ robić?
**1. Przetwarzać dane**
Nawet `result = number + 1` to przetwarzane danych.
**2. Wywoływać efekty uboczne**
**Efekt uboczny** funkcji to jej dodatkowa (poza zwróceniem wartości) interakcja z czymś poza nią, np. wysłanie maila, pobranie danych z API. Więcej o tym kiedy indziej :)
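A tiny illustration (hypothetical names): the function below both returns a value and has a side effect, because it modifies a list that lives outside of it.
```
audit_log = []

def withdraw(balance, amount):
    audit_log.append(f'withdraw {amount}')  # side effect: changes state outside the function
    return balance - amount                 # regular return value

new_balance = withdraw(100, 30)
print(new_balance)   # 70
print(audit_log)     # ['withdraw 30']
```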
### Funkcje dużo robiące
Funkcja może robić praktycznie nieograniczoną liczbę rzeczy. Jednak najlepsze są funkcje wyspecjalizowane, które dobrze robią jedną rzecz – jak mówi **Single Responsibility Principle**.
Jeśli funkcja musi zrobić dużo, to najlepiej podzielić jej pracę na mniejsze fragmenty i zlecić ich wykonanie... mniejszym funkcjom pomocniczym.
Wyobraźmy sobie, że mamy napisać funkcję `update_password` zmieniającą hasło użytkownika. Funkcja ta powinna:
- sprawdzać, czy stare hasło i powtórzone stare hasło są równe
- sprawdzać, czy stare hasło jest poprawne
- sprawdzać, czy nowe hasło ma odpowiednią liczbę znaków
- sprawdzać, czy nowe hasło ma odpowiednie znaki
- wygenerować hash nowego hasła
- dokonać próby zmiany hasha hasła w bazie danych
- zgłosić błąd przy zmianie hasha
- dokonać odpowiedniego wpisu do logów systemowych
Jeśli to wszystko wrzucimy "luzem" do jednej funkcji `update_password`, to funkcja ta będzie:
- nieczytelna
- trudna w utrzymaniu
- trudna w testowaniu
- podatna na bugi
- podatna na luki bezpieczeństwa
Najlepiej wydzielić poszczególne zadania tej funkcji i przenieść je do wielu funkcji pomocniczych, które będą wywoływane z wewnątrz funkcji `update_password`.
Funkcje te będą mogły być oddzielnie gruntownie otestowane, co zwiększy bezpieczeństwo i niezawodność systemu.
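One possible decomposition, sketched with made-up helper functions (real systems should use salted, slow password hashes and proper logging; this only illustrates the structure): `update_password` merely orchestrates, and every rule lives in its own small, separately testable helper.
```
import hashlib

def validate_new_password(password):
    # each rule is a small check that can be tested on its own
    if len(password) < 8:
        raise ValueError('new password is too short')
    if not any(ch.isdigit() for ch in password):
        raise ValueError('new password needs at least one digit')

def hash_password(password):
    # illustration only; real code should use a salted, slow hash (bcrypt, argon2, ...)
    return hashlib.sha256(password.encode()).hexdigest()

def update_password(stored_hash, old, old_repeated, new):
    # orchestration only: every step is delegated to a helper
    if old != old_repeated:
        raise ValueError('old passwords do not match')
    if hash_password(old) != stored_hash:
        raise ValueError('old password is incorrect')
    validate_new_password(new)
    return hash_password(new)  # the caller stores the new hash and writes the log entry
```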
:D
### Funkcje nic nie robiące
Na drugim biegunie mamy funkcje, które nic nie robią. Na przykład tzw. funkcja tożsamościowa (identity function) po prostu zwraca przekazaną wartość: $f(x) = x$.
```
def identity(value):
return value
identity(15)
```
Taka funkcja w większości przypadków jest dość bezużyteczna, ale są case'y, w których ma to sens.
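One such case, as a small sketch (hypothetical helper): an identity function works nicely as a default "transformation", so callers that need no transformation do not have to pass anything.
```
def apply_to_all(values, transform=identity):
    # `identity` comes from the cell above; it is the "do nothing" default
    return [transform(v) for v in values]

print(apply_to_all([1, 2, 3]))                 # [1, 2, 3]
print(apply_to_all([1, 2, 3], transform=str))  # ['1', '2', '3']
```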
Dla zabawy możemy też zdefiniować funkcję, która nic nie przyjmuje, nic nie robi i nic nie zwraca (albo lepiej: zwraca nic):
```
def it_does_absolutely_nothing():
return
it_does_absolutely_nothing()
```
`¯\_(ツ)_/¯`
<h1 align="center"><code style="background-color:white">3. Jak dobrze pisać funkcje?</code></h1>
<div align="center">Kilka prostych wskazówek.</div>
## Nazywanie funkcji
**1. Funkcje należy nazywać zgodnie z konwencją `snake_case`**
Nie `PascalCase`, tak w Pythonie nazywamy klasy.
Nie `camelCase`, tak w Pythonie nic nie nazywamy.
Nie `UPPER_CASE_WITH_UNDERSCORES`, tak w Pythonie nazywamy stałe.
**2. Nazwy funkcji powinny być informatywne**
Nazwa funkcji powinna prawdziwie oddawać to, co funkcja robi. Wprowadzanie w błąd użytkownika funkcji nie jest fajne. Na przykład funkcja `update_user_data` nie może w żadnym wypadku usuwać użytkownika z systemu.
Nadmierne używanie skrótów w nazwach funkcji też nie jest zabawne ;)
Na przykład co robi funkcja `med_sum`? Czy to "medium sum", "median sum", "medical sum", a może "medieval summary"?
Jeśli funkcja sumuje wartości medialne, to lepiej nazwać ją `sum_of_medians` (albo `medians_sum`).
**3. Nazwy funkcji nie powinny być za długie**
Nazwa `get_user_country_by_id_or_by_name_whatever_is_passed` jest lekko przesadna. Lepiej skrócić do `get_user_country` a dodatkowe wyjaśnienie działania funkcji dać w komentarzu lub w docstring.
Wyjątkiem są nazwy funkcji testujących, gdzie długie nazwy są nawet wskazane, na przykład: `test_user_update_should_fail_on_wrong_user_id`.
## Cel i długość funkcji
**4. Single Responsibility Principle**
Funkcja powinna robić jedną dobrze określoną rzecz i robić ją dobrze.
**5. Funkcje nie powinny być za długie**
Stosuję prostą zasadę: funkcja ma mieścić się w oknie edytora otwartego na pełnym ekranie.
Dobrze jest móc widzieć całą funkcję na raz.
## Dzięki :)
## Pytania?
|
github_jupyter
|
def nazwa_funkcji(parametry_funkcji):
# tutaj jest to, co funkcja robi
return dane_które_funkcja_zwraca
def add_one(number):
result = number + 1
return result
add_one(11)
add_one(41)
result = add_one(255)
result
'''
Schemat funkcji
=============== A. dane przekazane (argumenty)
[INPUT]
|
|
v
def moja_funkcja(parametry):
+--------------------+
| |
| B. przetwarzanie |
| danych |
| . |
| . |
| . |
| . |
| |
|return ... |
+--------------------+
|
|
v
C. dane zwrócone
[OUTPUT]
'''[0]
def add_one(number):
... ^
|
parametr
result = add_one(11)
^
|
argument
def get_pythons_creator_name():
return 'Guido van Rossum'
name = get_pythons_creator_name()
name
def add(x, y):
return x + y
add(3, 8)
def save_user_data(ip, isp, country, language, device, browser, os):
print('User data saved.')
save_user_data('151.101.13.140', 'AT&T', 'us', 'en-US', 'pc', 'Mozilla/5.0', 'Ubuntu')
add_one(number=33)
add(x=1, y=2)
save_user_data(
ip='151.101.13.140',
isp='AT&T',
country='us',
language='en-US',
device='pc',
browser='Mozilla/5.0',
os='Ubuntu',
)
# Dużo lepiej, niż:
save_user_data('151.101.13.140', 'AT&T', 'us', 'en-US', 'pc', 'Mozilla/5.0', 'Ubuntu')
def launch_missiles(missiles, drill=True):
if not drill:
print('📣 This is not a drill! 📣\n')
for missile_id in missiles:
if drill:
print(f'Pretending to launch missile #{missile_id}... 😌')
else:
print(f'Launching missile #{missile_id}! 😱')
launch_missiles([1, 3, 7])
launch_missiles([1, 2, 3, 4], drill=False)
# można też `launch_missiles([1, 2, 3, 4], False)`
def save_personal_data(*args):
print(f'args = {args}')
print(type(args))
save_personal_data(445, 'Bob Smith', 'ALIVE')
def save_personal_data2(**kwargs):
print(f'kwargs = {kwargs}')
print(type(kwargs))
save_personal_data2(id_=44, name='Boba Fett', status='MIA')
def save_personal_data3(*args, **kwargs):
print(f'args = {args}')
print(f'kargs = {kwargs}')
save_personal_data3(10, 'R2-D2', droid_type='astromech')
def save_personal_data4(id_, name, **extra_data):
print(f'id_ = {id_}')
print(f'name = {name}')
print(f'extra_data = {extra_data}')
save_personal_data4(
id_=10,
name='Luke Skywalker',
status='FORCE_GHOST',
last_occupation='Ahch-To',
)
'''
Schemat funkcji
=============== A. dane przekazane (argumenty)
[INPUT]
|
|
v
def moja_funkcja(parametry):
+--------------------+
| |
| B. przetwarzanie |
| danych |
| . |
| . |
| . |
| . |
| |
|return ... |
+--------------------+
|
|
v
C. dane zwrócone
[OUTPUT]
'''[0]
def get_two_numbers():
return 1, 2
# można też z nawiasem: `return (1, 2)`
two_numbers = get_two_numbers()
two_numbers
type(two_numbers)
number_one, number_two = get_two_numbers()
# również tu można z nawiasem: `(number_one, number_two)`
number_one
number_two
def useless_function(x):
y = x + 1
useless_function(100)
what = useless_function(100)
type(what)
def give_me_none():
return
x = give_me_none()
type(x)
'''
Schemat funkcji
=============== A. dane przekazane (argumenty)
[INPUT]
|
|
v
def moja_funkcja(parametry):
+--------------------+
| |
| B. przetwarzanie |
| danych |
| . |
| . |
| . |
| . |
| |
|return ... |
+--------------------+
|
|
v
C. dane zwrócone
[OUTPUT]
'''[0]
def identity(value):
return value
identity(15)
def it_does_absolutely_nothing():
return
it_does_absolutely_nothing()
| 0.378115 | 0.87982 |
```
import os
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
train = pd.read_csv(os.path.join('https://raw.githubusercontent.com/zlatankr/Projects/master/Titanic/data/train.csv'))
test = pd.read_csv(os.path.join('https://raw.githubusercontent.com/zlatankr/Projects/master/Titanic/data/test.csv'))
train.info()
train.head()
train['Survived'].value_counts(normalize=True)
sns.countplot(train['Survived'])
train['Survived'].groupby(train['Pclass']).mean()
sns.countplot(train['Pclass'], hue=train['Survived'])
train['Name'].head()
train['Name_Title'] = train['Name'].apply(lambda x: x.split(',')[1]).apply(lambda x: x.split()[0])
train['Name_Title'].value_counts()
train['Survived'].groupby(train['Name_Title']).mean()
train['Name_Len'] = train['Name'].apply(lambda x: len(x))
train['Survived'].groupby(pd.qcut(train['Name_Len'],5)).mean()
pd.qcut(train['Name_Len'],5).value_counts()
train['Sex'].value_counts(normalize=True)
train['Survived'].groupby(train['Sex']).mean()
train['Survived'].groupby(train['Age'].isnull()).mean()
train['Survived'].groupby(pd.qcut(train['Age'],5)).mean()
pd.qcut(train['Age'],5).value_counts()
train['Survived'].groupby(train['SibSp']).mean()
train['SibSp'].value_counts()
train['Survived'].groupby(train['Parch']).mean()
train['Parch'].value_counts()
train['Ticket'].head(n=10)
train['Ticket_Len'] = train['Ticket'].apply(lambda x: len(x))
train['Ticket_Len'].value_counts()
train['Ticket_Lett'] = train['Ticket'].apply(lambda x: str(x)[0])
train['Ticket_Lett'].value_counts()
train.groupby(['Ticket_Lett'])['Survived'].mean()
pd.qcut(train['Fare'], 3).value_counts()
train['Survived'].groupby(pd.qcut(train['Fare'], 3)).mean()
pd.crosstab(pd.qcut(train['Fare'], 5), columns=train['Pclass'])
train['Cabin_Letter'] = train['Cabin'].apply(lambda x: str(x)[0])
train['Cabin_Letter'].value_counts()
train['Survived'].groupby(train['Cabin_Letter']).mean()
train['Cabin_num'] = train['Cabin'].apply(lambda x: str(x).split(' ')[-1][1:])
train['Cabin_num'].replace('an', np.NaN, inplace = True)
train['Cabin_num'] = train['Cabin_num'].apply(lambda x: int(x) if not pd.isnull(x) and x != '' else np.NaN)
pd.qcut(train['Cabin_num'],3).value_counts()
train['Survived'].groupby(pd.qcut(train['Cabin_num'], 3)).mean()
train['Survived'].corr(train['Cabin_num'])
train['Embarked'].value_counts()
train['Embarked'].value_counts(normalize=True)
train['Survived'].groupby(train['Embarked']).mean()
sns.countplot(train['Embarked'], hue=train['Pclass'])
def names(train, test):
for i in [train, test]:
i['Name_Len'] = i['Name'].apply(lambda x: len(x))
i['Name_Title'] = i['Name'].apply(lambda x: x.split(',')[1]).apply(lambda x: x.split()[0])
del i['Name']
return train, test
def age_impute(train, test):
for i in [train, test]:
i['Age_Null_Flag'] = i['Age'].apply(lambda x: 1 if pd.isnull(x) else 0)
data = train.groupby(['Name_Title', 'Pclass'])['Age']
i['Age'] = data.transform(lambda x: x.fillna(x.mean()))
return train, test
def fam_size(train, test):
for i in [train, test]:
i['Fam_Size'] = np.where((i['SibSp']+i['Parch']) == 0 , 'Solo',
np.where((i['SibSp']+i['Parch']) <= 3,'Nuclear', 'Big'))
del i['SibSp']
del i['Parch']
return train, test
def ticket_grouped(train, test):
for i in [train, test]:
i['Ticket_Lett'] = i['Ticket'].apply(lambda x: str(x)[0])
i['Ticket_Lett'] = i['Ticket_Lett'].apply(lambda x: str(x))
i['Ticket_Lett'] = np.where((i['Ticket_Lett']).isin(['1', '2', '3', 'S', 'P', 'C', 'A']), i['Ticket_Lett'],
np.where((i['Ticket_Lett']).isin(['W', '4', '7', '6', 'L', '5', '8']),
'Low_ticket', 'Other_ticket'))
i['Ticket_Len'] = i['Ticket'].apply(lambda x: len(x))
del i['Ticket']
return train, test
def cabin(train, test):
for i in [train, test]:
i['Cabin_Letter'] = i['Cabin'].apply(lambda x: str(x)[0])
del i['Cabin']
return train, test
def cabin_num(train, test):
for i in [train, test]:
i['Cabin_num1'] = i['Cabin'].apply(lambda x: str(x).split(' ')[-1][1:])
i['Cabin_num1'].replace('an', np.NaN, inplace = True)
i['Cabin_num1'] = i['Cabin_num1'].apply(lambda x: int(x) if not pd.isnull(x) and x != '' else np.NaN)
i['Cabin_num'] = pd.qcut(train['Cabin_num1'],3)
train = pd.concat((train, pd.get_dummies(train['Cabin_num'], prefix = 'Cabin_num')), axis = 1)
test = pd.concat((test, pd.get_dummies(test['Cabin_num'], prefix = 'Cabin_num')), axis = 1)
del train['Cabin_num']
del test['Cabin_num']
del train['Cabin_num1']
del test['Cabin_num1']
return train, test
def embarked_impute(train, test):
for i in [train, test]:
i['Embarked'] = i['Embarked'].fillna('S')
return train, test
test['Fare'].fillna(train['Fare'].mean(), inplace = True)
def dummies(train, test, columns = ['Pclass', 'Sex', 'Embarked', 'Ticket_Lett', 'Cabin_Letter', 'Name_Title', 'Fam_Size']):
for column in columns:
train[column] = train[column].apply(lambda x: str(x))
test[column] = test[column].apply(lambda x: str(x))
good_cols = [column+'_'+i for i in train[column].unique() if i in test[column].unique()]
train = pd.concat((train, pd.get_dummies(train[column], prefix = column)[good_cols]), axis = 1)
test = pd.concat((test, pd.get_dummies(test[column], prefix = column)[good_cols]), axis = 1)
del train[column]
del test[column]
return train, test
def drop(train, test, bye = ['PassengerId']):
for i in [train, test]:
for z in bye:
del i[z]
return train, test
train = pd.read_csv(os.path.join('https://raw.githubusercontent.com/zlatankr/Projects/master/Titanic/data/train.csv'))
test = pd.read_csv(os.path.join('https://raw.githubusercontent.com/zlatankr/Projects/master/Titanic/data/test.csv'))
train, test = names(train, test)
train, test = age_impute(train, test)
train, test = cabin_num(train, test)
train, test = cabin(train, test)
train, test = embarked_impute(train, test)
train, test = fam_size(train, test)
test['Fare'].fillna(train['Fare'].mean(), inplace = True)
train, test = ticket_grouped(train, test)
train, test = dummies(train, test, columns = ['Pclass', 'Sex', 'Embarked', 'Ticket_Lett',
'Cabin_Letter', 'Name_Title', 'Fam_Size'])
train, test = drop(train, test)
print(len(train.columns))
train.head()
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(criterion='gini',
n_estimators=700,
min_samples_split=10,
min_samples_leaf=1,
                            max_features='sqrt',  # 'auto' in older scikit-learn; 'sqrt' is the classifier equivalent and still supported
oob_score=True,
random_state=1,
n_jobs=-1)
rf.fit(train.iloc[:, 1:], train.iloc[:, 0])
print("%.4f" % rf.oob_score_)
pd.concat((pd.DataFrame(train.iloc[:, 1:].columns, columns = ['variable']),
pd.DataFrame(rf.feature_importances_, columns = ['importance'])),
axis = 1).sort_values(by='importance', ascending = False)[:20]
predictions = rf.predict(test)
predictions = pd.DataFrame(predictions, columns=['Survived'])
test = pd.read_csv(os.path.join('https://raw.githubusercontent.com/zlatankr/Projects/master/Titanic/data/test.csv'))
predictions = pd.concat((test.iloc[:, 0], predictions), axis = 1)
predictions.to_csv('y_test15.csv', sep=",", index = False)
```
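As an optional sanity check (not part of the original notebook), the out-of-bag score printed above could be compared against cross-validated accuracy on the same processed training frame. This is only a sketch, assuming `rf` and the processed `train` from the cells above are still in scope:

```
from sklearn.model_selection import cross_val_score

# 5-fold cross-validated accuracy for the same forest configuration;
# column 0 of the processed frame is 'Survived', the rest are features
scores = cross_val_score(rf, train.iloc[:, 1:], train.iloc[:, 0], cv=5)
print("CV accuracy: %.4f +/- %.4f" % (scores.mean(), scores.std()))
```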
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation
from IPython.display import HTML
import numba
import sys
sys.path.append('..')
import solver
import potential
%load_ext autoreload
%autoreload 2
d = 1.0
v = potential.DeltaPotential(d)
psi0 = v.get_eigenfunction()
e0 = v.get_eigenenergy()
tmax = - np.pi / e0
dt = tmax/ 400000
s = solver.EulerSolver(10 / d, 0.1 / d, dt, v)
ts, psis = s.execute(tmax, psi0=psi0, output_dt=tmax/60)
psi0_value = psi0(s.x)
psi = psis[-1]
plt.plot(s.x, np.real(psi0(s.x)))
plt.plot(s.x, np.real(psi))
plt.plot(s.x, np.imag(psi0(s.x)))
plt.plot(s.x, np.imag(psi))
%%capture
def plot(i):
plt.clf()
plt.plot(s.x, np.real(psis[i]))
plt.plot(s.x, np.imag(psis[i]))
plt.ylim(-1.1, 1.1)
fig = plt.figure()
anim = animation.FuncAnimation(fig, plot, frames=len(psis), interval=20).to_html5_video()
HTML(anim)
%%capture
def plot(i):
plt.clf()
plt.plot(s.x, np.abs(psis[i]))
plt.ylim(-1.1, 1.1)
fig = plt.figure()
anim = animation.FuncAnimation(fig, plot, frames=len(psis), interval=20).to_html5_video()
HTML(anim)
import wavefunction
for psi in psis:
print(wavefunction.norm(s.x, psi))
d = 1.0
v = potential.DeltaPotential(d)
psi0 = v.get_eigenfunction()
e0 = v.get_eigenenergy()
tmax = - np.pi / e0
dt = tmax / 400000
s = solver.CrankNicolsonSolver(10 / d, 0.1 / d, tmax / 4000, potential=v)
s2 = solver.EulerSolver(10 / d, 0.1 / d, dt, potential=v)
ts, psis = s.execute(tmax, output_dt=tmax/60, psi0=psi0)
ts2, psis2 = s2.execute(tmax, output_dt=tmax/60, psi0=psi0)
%%capture
def plot(i):
plt.clf()
plt.plot(s.x, np.real(psis[i]))
plt.plot(s.x, np.real(psis2[i]))
plt.ylim(-1.1, 1.1)
fig = plt.figure()
anim = animation.FuncAnimation(fig, plot, frames=len(psis), interval=20).to_html5_video()
HTML(anim)
%%capture
def plot(i):
plt.clf()
plt.plot(s.x, np.imag(psis[i]))
plt.plot(s.x, np.imag(psis2[i]))
plt.ylim(-1.1, 1.1)
fig = plt.figure()
anim = animation.FuncAnimation(fig, plot, frames=len(psis), interval=20).to_html5_video()
HTML(anim)
%timeit ts, psis = s.execute(tmax, psi0=psi0, output_dt=tmax/10)
%timeit ts2, psis2 = s2.execute(tmax, psi0=psi0, output_dt=tmax/10)
plt.plot(s.x, np.abs(psi))
from scipy.fftpack import fft, ifft, fftfreq
psi_p = fft(psi)
psi2 = ifft(psi_p)
plt.plot(s.x, np.abs(psi2))  # the FFT/IFFT round trip should reproduce |psi|
v = potential.DeltaPotential(1.0)
psi0 = v.get_eigenfunction()
tmax = -2 * np.pi / v.get_eigenenergy()
s = solver.CrankNicolsonSolver(20, 0.1, tmax/2400, v)
s2 = solver.SplitOperatorHalfSpectralSolver(20, 0.1, tmax/2400, v)
ts, psis = s.execute(3*tmax, output_dt=tmax/60, psi0=psi0)
ts2, psis2 = s2.execute(3*tmax, output_dt=tmax/60, psi0=psi0)
%%capture
def plot(i):
plt.clf()
plt.plot(s.x, np.real(psi0(s.x) * np.exp(2j * np.pi * ts[i] / tmax)), 'k--')
plt.plot(s.x, np.real(psis[i]))
plt.plot(s2.x, np.real(psis2[i]))
plt.ylim(-1.1, 1.1)
fig = plt.figure()
anim = animation.FuncAnimation(fig, plot, frames=len(psis), interval=40).to_html5_video()
HTML(anim)
```
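One quick way to compare the two solvers above is to track how well each conserves the wavefunction norm over time. The following is only a sketch reusing names already defined in this notebook (`ts`, `psis`, `ts2`, `psis2`, `s`, `s2`, `wavefunction.norm`); it assumes `wavefunction.norm` accepts the split-operator grid `s2.x` the same way it accepts `s.x`:

```
import matplotlib.pyplot as plt

# Norm of the wavefunction at each output time, for both solvers
norms_cn = [wavefunction.norm(s.x, psi) for psi in psis]    # Crank-Nicolson
norms_so = [wavefunction.norm(s2.x, psi) for psi in psis2]  # split-operator

plt.plot(ts, norms_cn, label='Crank-Nicolson')
plt.plot(ts2, norms_so, label='split-operator')
plt.xlabel('t')
plt.ylabel('norm')
plt.legend()
```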
# Reconstructing an *off-axis* hologram by Fresnel Approximation
Reference: Digital Holography and Wavefront Sensing by Ulf Schnars, Claas Falldorf, John Watson, and Werner Jüptner, Springer-Verlag Berlin Heidelberg, 2016. (Section 3.2)
## Info about the digital hologram:
'ulf7.BMP' is a digital hologram created by recording an object at about 1 meter distance with a HeNe laser (632.8 nm) and an image sensor with 6.8 µm pixel size.
```
# Import libraries related to plotting and mathematical operations
# %matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from PIL import Image
import numpy as np
import ipywidgets as widgets
from IPython.display import display
# Read the hologram image file
hologram = Image.open('ulf7.BMP')
hologram = np.array(hologram).astype(np.float64) # Convert into float type. Crucial for non-integer mathematical operations
# plot the hologram
imgplot = plt.imshow(hologram, cmap="viridis")
```
## Some equations from the book!
The *Fresnel-Kirchhoff* integral describing the diffracted field beyond an aperture is given by the coherent superposition of the secondary waves (section 2.4)
\begin{equation}
\Gamma\left(\xi^{\prime}, \eta^{\prime}\right)=\frac{i}{\lambda} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} A(x, y) \frac{\exp \left(-i \frac{2 \pi}{\lambda} \rho^{\prime}\right)}{\rho^{\prime}} Q d x d y
\end{equation}
where $A(x, y)$ is the complex amplitude in the plane of the diffracting aperture, $\rho^{\prime}$ is the distance between a point in the aperture plane and a point in the observation plane, and $Q$ is the inclination factor, which accounts for the absence of backward propagation of the diffracted optical field. For holograms, $Q$ is approximately equal to 1.
A hologram $h(x,y)$ recorded by a reference light wave $E_{R}(x, y)$ can be reconstructed by a conjugate reference wave $E_{R}^{*}(x, y)$ as described by the following *Fresnel-Kirchhoff* integral
\begin{equation}
\Gamma(\xi, \eta)=\frac{i}{\lambda} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h(x, y) E_{R}^{*}(x, y) \frac{\exp \left(-i \frac{2 \pi}{\lambda} \rho\right)}{\rho} d x d y
\end{equation}
with $\rho = \sqrt{ (x-\xi)^2 + (y-\eta)^2 + d^2 }$. Here $d$ is the distance between the object and hologram planes. Substituting the approximate *Taylor* expansion of $\rho$ into the above equation leads to the Fresnel reconstruction field relation (see section 3.2 of the book)
\begin{aligned} \Gamma(\xi, \eta)=& \frac{i}{\lambda d} \exp \left(-i \frac{2 \pi}{\lambda} d\right) \exp \left[-i \frac{\pi}{\lambda d}\left(\xi^{2}+\eta^{2}\right)\right] \times \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} E_{R}^{*}(x, y) h(x, y) \exp \left[-i \frac{\pi}{\lambda d}\left(x^{2}+y^{2}\right)\right] \exp \left[i \frac{2 \pi}{\lambda d}(x \xi+y \eta)\right] d x d y \end{aligned}
Or, in a digital form by
\begin{aligned} \Gamma(m, n)=& \frac{i}{\lambda d} \exp \left(-i \frac{2 \pi}{\lambda} d\right) \exp \left[-i \pi \lambda d\left(\frac{m^{2}}{N^{2} \Delta x^{2}}+\frac{n^{2}}{N^{2} \Delta y^{2}}\right)\right] \times \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} E_{R}^{*}(k, l) h(k, l) \exp \left[-i \frac{\pi}{\lambda d}\left(k^{2} \Delta x^{2}+l^{2} \Delta y^{2}\right)\right] \exp \left[i 2 \pi\left(\frac{k m}{N}+\frac{l n}{N}\right)\right] \\ =& C \times \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} E_{R}^{*}(k, l) h(k, l) \exp \left[-i \frac{\pi}{\lambda d}\left(k^{2} \Delta x^{2}+l^{2} \Delta y^{2}\right)\right] \exp \left[i 2 \pi\left(\frac{k m}{N}+\frac{l n}{N}\right)\right] \end{aligned}
where $h(k,l)$ is the hologram, $N$ is the number of pixels along one side of the camera sensor (the number of rows is assumed to equal the number of columns; if not, crop or pad the hologram accordingly before these operations), $\lambda$ is the wavelength, $\Delta x$ and $\Delta y$ are the horizontal and vertical spacing of neighboring sensor pixels, and $d$ is the reconstruction distance. Note that the double sum is an IFT (inverse Fourier transform) of the hologram multiplied by an exponential chirp factor. $C$ is just a complex constant which does not affect the reconstruction process, and $E_{R}^{*}(k, l)$ reduces to unity when a plane wave is used as the recording/reconstruction wave.
```
# User defined reconstruction distance
w = widgets.FloatSlider(value=-1.054,min=-2.0,max=2.0,step=0.001,
description='d (in meters):',orientation='horizontal',readout=True,readout_format='.3f',)
display(w)
# User-defined parameters
Nr,Nc = np.shape(hologram) #number of rows and columns in the hologram
wavelength = 632.8e-9 #HeNe laser wavelength in SI units i.e. meters
dx = 6.8e-6 #sensor pixel size in meters
d = w.value #-1.054 #reconstruction distance in meters
# prepare the Fresnel operand for the hologram
Nr = np.linspace(0, Nr-1, Nr)-Nr/2
Nc = np.linspace(0, Nc-1, Nc)-Nc/2
k, l = np.meshgrid(Nc,Nr)
factor = np.multiply(hologram, np.exp(-1j*np.pi/(wavelength*d)*(np.multiply(k, k)*dx**2 + np.multiply(l, l)*dx**2)))
reconstructed_field = np.fft.ifftshift(np.fft.ifft2(np.fft.ifftshift(factor))) # Take inverse Fourier transform of the factor
# plot
I = np.abs(reconstructed_field)/np.max(np.abs(reconstructed_field)) #normalized intensity profile
fig = plt.figure(figsize=(10,10)) #setup a blank figure
plt.imshow(I, cmap="hot", clim=(0.0, 0.3))
plt.colorbar()
```
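As a possible next step, the reconstruction above can be wrapped into a small helper so that different reconstruction distances are easy to try. This is only a sketch under the same assumptions as the cell above (square pixels of size `dx`, plane reference wave); the function name is ours, not from the book:

```
import numpy as np

def fresnel_reconstruct(hologram, wavelength, dx, d):
    # Reconstruct an off-axis hologram at distance d via the discrete Fresnel transform
    Nr, Nc = hologram.shape
    k, l = np.meshgrid(np.arange(Nc) - Nc / 2, np.arange(Nr) - Nr / 2)
    chirp = np.exp(-1j * np.pi / (wavelength * d) * ((k * dx)**2 + (l * dx)**2))
    return np.fft.ifftshift(np.fft.ifft2(np.fft.ifftshift(hologram * chirp)))

# Example (values from the cells above):
# field = fresnel_reconstruct(hologram, 632.8e-9, 6.8e-6, -1.054)
```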
# Lab 02 - Simple Linear Regression
Regression refers to any learning problem that aims to describe the relation between a set of explanatory
variables (i.e. features) and a continuous response (or a set of responses). Therefore our dataset is of the form:
$$S=\left\{\left(\mathbf{x}_i, y_i\right)\right\}^m_{i=1} \quad s.t. \quad \mathbf{x}_i\in\mathbb{R^d},\,\,y_i\in\mathbb{R}$$
In the case of Linear Regression the relation learned is a linear one. That is, we search for a linear function to map
$\mathcal{X}$ to $\mathcal{Y}$. So the hypothesis class of linear regression is:
$$ \mathcal{H}_{reg} = \left\{h:h\left(x_1,\ldots,x_d\right)=w_0 + \sum w_i x_i\right\} $$
Note that the linear function is linear in the parameters $w_0,w_1,\ldots,w_d$. Let us simulate a dataset fitting the case of a simple linear regression:
$$ y_i = w_1 x_i + w_0 \quad i=1,\ldots,m $$
So each hypothesis in the class $\mathcal{H}_{reg}$ is defined by two parameters $w_0,w_1$ - the intercept and slope of
the line. Suppose the data is generated from the following line: $Y=2X+1$, so $w_0=1$ and $w_1=2$. Let us draw and plot
samples from this function.
```
import sys
sys.path.append("../")
from utils import *
```
## Linear Regression
```
w0, w1 = 1, 2
x = np.linspace(0, 100, 10)
y = w1*x + w0
fig = go.Figure([go.Scatter(x=x, y=y, name="Real Model", showlegend=True,
marker=dict(color="black", opacity=.7), line=dict(color="black", dash="dash", width=1))],
layout=go.Layout(title=r"$\text{(1) Simulated Data}$",
xaxis={"title": "x - Explanatory Variable"},
yaxis={"title": "y - Response"},
height=400))
fig.show()
```
Using this sample as a **training set**, let us compute the Ordinary Least Squares (OLS) estimators $\hat{w_0},\widehat{w_1}$ of the model. Then, if we are given a new sample $x_j$ we can predict its response $\hat{y}_j$:
$$ \hat{y}_j = \hat{w_1} x_j + \hat{w}_0 $$
Given the dataset above, what would you expect the output to be?
```
from sklearn.linear_model import LinearRegression
noiseless_model = LinearRegression()
noiseless_model.fit(x.reshape((-1,1)), y)
print("Estimated intercept:", noiseless_model.intercept_)
print("Estimated coefficient:", noiseless_model.coef_[0])
```
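For intuition about what `LinearRegression` just computed, the same OLS estimates can be obtained directly by solving the least-squares problem ourselves. This is only a sketch (not part of the original lab), reusing the `x` and `y` defined above:

```
import numpy as np

# Design matrix with a column of ones for the intercept term
X = np.column_stack([np.ones_like(x), x])

# OLS: w_hat minimizes ||X w - y||^2 (equivalent to the normal equations X^T X w = X^T y)
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print("Closed-form intercept:", w_hat[0])
print("Closed-form slope:", w_hat[1])
```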
## Linear Regression With Noise
As the dataset used to fit the model lies exactly on a straight line, the estimated coefficients are the correct
ones (up to floating point precision). Next, let us add some Gaussian noise to the data and see how it influences our
estimation. So:
$$\forall i \in \left[ m \right]\quad y_i=w_1\cdot x_i + w_0 + \varepsilon_i \quad s.t.\quad
\varepsilon\sim\mathcal{N}\left(0,\sigma^2I_m\right)$$
Namely, the noise of each sample is drawn from a Gaussian with zero mean and variance $\sigma^2$, and is uncorrelated between samples.
*Notice that from now on we mark the $y$'s generated by the noise-less model with `y_`. This is so it is clear that the "real"
$y$'s observed in a given sample are noisy.*
```
if "y_" not in locals(): y_ = y
epsilon = np.random.normal(loc=0, scale=40, size=len(x))
y = y_ + epsilon
fig.add_trace(go.Scatter(x=x, y=y, name="Observed Points", mode="markers", line=dict(width=1)))
fig.update_layout(title=r"$\text{(2) Simulated Data - With Noise}$")
fig.show()
```
Execute the block above several times and notice how the "Observed Points" look different each time. These datasets,
though all generated by the same model, look very different. Try to think:
* What would happen if we attempt fitting a model to these observations (i.e. the ones with the noise)?
* How would it influence our estimation of the coefficients $w_0, w_1$?
* Where will the regression line be?
```
from pandas import DataFrame
model = LinearRegression().fit(x.reshape((-1,1)), y)
DataFrame({"Model":["Noise-less","Noisy"],
"Intercept": [noiseless_model.intercept_, model.intercept_],
"Slope": [noiseless_model.coef_[0], model.coef_[0]]})
y_hat = model.predict(x.reshape(-1,1))
fig.data = [fig.data[0], fig.data[1]]
fig.update_layout(title=r"$\text{(3) Fitted Model Over Noisy Data}$")
fig.add_traces([go.Scatter(x=x, y=y_hat, mode="markers", name="Predicted Responses", marker=dict(color="blue")),
go.Scatter(x=x, y=y_hat, mode="lines", name="Fitted Model", line=dict(color="blue", width=1))])
fig.show()
```
Let us better understand what took place. Schematically, we started with some model
$$ Y=w_1X+w_0 \quad s.t. w_1=2,w_0=1 $$
and obtained a dataset from this model
$$ Y=w_1X + w_0 + \mathcal{N}\left(0,\sigma^2\right) $$
Then, using the dataset we estimated the model parameters to obtain $\widehat{w_1},\widehat{w_0}$. However, we should look
at these steps from two different points of view: the "observer" and the "all-knowing".
- The "observer" is us whenever we work with data. We somehow obtained samples/observations that we assume to be generated
from some "true" function/model $f$. As in reality data is noisy, when we assume something about the "true" function we
also make assumptions about the noise. Then, as we do not know $f$ we try to learn it based on the observations.
- The "all-knowing", unlike the "observer", knows exactly how $f$ looks and for each sample what is the noise.
In the graph above the <span style="color:Black">**Real Model**</span> is only known to the "all-knowing". We, as the
"observer" only witness the <span style="color:red">**Observed Points**</span>. We **assumed** the data came from a linear
model with Gaussian Noise and therefore fitted the OLS estimators $\widehat{w}_1, \widehat{w}_0$. These estimators give
us the <span style="color:blue">**Fitted Model**</span> and a <span style="color:blue">**Predicted Response**</span> to
each observation.
Using these estimators of the model coefficients we can do two things:
- **Inference**: We can study the estimated model. What are the statistical properties of our estimators? How confident are
we in the estimation? Are the features associated with the response, and are they helpful/relevant for predicting/explaining it? Etc.
- **Prediction**: We can use this estimated model to predict the responses of new data-points. How accurate are our predictions? How does the training set (and its size) influence this accuracy?
In the scope of this course we are mainly interested in using the fitted model for prediction, and will only briefly
investigate the properties of our fitted model.
## Multivariate Linear Regression
Lastly, we fit a model to data generated from a more complicated (bivariate) model and answer some inference and prediction questions.
To gain a better understanding, please look at the graph below and answer the question before reading the code.
```
response = lambda x1, x2: 5*x1 + .1*x2 + 3
min_x1, min_x2, max_x1, max_x2 = -10, -10, 10, 10
xv1, xv2 = np.meshgrid(np.linspace(min_x1, max_x1, 10), np.linspace(min_x2, max_x2, 10))
surface = response(xv1, xv2)
x = np.random.uniform((min_x1, min_x2), (max_x1, max_x2), (10, 2))
y_ = response(x[:,0], x[:,1])
y = y_ + np.random.normal(0, 30, len(x))
model = LinearRegression().fit(x, y)
y_hat = model.predict(x)
DataFrame({"Coefficient": [rf"$w_{{0}}$".format(i) for i in range(len(model.coef_)+1)],
"Estimated Value": np.concatenate([[model.intercept_], model.coef_])})
go.Figure([go.Surface(x=xv1, y=xv2, z=surface, opacity=.5, showscale=False),
go.Scatter3d(name="Real (noise-less) Points", x=x[:,0], y=x[:,1], z=y_, mode="markers", marker=dict(color="black", size=2)),
go.Scatter3d(name="Observed Points", x=x[:,0], y=x[:,1], z=y, mode="markers", marker=dict(color="red", size=2)),
go.Scatter3d(name="Predicted Points", x=x[:,0], y=x[:,1], z=y_hat, mode="markers", marker=dict(color="blue", size=2))],
layout=go.Layout(
title=r"$\text{(4) Bivariate Linear Regression}$",
scene=dict(xaxis=dict(title="Feature 1"),
yaxis=dict(title="Feature 2"),
zaxis=dict(title="Response"),
camera=dict(eye=dict(x=-1, y=-2, z=.5)))
)).show()
```
# Time To Think...
In the scenario above we performed a linear regression over observations with more than one feature (i.e. multivariate
linear regression). The gradient-colored surface is the plane defined by the true (noise-free) model. As we have 2 features, this is a 2D plane in the feature-response space.
Try rotating the figure above and looking at the plane from its different axes (such that it looks like a line rather than a plane). This view lets you see the fit between one specific feature and the response, similar to fitting a simple linear regression using that feature alone.
Run the code generating the data and graph with more/fewer samples and higher/lower noise levels. How do these changes influence the quality of the fit? A small sweep along these lines is sketched below.
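Here is a minimal sketch of such a sweep (not part of the original lab), reusing the names already defined in this notebook (`response`, `min_x1`, `max_x1`, `LinearRegression`, `DataFrame`): it refits the bivariate model for several sample sizes and noise levels and compares the recovered coefficients to the true values ($w_1=5$, $w_2=0.1$, intercept $3$).

```
# Refit the bivariate model over a grid of sample sizes and noise levels
rows = []
for m in [10, 100, 1000]:
    for sigma in [5, 30, 100]:
        X = np.random.uniform((min_x1, min_x2), (max_x1, max_x2), (m, 2))
        y_noisy = response(X[:, 0], X[:, 1]) + np.random.normal(0, sigma, m)
        fit = LinearRegression().fit(X, y_noisy)
        rows.append([m, sigma, fit.intercept_, *fit.coef_])
DataFrame(rows, columns=["samples", "noise std", "intercept", "w1", "w2"])
```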
# Introduction to the Data Science Process
<img src="figures/DataScienceLifeCycle.jpg" />
## Table of contents
1. [Introduction](#Introduction)
2. [The problem domain](#The-problem-domain)
3. [Step 1: Answering the question](#Step-1:-Answering-the-question)
4. [Step 2: Checking the data](#Step-2:-Checking-the-data)
5. [Step 3: Tidying the data](#Step-3:-Tidying-the-data)
6. [Step 4: Exploratory analysis](#Step-4:-Exploratory-analysis)
7. [Step 5: Classification](#Step-5:-Classification)
8. [Step 6: Reproducibility](#Step-6:-Reproducibility)
9. [Conclusions](#Conclusions)
10. [Acknowledgements](#Acknowledgements)
## Introduction
[[ go back to the top ]](#Table-of-contents)
In the time it took you to read this sentence, terabytes of data have been collectively generated across the world — more data than any of us could ever hope to process, much less make sense of, on the machines we're using to read this notebook.
In response to this massive influx of data, the field of Data Science has come to the forefront in the past decade. Cobbled together by people from a diverse array of fields — statistics, physics, computer science, design, and many more — the field of Data Science represents our collective desire to understand and harness the abundance of data around us to build a better world.
In this notebook, I'm going to go over a basic Python data analysis pipeline from start to finish to show you what a typical data science workflow looks like.
In addition to providing code examples, I also hope to imbue in you a sense of good practices so you can be a more effective — and more collaborative — data scientist.
I will be following along with the data analysis checklist from [The Elements of Data Analytic Style](https://leanpub.com/datastyle), which I strongly recommend reading as a free and quick guidebook to performing outstanding data analysis.
## The problem domain
[[ go back to the top ]](#Table-of-contents)
For the purposes of this exercise, let's pretend we're working for a start-up company that just got funded to create a smartphone app that automatically identifies species of flowers from pictures taken on the smartphone. We're working with a moderately-sized team of data scientists and will be building part of the data analysis pipeline for this app.
We've been tasked by our company's Head of Data Science to create a demo machine learning model that takes four measurements from the flowers (sepal length, sepal width, petal length, and petal width) and identifies the species based on those measurements alone.
<img src="figures/petal_sepal.jpg" />
We've been given a [data set](data/iris-data.csv) from our field researchers to develop the demo, which only includes measurements for three types of *Iris* flowers:
### *Iris setosa*
<img src="figures/iris_setosa.jpg" />
### *Iris versicolor*
<img src="figures/iris_versicolor.jpg" />
### *Iris virginica*
<img src="figures/iris_virginica.jpg" />
The four measurements we're using currently come from hand-measurements by the field researchers, but they will be automatically measured by an image processing model in the future.
**Note:** The data set we're working with is the famous [*Iris* data set](https://archive.ics.uci.edu/ml/datasets/Iris) — included with this notebook — which I have modified slightly for demonstration purposes.
## Step 1: Answering the question
[[ go back to the top ]](#Table-of-contents)
The first step to any data analysis project is to define the question or problem we're looking to solve, and to define a measure (or set of measures) for our success at solving that task. The data analysis checklist has us answer a handful of questions to accomplish that, so let's work through those questions.
>Did you specify the type of data analytic question (e.g. exploration, association, causality) before touching the data?
We're trying to classify the species (i.e., class) of the flower based on four measurements that we're provided: sepal length, sepal width, petal length, and petal width.
>Did you define the metric for success before beginning?
Let's do that now. Since we're performing classification, we can use [accuracy](https://en.wikipedia.org/wiki/Accuracy_and_precision) — the fraction of correctly classified flowers — to quantify how well our model is performing. Our company's Head of Data has told us that we should achieve at least 90% accuracy.
>Did you understand the context for the question and the scientific or business application?
We're building part of a data analysis pipeline for a smartphone app that will be able to classify the species of flowers from pictures taken on the smartphone. In the future, this pipeline will be connected to another pipeline that automatically measures from pictures the traits we're using to perform this classification.
>Did you record the experimental design?
Our company's Head of Data has told us that the field researchers are hand-measuring 50 randomly-sampled flowers of each species using a standardized methodology. The field researchers take pictures of each flower they sample from pre-defined angles so the measurements and species can be confirmed by the other field researchers at a later point. At the end of each day, the data is compiled and stored on a private company GitHub repository.
>Did you consider whether the question could be answered with the available data?
The data set we currently have is only for three types of *Iris* flowers. The model built off of this data set will only work for those *Iris* flowers, so we will need more data to create a general flower classifier.
<hr />
Notice that we've spent a fair amount of time working on the problem without writing a line of code or even looking at the data.
**Thinking about and documenting the problem we're working on is an important step to performing effective data analysis that often goes overlooked.** Don't skip it.
## Step 2: Checking the data
[[ go back to the top ]](#Table-of-contents)
The next step is to look at the data we're working with. Even curated data sets can have errors in them, and it's vital that we spot these errors before investing too much time in our analysis.
Generally, we're looking to answer the following questions:
* Is there anything wrong with the data?
* Are there any quirks with the data?
* Do I need to fix or remove any of the data?
Let's start by reading the data into a pandas DataFrame.
```
import pandas as pd
iris_data = pd.read_csv('data/iris-data.csv')
iris_data.head()
```
We're in luck! The data seems to be in a usable format.
The first row in the data file defines the column headers, and the headers are descriptive enough for us to understand what each column represents. The headers even give us the units that the measurements were recorded in, just in case we needed to know at a later point in the project.
Each row following the first row represents an entry for a flower: four measurements and one class, which tells us the species of the flower.
### Missing Data
**One of the first things we should look for is missing data.** Thankfully, the field researchers already told us that they put a 'NA' into the spreadsheet when they were missing a measurement.
We can tell pandas to automatically identify missing values if it knows our missing value marker.
```
iris_data = pd.read_csv('data/iris-data.csv', na_values=['NA'])
```
Voilà! Now pandas knows to treat rows with 'NA' as missing values.
### Distribution of Data
Next, it's always a good idea to look at the distribution of our data — especially the outliers.
Let's start by printing out some summary statistics about the data set.
```
iris_data.describe()
```
We can see several useful values from this table. For example, we see that five `petal_width_cm` entries are missing.
### Visualization
If you ask me, though, tables like this are rarely useful, unless we know that our data should fall in a particular range. It's usually better to visualize the data in some way. Visualization makes outliers and errors immediately stand out, whereas they might go unnoticed in a large table of numbers.
Since we know we're going to be plotting in this section, let's set up the notebook so we can plot inside of it.
```
# This line tells the notebook to show plots inside of the notebook
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
```
Next, let's create a **scatterplot matrix**. Scatterplot matrices plot the distribution of each column along the diagonal, and then plot a scatterplot for each pairwise combination of columns. They make for an efficient tool to look for errors in our data.
We can even have the plotting package color each entry by its class to look for trends within the classes.
```
# We have to temporarily drop the rows with 'NA' values
# because the Seaborn plotting function does not know
# what to do with them
sb.pairplot(iris_data.dropna(), hue='class')
;
```
From the scatterplot matrix, we can already see some issues with the data set:
1. There are five classes when there should only be three, meaning there were some coding errors.
2. There are some clear outliers in the measurements that may be erroneous: one `sepal_width_cm` entry for `Iris-setosa` falls well outside its normal range, and several `sepal_length_cm` entries for `Iris-versicolor` are near-zero for some reason.
3. We had to drop those rows with missing values.
In all of these cases, we need to figure out what to do with the erroneous data. Which takes us to the next step...
## Step 3: Tidying the data
[[ go back to the top ]](#Table-of-contents)
Now that we've identified several errors in the data set, we need to fix them before we proceed with the analysis.
### Mislabeled Data
Let's walk through the issues one-by-one.
>There are **five** classes when there should only be **three**, meaning there were some **coding errors**.
After talking with the field researchers, it sounds like one of them forgot to add `Iris-` before their `Iris-versicolor` entries. The other extraneous class, `Iris-setossa`, was simply a typo that they forgot to fix.
Let's use the DataFrame to fix these errors.
```
iris_data.loc[iris_data['class'] == 'versicolor', 'class'] = 'Iris-versicolor'
iris_data.loc[iris_data['class'] == 'Iris-setossa', 'class'] = 'Iris-setosa'
iris_data['class'].unique()
```
Much better! Now we only have three class types. Imagine how embarrassing it would've been to create a model that used the wrong classes.
### Outliers
>There are some clear outliers in the measurements that may be erroneous: one `sepal_width_cm` entry for `Iris-setosa` falls well outside its normal range, and several `sepal_length_cm` entries for `Iris-versicolor` are near-zero for some reason.
Fixing outliers can be tricky business. It's rarely clear whether the outlier was caused by measurement error, recording the data in improper units, or if the outlier is a real anomaly. For that reason, we should be judicious when working with outliers: if we decide to exclude any data, we need to make sure to document what data we excluded and provide solid reasoning for excluding that data. (i.e., "This data didn't fit my hypothesis" will not stand peer review.)
In the case of the one anomalous entry for `Iris-setosa`, let's say our field researchers know that it's impossible for `Iris-setosa` to have a sepal width below 2.5 cm. Clearly this entry was made in error, and we're better off just scrapping the entry than spending hours finding out what happened.
```
# This line drops any 'Iris-setosa' rows with a sepal width less than 2.5 cm
iris_data = iris_data.loc[(iris_data['class'] != 'Iris-setosa') | (iris_data['sepal_width_cm'] >= 2.5)]
iris_data.loc[iris_data['class'] == 'Iris-setosa', 'sepal_width_cm'].hist()
;
```
Excellent! Now all of our `Iris-setosa` rows have a sepal width greater than 2.5.
### Incorrect Scaling
The next data issue to address is the several near-zero sepal lengths for the `Iris-versicolor` rows. Let's take a look at those rows.
```
iris_data.loc[(iris_data['class'] == 'Iris-versicolor') &
(iris_data['sepal_length_cm'] < 1.0)]
```
How about that? All of these near-zero `sepal_length_cm` entries seem to be off by **two orders of magnitude**, as if they had been recorded in **meters** instead of **centimeters**.
After some brief correspondence with the field researchers, we find that one of them forgot to convert those measurements to centimeters. Let's do that for them.
```
iris_data.loc[(iris_data['class'] == 'Iris-versicolor') &
(iris_data['sepal_length_cm'] < 1.0),
'sepal_length_cm'] *= 100.0
iris_data.loc[iris_data['class'] == 'Iris-versicolor', 'sepal_length_cm'].hist()
;
```
Phew! Good thing we fixed those outliers. They could've really thrown our analysis off.
### Missing Values
>We had to drop those rows with missing values.
Let's take a look at the rows with missing values:
```
iris_data.loc[(iris_data['sepal_length_cm'].isnull()) |
(iris_data['sepal_width_cm'].isnull()) |
(iris_data['petal_length_cm'].isnull()) |
(iris_data['petal_width_cm'].isnull())]
```
It's not ideal that we had to drop those rows, especially considering they're all `Iris-setosa` entries. Since it seems like the missing data is systematic — all of the missing values are in the same column for the same *Iris* type — this error could potentially bias our analysis.
One way to deal with missing data is **mean imputation**: If we know that the values for a measurement fall in a certain range, we can fill in empty values with the average of that measurement.
Let's see if we can do that here.
```
iris_data.loc[iris_data['class'] == 'Iris-setosa', 'petal_width_cm'].hist()
;
```
Most of the petal widths for `Iris-setosa` fall within the 0.2-0.3 range, so let's fill in these entries with the average measured petal width.
```
average_petal_width = iris_data.loc[iris_data['class'] == 'Iris-setosa', 'petal_width_cm'].mean()
iris_data.loc[(iris_data['class'] == 'Iris-setosa') &
(iris_data['petal_width_cm'].isnull()),
'petal_width_cm'] = average_petal_width
iris_data.loc[(iris_data['class'] == 'Iris-setosa') &
(iris_data['petal_width_cm'] == average_petal_width)]
iris_data.loc[(iris_data['sepal_length_cm'].isnull()) |
(iris_data['sepal_width_cm'].isnull()) |
(iris_data['petal_length_cm'].isnull()) |
(iris_data['petal_width_cm'].isnull())]
```
Great! Now we've recovered those rows and no longer have missing data in our data set.
**Note:** If you don't feel comfortable imputing your data, you can drop all rows with missing data with the `dropna()` call:
iris_data.dropna(inplace=True)
### Save Clean Data
After all this hard work, we don't want to repeat this process every time we work with the data set. **Let's save the tidied data file *as a separate file* and work directly with that data file from now on**.
```
iris_data.to_csv('data/iris-data-clean.csv', index=False)
iris_data_clean = pd.read_csv('data/iris-data-clean.csv')
```
### Visualization
Now, let's take a look at the scatterplot matrix now that we've tidied the data.
```
sb.pairplot(iris_data_clean, hue='class')
;
```
### Summary
Of course, I purposely inserted numerous errors into this data set to demonstrate some of the many possible scenarios you may face while tidying your data.
The general takeaways here should be:
* Make sure your data is encoded properly
* Make sure your data falls within the expected range, and use domain knowledge whenever possible to define that expected range
* Deal with missing data in one way or another: replace it if you can or drop it
* Never tidy your data manually because that is not easily reproducible
* Use code as a record of how you tidied your data
* Plot everything you can about the data at this stage of the analysis so you can *visually* confirm everything looks correct
## Testing our data
[[ go back to the top ]](#Table-of-contents)
Early in my career, I was exposed to a great idea: **We should test our data**. Just how we use unit tests to verify our expectations from code, we can similarly set up unit tests to verify our expectations about a data set.
We can quickly test our data using `assert` statements: We assert that something must be true, and if it is, then nothing happens and the notebook continues running. However, if our assertion is wrong, then the notebook stops running and brings it to our attention. For example,
```Python
assert 1 == 2
```
will raise an `AssertionError` and stop execution of the notebook because the assertion failed.
Let's test a few things that we know about our data set now.
```
# We know that we should only have three classes
assert len(iris_data_clean['class'].unique()) == 3
# We know that sepal lengths for 'Iris-versicolor' should never be below 2.5 cm
assert iris_data_clean.loc[iris_data_clean['class'] == 'Iris-versicolor', 'sepal_length_cm'].min() >= 2.5
# We know that our data set should have no missing measurements
assert len(iris_data_clean.loc[(iris_data_clean['sepal_length_cm'].isnull()) |
(iris_data_clean['sepal_width_cm'].isnull()) |
(iris_data_clean['petal_length_cm'].isnull()) |
(iris_data_clean['petal_width_cm'].isnull())]) == 0
```
And so on. If any of these expectations are violated, then our analysis immediately stops and we have to return to the tidying stage.
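Going one small step further (a sketch, not part of the original checklist), these assertions can be collected into a helper function so they are easy to re-run after every tidying step; the function name here is ours:

```
def validate_iris_data(df):
    # Only the three expected classes should be present
    assert len(df['class'].unique()) == 3
    # Sepal lengths for 'Iris-versicolor' should never be below 2.5 cm
    assert df.loc[df['class'] == 'Iris-versicolor', 'sepal_length_cm'].min() >= 2.5
    # No missing measurements in any of the four measurement columns
    assert df[['sepal_length_cm', 'sepal_width_cm',
               'petal_length_cm', 'petal_width_cm']].notnull().all().all()

validate_iris_data(iris_data_clean)
```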
## Step 4: Exploratory analysis
[[ go back to the top ]](#Table-of-contents)
Now after spending entirely too much time tidying our data, we can start analyzing it!
Exploratory analysis is the step where we start delving deeper into the data set beyond the outliers and errors. We'll be looking to answer questions such as:
* How is my data distributed?
* Are there any correlations in my data?
* Are there any confounding factors that explain these correlations?
This is the stage where we plot all the data in as many ways as possible. Create many charts, but don't bother making them pretty — these charts are for internal use.
Let's return to that scatterplot matrix that we used earlier.
```
sb.pairplot(iris_data_clean)
;
```
### Inspection of Anomalous Data
Our data is normally distributed for the most part, which is great news if we plan on using any modeling methods that assume the data is normally distributed.
There's **something strange going on with the petal measurements**. Maybe it's something to do with the different `Iris` types.
Let's color code the data by the class again to see if that clears things up.
```
sb.pairplot(iris_data_clean, hue='class')
;
```
Sure enough, the strange distribution of the petal measurements exists **because of the different species**. This is actually **great news** for our classification task since it means that the petal measurements will make it easy to distinguish between `Iris-setosa` and the other `Iris` types.
Distinguishing `Iris-versicolor` and `Iris-virginica` will prove more difficult given how much their measurements overlap.
There are also correlations between petal length and petal width, as well as sepal length and sepal width. The field biologists assure us that this is to be expected: Longer flower petals also tend to be wider, and the same applies for sepals.
We can also make **violin plots** of the data to compare the measurement distributions of the classes. Violin plots contain the same information as [box plots](https://en.wikipedia.org/wiki/Box_plot), but also scales the box according to the density of the data.
```
plt.figure(figsize=(10, 10))
for column_index, column in enumerate(iris_data_clean.columns):
if column == 'class':
continue
plt.subplot(2, 2, column_index + 1)
sb.violinplot(x='class', y=column, data=iris_data_clean)
```
Enough flirting with the data. Let's get to modeling.
## Step 5: Classification
[[ go back to the top ]](#Table-of-contents)
Wow, all this work and we *still* haven't modeled the data!
As tiresome as it can be, tidying and exploring our data is a vital component to any data analysis. If we had jumped straight to the modeling step, we would have created a faulty classification model.
Remember: **Bad data leads to bad models.** Always check your data first.
<hr />
Assured that our data is now as clean as we can make it — and armed with some cursory knowledge of the distributions and relationships in our data set — it's time to make the next big step in our analysis: Splitting the data into training and testing sets.
A **training set** is a random subset of the data that we use to train our models.
A **testing set** is a random subset of the data (mutually exclusive from the training set) that we use to validate our models on unforeseen data.
Especially in sparse data sets like ours, it's easy for models to **overfit** the data: The model will learn the training set so well that it won't be able to handle most of the cases it's never seen before. This is why it's important for us to build the model with the training set, but score it with the testing set.
Note that once we split the data into a training and testing set, we should treat the testing set like it no longer exists: We cannot use any information from the testing set to build our model or else we're cheating.
### Data Set-up
Let's set up our data first.
```
iris_data_clean = pd.read_csv('data/iris-data-clean.csv')
# We're using all four measurements as inputs
# Note that scikit-learn expects each entry to be a list of values, e.g.,
# [ [val1, val2, val3],
# [val1, val2, val3],
# ... ]
# such that our input data set is represented as a list of lists
# We can extract the data in this format from pandas like this:
all_inputs = iris_data_clean[['sepal_length_cm', 'sepal_width_cm',
'petal_length_cm', 'petal_width_cm']].values
# Similarly, we can extract the class labels
all_labels = iris_data_clean['class'].values
# Make sure that you don't mix up the order of the entries
# all_inputs[5] inputs should correspond to the class in all_labels[5]
# Here's what a subset of our inputs looks like:
all_inputs[:5]
```
Now our data is ready to be split.
```
from sklearn.model_selection import train_test_split
(training_inputs,
testing_inputs,
training_classes,
testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25, random_state=1)
```
With our data split, we can start fitting models to our data. Our company's Head of Data is all about **decision tree classifiers**, so let's start with one of those.
### Model Selection: Decision Tree
Decision tree classifiers are incredibly simple in theory. In their simplest form, decision tree classifiers ask a series of Yes/No questions about the data — each time getting closer to finding out the class of each entry — until they either classify the data set perfectly, or simply can't differentiate a set of entries. Think of it like a game of [Twenty Questions](https://en.wikipedia.org/wiki/Twenty_Questions), except the computer is *much*, *much* better at it.
Here's an example decision tree classifier:
<img src="figures/iris_dtc.png" />
Notice how the classifier asks Yes/No questions about the data — whether a certain feature is <= 1.75, for example — so it can differentiate the records. This is the essence of every decision tree.
The nice part about decision tree classifiers is that they are **scale-invariant**, i.e., the scale of the features does not affect their performance, unlike many Machine Learning models. In other words, it doesn't matter if our features range from 0 to 1 or 0 to 1,000; decision tree classifiers will work with them just the same.
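As a quick sketch (not part of the original analysis), we could check this claim by rescaling one feature and confirming that the accuracy is unchanged: only the split thresholds move, not the resulting partitions. This assumes the `all_inputs` and `all_labels` arrays defined earlier are available.
```
# A minimal sketch, assuming the `all_inputs`/`all_labels` arrays defined earlier:
# rescaling a feature changes the split thresholds but not the partitions,
# so the accuracy should be identical.
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split  # already imported above

scaled_inputs = all_inputs.copy()
scaled_inputs[:, 0] *= 1000.0  # pretend sepal length was recorded in different units

for inputs in (all_inputs, scaled_inputs):
    (tr_in, te_in, tr_lab, te_lab) = train_test_split(inputs, all_labels,
                                                      test_size=0.25, random_state=1)
    print(DecisionTreeClassifier(random_state=1).fit(tr_in, tr_lab).score(te_in, te_lab))
```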
There are several [parameters](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html) that we can tune for decision tree classifiers, but for now let's use a basic decision tree classifier.
```
from sklearn.tree import DecisionTreeClassifier
# Create the classifier
decision_tree_classifier = DecisionTreeClassifier()
# Train the classifier on the training set
decision_tree_classifier.fit(training_inputs, training_classes)
# Validate the classifier on the testing set using classification accuracy
decision_tree_classifier.score(testing_inputs, testing_classes)
```
Heck yeah! Our model achieves 97% classification accuracy without much effort.
However, there's a catch: Depending on how our training and testing set was sampled, our model can achieve anywhere from 80% to 100% accuracy:
```
model_accuracies = []
for repetition in range(1000):
(training_inputs,
testing_inputs,
training_classes,
testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25)
decision_tree_classifier = DecisionTreeClassifier()
decision_tree_classifier.fit(training_inputs, training_classes)
classifier_accuracy = decision_tree_classifier.score(testing_inputs, testing_classes)
model_accuracies.append(classifier_accuracy)
plt.hist(model_accuracies)
;
```
It's obviously a problem that our model performs quite differently depending on the subset of the data it's trained on. This phenomenon is known as **overfitting**: The model is learning to classify the training set so well that it doesn't generalize and perform well on data it hasn't seen before.
### Cross-validation
[[ go back to the top ]](#Table-of-contents)
This problem is the main reason that most data scientists perform ***k*-fold cross-validation** on their models: Split the original data set into *k* subsets, use one of the subsets as the testing set, and use the remaining subsets as the training set. This process is then repeated *k* times such that each subset is used as the testing set exactly once.
10-fold cross-validation is the most common choice, so let's use that here. Performing 10-fold cross-validation on our data set looks something like this:
(each square is an entry in our data set)
```
import numpy as np
from sklearn.model_selection import StratifiedKFold
def plot_cv(cv, features, labels):
masks = []
for train, test in cv.split(features, labels):
mask = np.zeros(len(labels), dtype=bool)
mask[test] = 1
masks.append(mask)
plt.figure(figsize=(15, 15))
plt.imshow(masks, interpolation='none', cmap='gray_r')
plt.ylabel('Fold')
plt.xlabel('Row #')
plot_cv(StratifiedKFold(n_splits=10), all_inputs, all_labels)
```
You'll notice that we used **Stratified *k*-fold cross-validation** in the code above. Stratified *k*-fold keeps the class proportions the same across all of the folds, which is vital for maintaining a representative subset of our data set. (e.g., so we don't have 100% `Iris-setosa` entries in one of the folds.)
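As a small sanity check (a sketch using the `all_inputs` and `all_labels` arrays from earlier), we can count the classes that land in each test fold and confirm they stay balanced:
```
# Count the class labels in each stratified test fold
from collections import Counter
from sklearn.model_selection import StratifiedKFold

for fold, (train_index, test_index) in enumerate(StratifiedKFold(n_splits=10).split(all_inputs, all_labels)):
    print(fold, dict(Counter(all_labels[test_index])))
```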
We can perform 10-fold cross-validation on our model with the following code:
```
from sklearn.model_selection import cross_val_score
decision_tree_classifier = DecisionTreeClassifier()
# cross_val_score returns a list of the scores, which we can visualize
# to get a reasonable estimate of our classifier's performance
cv_scores = cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10)
plt.hist(cv_scores)
plt.title('Average score: {}'.format(np.mean(cv_scores)))
;
```
Now we have a much more consistent rating of our classifier's general classification accuracy.
### Parameter tuning
[[ go back to the top ]](#Table-of-contents)
Every Machine Learning model comes with a variety of parameters to tune, and these parameters can be vitally important to the performance of our classifier. For example, if we severely limit the depth of our decision tree classifier:
```
decision_tree_classifier = DecisionTreeClassifier(max_depth=1)
cv_scores = cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10)
plt.hist(cv_scores)
plt.title('Average score: {}'.format(np.mean(cv_scores)))
;
```
the classification accuracy falls tremendously.
Therefore, we need to find a systematic method to discover the best parameters for our model and data set.
The most common method for model parameter tuning is **Grid Search**. The idea behind Grid Search is simple: try every combination of parameters in a grid, evaluate each combination with cross-validation, and keep the best-performing one. You can then narrow the grid around that region and repeat the process until you're satisfied with the parameters.
Let's tune our decision tree classifier. We'll stick to only two parameters for now, but it's possible to simultaneously explore dozens of parameters if we want.
```
from sklearn.model_selection import GridSearchCV
decision_tree_classifier = DecisionTreeClassifier()
parameter_grid = {'max_depth': [1, 2, 3, 4, 5],
'max_features': [1, 2, 3, 4]}
cross_validation = StratifiedKFold(n_splits=10)
grid_search = GridSearchCV(decision_tree_classifier,
param_grid=parameter_grid,
cv=cross_validation)
grid_search.fit(all_inputs, all_labels)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))
```
Now let's visualize the grid search to see how the parameters interact.
```
grid_visualization = grid_search.cv_results_['mean_test_score']
grid_visualization.shape = (5, 4)
sb.heatmap(grid_visualization, cmap='Blues', annot=True)
plt.xticks(np.arange(4) + 0.5, grid_search.param_grid['max_features'])
plt.yticks(np.arange(5) + 0.5, grid_search.param_grid['max_depth'])
plt.xlabel('max_features')
plt.ylabel('max_depth')
;
```
Now we have a better sense of the parameter space: We know that we need a `max_depth` of at least 2 to allow the decision tree to make more than a single split.
`max_features` doesn't really seem to make a big difference here as long as we have 2 of them, which makes sense since our data set has only 4 features and is relatively easy to classify. (Remember, one of our data set's classes was easily separable from the rest based on a single feature.)
Let's go ahead and use a broad grid search to find the best settings for a handful of parameters.
```
decision_tree_classifier = DecisionTreeClassifier()
parameter_grid = {'criterion': ['gini', 'entropy'],
'splitter': ['best', 'random'],
'max_depth': [1, 2, 3, 4, 5],
'max_features': [1, 2, 3, 4]}
cross_validation = StratifiedKFold(n_splits=10)
grid_search = GridSearchCV(decision_tree_classifier,
param_grid=parameter_grid,
cv=cross_validation)
grid_search.fit(all_inputs, all_labels)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))
```
Now we can take the best classifier from the Grid Search and use that:
```
decision_tree_classifier = grid_search.best_estimator_
decision_tree_classifier
```
We can even visualize the decision tree with [GraphViz](http://www.graphviz.org/) to see how it's making the classifications:
```
import sklearn.tree as tree
with open('figures/iris_dtc.dot', 'w') as out_file:
out_file = tree.export_graphviz(decision_tree_classifier, out_file=out_file)
```
<img src="figures/iris_dtc.png" />
(This classifier may look familiar from earlier in the notebook.)
Alright! We finally have our demo classifier. Let's create some visuals of its performance so we have something to show our company's Head of Data.
```
dt_scores = cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10)
sb.boxplot(dt_scores)
sb.stripplot(dt_scores, jitter=True, color='black')
;
```
Hmmm... that's a little boring by itself though. How about we compare another classifier to see how they perform?
### Model Selection: Random Forest
We already know from previous projects that **Random Forest classifiers** usually work better than individual decision trees. A common problem that decision trees face is that they're prone to overfitting: They complexify to the point that they classify the training set near-perfectly, but fail to generalize to data they have not seen before.
**Random Forest classifiers** work around that limitation by creating a whole bunch of decision trees (hence "forest") — each trained on random subsets of training samples (drawn with replacement) and features (drawn without replacement) — and have the decision trees work together to make a more accurate classification.
Let that be a lesson for us: **Even in Machine Learning, we get better results when we work together!**
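As a rough illustration of the "random subsets of training samples" idea (a sketch only; scikit-learn handles this internally), here is what drawing a bootstrap sample of rows looks like:
```
# Draw one bootstrap sample of row indices: same size as the data, with replacement
import numpy as np

rng = np.random.default_rng(0)
n_rows = len(all_labels)
bootstrap_rows = rng.choice(n_rows, size=n_rows, replace=True)
print(len(set(bootstrap_rows)), "unique rows out of", n_rows)  # roughly 63% unique, on average
```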
Let's see if a Random Forest classifier works better here.
The great part about scikit-learn is that the training, testing, parameter tuning, etc. process is the same for all models, so we only need to plug in the new classifier.
```
from sklearn.ensemble import RandomForestClassifier
random_forest_classifier = RandomForestClassifier()
parameter_grid = {'n_estimators': [10, 25, 50, 100],
'criterion': ['gini', 'entropy'],
'max_features': [1, 2, 3, 4]}
cross_validation = StratifiedKFold(n_splits=10)
grid_search = GridSearchCV(random_forest_classifier,
param_grid=parameter_grid,
cv=cross_validation)
grid_search.fit(all_inputs, all_labels)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))
grid_search.best_estimator_
```
### Model Comparison: Decision Tree vs. Random Forest
Now we can compare their performance:
```
random_forest_classifier = grid_search.best_estimator_
rf_df = pd.DataFrame({'accuracy': cross_val_score(random_forest_classifier, all_inputs, all_labels, cv=10),
'classifier': ['Random Forest'] * 10})
dt_df = pd.DataFrame({'accuracy': cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10),
'classifier': ['Decision Tree'] * 10})
both_df = pd.concat([rf_df, dt_df])  # pd.concat replaces the deprecated DataFrame.append
sb.boxplot(x='classifier', y='accuracy', data=both_df)
sb.stripplot(x='classifier', y='accuracy', data=both_df, jitter=True, color='black')
;
```
How about that? They both seem to perform about the same on this data set. This is probably because of the limitations of our data set: We have only 4 features to make the classification, and Random Forest classifiers excel when there are hundreds of possible features to look at. In other words, there wasn't much room for improvement with this data set.
## Step 6: Reproducibility
[[ go back to the top ]](#Table-of-contents)
Ensuring that our work is reproducible is the last and — arguably — most important step in any analysis. **As a rule, we shouldn't place much weight on a discovery that can't be reproduced**. As such, if our analysis isn't reproducible, we might as well not have done it.
Notebooks like this one go a long way toward making our work reproducible. Since we documented every step as we moved along, we have a written record of what we did and why we did it — both in text and code.
Beyond recording what we did, we should also document what software and hardware we used to perform our analysis. This typically goes at the top of our notebooks so our readers know what tools to use.
[Sebastian Raschka](http://sebastianraschka.com/) created a handy [notebook tool](https://github.com/rasbt/watermark), `watermark`, for exactly this purpose.
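A minimal sketch of how it can be used at the top of a notebook (the exact flags are worth checking against the watermark documentation):
```
# Report the Python version, machine details, and the versions of key packages
%load_ext watermark
%watermark -v -m -p pandas,numpy,seaborn,sklearn
```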
Finally, let's extract the core of our work from Steps 1-5 and turn it into a single pipeline.
```
%matplotlib inline
import pandas as pd
import seaborn as sb
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score
# We can jump directly to working with the clean data because we saved our cleaned data set
iris_data_clean = pd.read_csv('data/iris-data-clean.csv')
# Testing our data: Our analysis will stop here if any of these assertions are wrong
# We know that we should only have three classes
assert len(iris_data_clean['class'].unique()) == 3
# We know that sepal lengths for 'Iris-versicolor' should never be below 2.5 cm
assert iris_data_clean.loc[iris_data_clean['class'] == 'Iris-versicolor', 'sepal_length_cm'].min() >= 2.5
# We know that our data set should have no missing measurements
assert len(iris_data_clean.loc[(iris_data_clean['sepal_length_cm'].isnull()) |
(iris_data_clean['sepal_width_cm'].isnull()) |
(iris_data_clean['petal_length_cm'].isnull()) |
(iris_data_clean['petal_width_cm'].isnull())]) == 0
all_inputs = iris_data_clean[['sepal_length_cm', 'sepal_width_cm',
'petal_length_cm', 'petal_width_cm']].values
all_labels = iris_data_clean['class'].values
# This is the classifier that came out of Grid Search
random_forest_classifier = RandomForestClassifier(criterion='gini', max_features=3, n_estimators=50)
# All that's left to do now is plot the cross-validation scores
rf_classifier_scores = cross_val_score(random_forest_classifier, all_inputs, all_labels, cv=10)
sb.boxplot(rf_classifier_scores)
sb.stripplot(rf_classifier_scores, jitter=True, color='black')
# ...and show some of the predictions from the classifier
(training_inputs,
testing_inputs,
training_classes,
testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25)
random_forest_classifier.fit(training_inputs, training_classes)
for input_features, prediction, actual in zip(testing_inputs[:10],
random_forest_classifier.predict(testing_inputs[:10]),
testing_classes[:10]):
print('{}\t-->\t{}\t(Actual: {})'.format(input_features, prediction, actual))
```
There we have it: We have a complete and reproducible Machine Learning pipeline to demo to our company's Head of Data. We've met the success criteria that we set from the beginning (>90% accuracy), and our pipeline is flexible enough to handle new inputs or flowers when that data set is ready. Not bad for our first week on the job!
## Conclusions
[[ go back to the top ]](#Table-of-contents)
I hope you found this example useful and learned at least one new trick by going through it.
This notebook hopefully highlighted the **process** of doing **data science** correctly, by:
- asking the right questions
- defining the problem to solve
- exploring the dataset
- fixing issues in the data
- determining the correct model to use
- producing "user friendly results"
- making your results reproducible
## Acknowledgements
[[ go back to the top ]](#Table-of-contents)
This course (not just this notebook) was built using material from my private industry and academic experience, as well as material borrowed from:
- UCLA ECE 239AS
- UPenn CIS 229
- UPenn CIS 520
- Stanford CS 229
- Python Data Science Handbook
- Machine Learning Mastery
- Towards Data Science
- Randy Olson's data analysis and machine learning projects.
- Many thanks to [Andreas Mueller](http://amueller.github.io/) for some of his [examples](https://github.com/amueller/scipy_2015_sklearn_tutorial) in the Machine Learning section. I drew inspiration from several of his excellent examples.
- Many thanks to Kaggle for the datasets
- Numerous others that I cannot remember.
# 110 - First perceptron with pytorch
Implement the forward (prediction) and backward (training) passes with [pytorch](http://pytorch.org/).
**Note:** install [tqdm](https://pypi.python.org/pypi/tqdm) if not installed: ``!pip install tqdm``
```
import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F  # functional ops used below (tanh, log_softmax)
import torch.optim as optim
from torch.autograd import Variable
print("torch", torch.__version__)
from torchvision import datasets, transforms
from tqdm import tqdm
from sklearn.datasets import load_iris
from sklearn.preprocessing import LabelBinarizer
%matplotlib inline
X, Y = load_iris(return_X_y=True)
X = X.astype("float32")
X.shape, Y.shape
ftrain = np.arange(X.shape[0]) % 4 != 0
Xtrain, Ytrain = X[ftrain, :], Y[ftrain]
Xtest, Ytest = X[~ftrain, :], Y[~ftrain]
Xtrain.shape, Ytrain.shape, Xtest.shape, Ytest.shape
BATCH_SIZE = 64
TEST_BATCH_SIZE = 64
N_EPOCHS = 1000
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(4, 20)
self.fc2 = nn.Linear(20, 3)
def forward(self, x):
x = F.tanh(self.fc1(x))
x = self.fc2(x)
return F.log_softmax(x, dim=-1)
model = Net()
#optimizer = optim.SGD(model.parameters(), lr=1e-1, momentum=0.8)
optimizer = optim.Adam(model.parameters())
loss_fn = nn.NLLLoss()
Xtrain_ = Variable(torch.from_numpy(Xtrain))
Xtest_ = Variable(torch.from_numpy(Xtest))
Ytrain_ = Variable(torch.from_numpy(Ytrain.astype(np.int64)))
Ytest_ = Variable(torch.from_numpy(Ytest.astype(np.int64)))
perfs = []
for t in range(1, N_EPOCHS + 1):
# Before the backward pass, use the optimizer object to zero all of the
# gradients for the variables it will update (which are the learnable weights
# of the model)
optimizer.zero_grad()
# Forward pass: compute predicted y by passing x to the model.
Ypred = model(Xtrain_)
# Compute and print loss.
loss = loss_fn(Ypred , Ytrain_)
# Backward pass: compute gradient of the loss with respect to model
# parameters
loss.backward()
# Calling the step function on an Optimizer makes an update to its
# parameters
optimizer.step()
Ypred_test = model(Xtest_)
loss_test = loss_fn(Ypred_test, Ytest_)
pred = Ypred_test.data.max(1, keepdim=True)[1] # get the index of the max log-probability
accuracy = pred.eq(Ytest_.data.view_as(pred)).cpu().sum().item() / Ytest.size
perfs.append([t, loss.item(), loss_test.data.item(), accuracy])
df_perfs = pd.DataFrame(perfs, columns=["epoch", "train_loss", "test_loss", "accuracy"]).set_index("epoch")
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
print("Last accuracy %.3f" % df_perfs.accuracy.iloc[-1])
print("Best accuracy %.3f" % df_perfs.accuracy.max())
print("Last test loss %.4f" % df_perfs.test_loss.iloc[-1])
df_perfs[["train_loss", "test_loss"]].plot(ax=ax1);
df_perfs[["accuracy"]].plot(ax=ax2);
plt.ylim(bottom=0.7);
```
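One small note on the loop above: the test-set evaluation is run inside the training loop without disabling gradient tracking. A minimal sketch of a final evaluation wrapped in `torch.no_grad()`, reusing the `model`, `Xtest_` and `Ytest_` defined above:
```
# Inference only: no autograd graph is needed
with torch.no_grad():
    log_probs = model(Xtest_)
    predictions = log_probs.argmax(dim=1)
    accuracy = (predictions == Ytest_).float().mean().item()
print("Final test accuracy: %.3f" % accuracy)
```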
# Playing with the dictionary to help in Wordle
[Wordle](https://www.powerlanguage.co.uk/wordle/) is a fun little game.
Besides being fun, it offers the opportunity of playing a bit with Python and Jupyter. Both are things that I am trying to learn.
First let's load a word list from the computer's dictionary.
```
word_list = open("/usr/share/dict/words","r").readlines()
word_list[:5]
```
Then let's clean the word list up a bit, keeping only the words with 5 letters.
```
words = [x.strip().lower() for x in word_list if len(x.strip()) == 5 ]
```
Count the letter appearances:
```
from collections import defaultdict
freq = defaultdict(int) # a dictionary with default value 0 for all keys
for w in words:
for c in w:
freq[c] += 1
```
And then compute their relative frequencies (a step we could skip, since dividing every count by the same total does not change the ranking).
```
total = sum(freq.values())
for k, v in freq.items():
freq[k] = freq[k]/total
```
Now I define a function to compute the score of a word. A word scores higher when the unique letters it contains appear more frequently across the dictionary's words.
```
def score(w):
s = 0
for letter in set(w):
s += freq[letter]
return s
sc = [ (w, score(w)) for w in words] # score each element of the list
sc.sort(key=lambda x:x[1], reverse=True) # sort the list according to score
sc[:5]
```
According to this metric, the best word to start the game with should be the first one. Since we have a sorted list, we take the first element (the first `[0]`) and then the first component of the pair (the second `[0]`), since we do not really care about the score itself.
```
sc[0][0]
```
'arose' is a good candidate. I would have expected 'arise' to rank higher. Furthermore, I like starting words that end with 's' (to learn whether the word we are looking for is a plural). However, so far we did not tell the score function that we like words ending in 's', so that is my fault really.
So my personal favourite word to start would be 'aries' but it is not in the game's dictionary. The program gets pretty close nonetheless.
Here I run out of steam a bit, but I still defined what a "not bad" word is (i.e. one that contains none of the letters we know do not appear in the solution), together with a check for letters we know are in the word.
The bad and good letters are just constants that one fills in by hand after each guess (see the example after the next code block).
```
def not_bad(w, bad):
return all([not(c in bad) for c in w])
def good(w, g):
return all([c in w for c in g])
bad_letters = ""
good_letters = ""
fsc = [w for (w, _) in sc if not_bad(w, bad_letters) and good(w, good_letters)]
```
Then I define a function to match words with letters in known positions. The spec is a five-character string with dots as placeholders for unknown letters and the known letters in their positions.
```
def match_spec(w, sp):
for i in range(0,len(sp)):
if sp[i] != ".":
if w[i] != sp[i]:
return False
return True
```
With that we can find the words that are not bad and that match the spec, sorted by their score. The dictionary contains many obscure words that Wordle does not accept, so try the less weird-looking ones first.
```
pattern = "....."
[w for w in fsc if match_spec(w, pattern)][:20]
```
# Threat to Coral Reef from Fishing Practices (bio.024.4)

Data source: http://www.wri.org/publication/reefs-risk-revisited
```
import numpy as np
import pandas as pd
import rasterio
import boto3
import requests as req
from matplotlib import pyplot as plt
%matplotlib inline
import os
import sys
import threading
```
If the data is already on S3, create a staging key and download it to the staging folder.
```
s3_bucket = "wri-public-data"
s3_folder = "resourcewatch/bio_024_4_coral_reef_threat_from_fishing_practices/"
s3_key_orig = s3_folder + "bio_024_4_coral_reef_threat_from_fishing_practices.tif"
s3_key_edit = s3_key_orig[0:-4] + "_edit.tif"
temp_folder = "/Users/nathansuberi/Desktop/WRI_Programming/RW_Data/temp/"
local_orig = temp_folder + "bio_024_4.tif"
local_edit = local_orig[:-4] + "_edit.tif"
s3 = boto3.resource('s3')
s3.meta.client.download_file(s3_bucket, s3_key_orig, local_orig)
#s3.meta.client.download_file(s3_bucket, s3_key_edit, local_edit)
```
<b>Regardless of any needed edits, upload original file</b>
<i>Upload tif to S3 folder</i>
http://boto3.readthedocs.io/en/latest/guide/s3-example-creating-buckets.html
<i>Monitor Progress of Upload</i>
http://boto3.readthedocs.io/en/latest/_modules/boto3/s3/transfer.html
https://boto3.readthedocs.io/en/latest/guide/s3.html#using-the-transfer-manager
```
s3 = boto3.client("s3")
class ProgressPercentage(object):
def __init__(self, filename):
self._filename = filename
self._size = float(os.path.getsize(filename))
self._seen_so_far = 0
self._lock = threading.Lock()
def __call__(self, bytes_amount):
# To simplify we'll assume this is hooked up
# to a single filename.
with self._lock:
self._seen_so_far += bytes_amount
percentage = (self._seen_so_far / self._size) * 100
sys.stdout.write(
"\r%s %s / %s (%.2f%%)" % (
self._filename, self._seen_so_far, self._size,
percentage))
sys.stdout.flush()
# Defined above:
# s3_bucket
# s3_key_orig
# s3_key_edit
# staging_key_orig
# staging_key_edit
s3.upload_file(local_orig, s3_bucket, s3_key_orig,
               Callback=ProgressPercentage(local_orig))
```
Check the compression and projection, and create edited versions of the file if necessary.
```
with rasterio.open(local_orig) as src:
print(src.profile)
local_edit_tb = local_edit[:-4] + "_tb.tif"
local_edit_t = local_edit[:-4] + "_t.tif"
local_edit_b = local_edit[:-4] + "_b.tif"
with rasterio.open(local_orig) as src:
data = src.read(1)
# Return lat info
south_lat = -90
north_lat = 90
# Return lon info
west_lon = -180
east_lon = 180
# Transformation function
transform = rasterio.transform.from_bounds(west_lon, south_lat, east_lon, north_lat, data.shape[1], data.shape[0])
# Profile
kwargs = src.profile
kwargs.update(
driver = 'GTiff',
dtype = rasterio.int16,
crs = 'EPSG:4326',
compress = 'lzw',
nodata = -9999,
transform = transform,
)
kwargs_tiled_blocked = dict(kwargs)
kwargs["tiled"] = False
kwargs_blocked = dict(kwargs)
kwargs.pop("blockxsize", None)
kwargs.pop("blockysize", None)
kwargs_no_tile_no_block = dict(kwargs)
kwargs["tiled"] = True
kwargs_tiled = dict(kwargs)
np.putmask(data, data==-32768, -9999)
with rasterio.open(local_edit_tb, 'w', **kwargs_tiled_blocked) as dst:
dst.write(data.astype(kwargs['dtype']), 1)
with rasterio.open(local_edit_t, 'w', **kwargs_tiled) as dst:
dst.write(data.astype(kwargs['dtype']), 1)
with rasterio.open(local_edit_b, 'w', **kwargs_blocked) as dst:
dst.write(data.astype(kwargs['dtype']), 1)
with rasterio.open(local_edit, 'w', **kwargs_no_tile_no_block) as dst:
dst.write(data.astype(kwargs['dtype']), 1)
local_edit
with rasterio.open(local_edit) as src:
print(src.profile)
windows = src.block_windows()
for ix, window in windows:
print(window)
break
with rasterio.open(local_edit_t) as src:
print(src.profile)
windows = src.block_windows()
for ix, window in windows:
print(window)
break
with rasterio.open(local_edit_b) as src:
print(src.profile)
windows = src.block_windows()
for ix, window in windows:
print(window)
break
with rasterio.open(local_edit_tb) as src:
print(src.profile)
windows = src.block_windows()
for ix, window in windows:
print(window)
break
```
Upload edited files to S3
```
# Defined above:
# s3_bucket
# s3_key_orig
# s3_key_edit
# staging_key_orig
# staging_key_edit
s3_key_edit_t = s3_key_edit[:-4] + "_t.tif"
s3_key_edit_b = s3_key_edit[:-4] + "_b.tif"
s3_key_edit_tb = s3_key_edit[:-4] + "_tb.tif"
s3.upload_file(local_edit, s3_bucket, s3_key_edit,
Callback=ProgressPercentage(local_edit))
s3.upload_file(local_edit_t, s3_bucket, s3_key_edit_t,
Callback=ProgressPercentage(local_edit_t))
s3.upload_file(local_edit_b, s3_bucket, s3_key_edit_b,
Callback=ProgressPercentage(local_edit_b))
s3.upload_file(local_edit_tb, s3_bucket, s3_key_edit_tb,
Callback=ProgressPercentage(local_edit_tb))
s3_key_edit
```
Layer definition
https://github.com/resource-watch/notebooks/blob/master/ResourceWatch/Api_definition/layer_definition.ipynb
Upload to server destination
```
# Too big for ArcGIS Online to upload using their web interface... 1 GB limit
```
```
# OPTIONAL: Load the "autoreload" extension so that code can change
%load_ext autoreload
# OPTIONAL: always reload modules so that as you change code in src, it gets loaded
%autoreload 2
import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
from icosphere_py.shapes import RegIcos
```
#### Create the base regular icosahedron. Vertices in Cartesian coords are stored in a dataframe in `.vertices`
```
icos = RegIcos(100)
icos.vertices
```
#### Create derived "icospheres" by repeated subdivision
My code is very slow, so even just 3 iterations take a while...
```
%time poly2 = icos.subdivide()
%time poly3 = poly2.subdivide()
%time poly4 = poly3.subdivide()
```
#### Plot them by drawing their edges
```
def noaxticks(ax):
ax.set_xticks([])
ax.set_yticks([])
ax.set_zticks([])
fig = plt.figure(figsize=(40,10))
ax = fig.add_subplot(141, projection='3d'); noaxticks(ax)
icos.drawedges(ax,'r',3)
ax = fig.add_subplot(142, projection='3d'); noaxticks(ax)
poly2.drawedges(ax,'g',2)
ax = fig.add_subplot(143, projection='3d'); noaxticks(ax)
poly3.drawedges(ax,'g',1)
ax = fig.add_subplot(144, projection='3d'); noaxticks(ax)
poly4.drawedges(ax,'g',0.5)
fig.tight_layout()
poly2.get_dualfaces()
import numpy as np
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
faces = poly2.get_dualfaces()
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111, projection='3d')
#poly.drawverts(ax,'ro')
#poly.drawedges(ax,'b',0.5)
facecolors = [plt.cm.jet(x) for x in np.random.rand(len(faces))]
patches = Poly3DCollection(faces, facecolors=facecolors)
ax.add_collection3d(patches)
ax.set_xlim3d(-100,100)
ax.set_ylim3d(-100,100)
ax.set_zlim3d(-100,100)
```
#### Get vertices in spherical coordinates
```
icos.get_verts_thetaphi()
```
#### Do several iterations and save them all to a file and reload
Build a large one and save the points.

The number of subdivisions, k, gives the number of vertices, V = 2 + 10*2^(2k):

| k | V |
|---|-------|
| 0 | 12 |
| 1 | 42 |
| 2 | 162 |
| 3 | 642 |
| 4 | 2562 |
| 5 | 10242 |
| 6 | 40962 |

(A quick code check of this formula follows below.)
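The table above can be reproduced with a short check of the formula:
```
# Check V = 2 + 10 * 2**(2k) for k = 0..6
for k in range(7):
    print(k, 2 + 10 * 2**(2 * k))
```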
```
refrad = 6371200
icos = RegIcos(refrad)
for k in range(1,7):
%time icos = icos.subdivide()
print("Done iteration", k, "of 6")
theta, phi = icos.get_verts_thetaphi()
icosverts = pd.DataFrame({'theta':theta, 'phi':phi})
icosverts.to_hdf('icosphere_data.h5', str(len(theta)))
# Now load with:
df = pd.read_hdf('icosphere_data.h5', '40962')
theta = df.theta
phi = df.phi
theta, phi
```
(text-intro)=
# Introduction to Text
This chapter covers how to use code to work with text as data, including opening files that contain text, changing and cleaning text, regular expressions, and vectorised operations on text.
It has benefitted from the [Python String Cook Book](https://mkaz.blog/code/python-string-format-cookbook/) and Jake VanderPlas' [Python Data Science Handbook](https://jakevdp.github.io/PythonDataScienceHandbook/03.10-working-with-strings.html).
## An aside on encodings
Before we get to the good stuff, we need to talk about string encodings. Whether you're using code or a text editor (Notepad, Word, Pages, Visual Studio Code), every bit of text that you see on a computer will have an encoding behind the scenes that tells the computer how to display the underlying data. There is no such thing as 'plain' text: all text on computers is the result of an encoding. Oftentimes, a computer programme (email reader, Word, whatever) will guess the encoding and show you what it thinks the text should look like. But it doesn't always know, or get it right: *that is what is happening when you get an email or open an file full of weird symbols and question marks*. If a computer doesn't know whether a particular string is encoded using UTF-8 or ASCII or ISO 8859-1 (Latin 1) or Windows 1252 (Western European), it simply cannot display it correctly and you get gibberish.
When it comes to encodings, there are just two things to remember: i) you should use UTF-8, the most common encoding of the Unicode standard and the de facto international default; ii) the Windows operating system tends to use either Latin 1 or Windows 1252 but (and this is good news) is moving to UTF-8.
[Unicode](https://www.unicode.org/) is a specification that aims to list every character used by human languages and give each character its own unique code. The Unicode specifications are continually revised and updated to add new languages and symbols.
Take special care when saving CSV files containing text on a Windows machine using Excel; unless you specify it, the text may not be saved in UTF-8. If you and your computer get confused enough about encodings and re-save a file with the wrong one, you could lose data.
Hopefully you'll never have to worry about string encodings. But if you *do* see weird symbols appearing in your text, at least you'll know that there's an encoding problem and will know where to start Googling. You can find a much more in-depth explanation of text encodings [here](https://kunststube.net/encoding/).
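To make the problem concrete, here is a small illustration of the same text encoded to bytes and then decoded with the right and the wrong encoding; the latter is exactly where those weird symbols come from:
```
text = "café"
utf8_bytes = text.encode("utf-8")
print(utf8_bytes)                    # b'caf\xc3\xa9'
print(utf8_bytes.decode("utf-8"))    # café (decoded correctly)
print(utf8_bytes.decode("latin-1"))  # café (decoded with the wrong encoding)
```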
## Strings
Note that there are many built-in functions for using strings in Python, you can find a comprehensive list [here](https://www.w3schools.com/python/python_ref_string.asp).
Strings are the basic data type for text in Python. They can be of any length. A string can be signalled by quote marks or double quote marks like so:
`'text'`
or
`"text"`
Style guides tend to prefer the latter but some coders (ahem!) have a bad habit of using the former. We can put this into a variable like so:
```
var = "banana"
```
Now, if we check the type of the variable:
```
type(var)
```
We see that it is `str`, which is short for string.
Strings in Python can be indexed, so we can get certain characters out by using square brackets to say which positions we would like.
```
var[:3]
```
The usual slicing tricks that apply to lists work for strings too, i.e. the positions you want to get can be retrieved using the `var[start:stop:step]` syntax. Here's an example of getting every other character from the string starting from the 2nd position.
```
var[1::2]
```
Note that strings, like tuples such as `(1, 2, 3)` but unlike lists such as `[1, 2, 3]`, are *immutable*. This means commands like `var[0] = "B"` will result in an error. If you want to change a single character, you will have to replace the entire string. In this example, the command to do that would be `var = "Banana"`.
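As a small demonstration of what happens if you try, and the usual workaround:
```
var = "banana"
try:
    var[0] = "B"
except TypeError as error:
    print(error)        # 'str' object does not support item assignment
var = "B" + var[1:]     # build a new string instead
print(var)
```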
Like lists, you can find the length of a string using `len`:
```
len(var)
```
The `+` operator concatenates two or more strings:
```
second_word = 'panther'
first_word = 'black'
print(first_word + " " + second_word)
```
Note that we added a space so that the noun made sense. Another way of achieving the same end that scales to many words more efficiently (if you have them in a list) is:
```
" ".join([first_word, second_word])
```
Three useful functions to know about are `upper`, `lower`, and `title`. Let's see what they do:
```
var = 'input TEXT'
var_list = [var.upper(), var.lower(), var.title()]
print(var_list)
```
```{admonition} Exercise
Reverse the string `"gnirts desrever a si sihT"` using indexing operations.
```
While we're using `print()`, it has a few tricks. If we have a list, we can print out entries with a given separator:
```
print(*var_list, sep="; and \n")
```
(We'll find out more about what '\n' does shortly.) To turn variables of other kinds into strings, use the `str()` function, for example
```
'A boolean is either ' + str(True) + ' or ' + str(False) + ', there are only ' + str(2) + ' options.'
```
In this example two boolean variables and one integer variable were converted to strings. `str` generally makes an intelligent guess at how you'd like to convert your non-string type variable into a string type. You can pass a variable or a literal value to `str`.
### f-strings
The example above is quite verbose. Another way of combining strings with variables is via *f-strings*. A simple f-string looks like this:
```
variable = 15.32399
print(f"You scored {variable}")
```
This is similar to calling `str` on the variable and using `+` for concatenation, but much shorter to write. You can add expressions to f-strings too:
```
print(f"You scored {variable**2}")
```
This also works with functions; after all `**2` is just a function with its own special syntax.
In this example, the score number that came out had a lot of (probably) uninteresting decimal places. So how do we polish the printed output? You can pass more information to the f-string to get the output formatted just the way you want. Let's say we wanted two decimal places and a sign (although you always write `+` in the formatting, the sign comes out as + or - depending on the value):
```
print(f"You scored {variable:+.2f}")
```
There are a whole range of formatting options for numbers as shown in the following table:
| Number | Format | Output | Description |
|------------ |--------- |------------ |----------------------------------------------- |
| 15.32347 | {:.2f} | 15.32 | Format float 2 decimal places |
| 15.32347 | {:+.2f} | +15.32 | Format float 2 decimal places with sign |
| -1 | {:+.2f} | -1.00 | Format float 2 decimal places with sign |
| 15.32347 | {:.0f} | 15 | Format float with no decimal places |
| 3 | {:0>2d} | 03 | Pad number with zeros (left padding, width 2) |
| 3 | {:*<4d} | 3*** | Pad number with *’s (right padding, width 4) |
| 13 | {:*<4d} | 13** | Pad number with *’s (right padding, width 4) |
| 1000000 | {:,} | 1,000,000 | Number format with comma separator |
| 0.25 | {:.1%} | 25.0% | Format percentage |
| 1000000000 | {:.2e} | 1.00e+09 | Exponent notation |
| 12 | {:10d} | 12 | Right aligned (default, width 10) |
| 12 | {:<10d} | 12 | Left aligned (width 10) |
| 12 | {:^10d} | 12 | Center aligned (width 10) |
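As a quick, runnable check of a few of the rows above (the numbers are arbitrary):
```
number = 15.32347
print(f"{number:.2f}")    # two decimal places
print(f"{number:+.2f}")   # two decimal places with sign
print(f"{0.25:.1%}")      # percentage
print(f"{1000000:,}")     # comma as thousands separator
print(f"{3:0>2d}")        # pad with zeros to width 2
```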
As well as using this page interactively through the Colab and Binder links at the top of the page, or downloading this page and using it on your own computer, you can play around with some of these options over at [this link](https://www.python-utils.com/).
### Special characters
Python has a string module that comes with some useful built-in strings and characters. For example
```
import string
string.punctuation
```
gives you all of the punctuation,
```
string.ascii_letters
```
returns all of the basic letters in the 'ASCII' encoding (with `.ascii_lowercase` and `.ascii_uppercase` variants), and
```
string.digits
```
gives you the numbers from 0 to 9. Finally, though less impressive visually, `string.whitespace` gives a string containing all of the different (there is more than one!) types of whitespace.
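Because whitespace characters are invisible when printed directly, wrapping the result in `repr` makes it easier to see what `string.whitespace` actually contains:
```
print(repr(string.whitespace))
```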
There are other special characters around; in fact, we already met the most famous of them: "\n" for new line. To actually print "\n" we have to 'escape' the backward slash by adding another backward slash:
```
print('Here is a \n new line')
print('Here is an \\n escaped new line ')
```
The table below shows the most important escape commands:
| Code | Result |
|------ |----------------- |
| `\'` | Single Quote (useful if using `'` for strings) |
| `\"` | Double Quote (useful if using `"` for strings) |
| `\\` | Backslash |
| `\n` | New Line |
| `\r` | Carriage Return |
| `\t` | Tab |
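Here's a small demonstration of a few of these escape codes together:
```
print('It\'s easy to escape a single quote')
print("Columns\tseparated\tby\ttabs")
print("A literal backslash: \\")
```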
## Cleaning Text
You often want to make changes to the text you're working with. In this section, we'll look at the various options to do this.
### Replacing sub-strings
A common text task is to replace a substring within a longer string. Let's say you have a string variable `var`. You can use `.replace(old_text, new_text)` to do this.
```
"Value is objective".replace("objective", "subjective")
```
As with any variable of a specific type (here, string), this would also work with variables:
```
text = "Value is objective"
old_substr = "objective"
new_substr = "subjective"
text.replace(old_substr, new_substr)
```
Note that `.replace` performs an exact replace and so is case-sensitive.
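For example, because the match is case-sensitive, the first replacement below does nothing while the second succeeds after lower-casing:
```
sentence = "Value is OBJECTIVE"
print(sentence.replace("objective", "subjective"))          # no change: case doesn't match
print(sentence.lower().replace("objective", "subjective"))  # matches after lower-casing
```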
### Replacing characters with translate
A character is an individual entry within a string, like the 'l' in 'equilibrium'. You can always count the number of characters in a string variable called `var` by using `len(var)`. A very fast method for replacing individual characters in a string is `str.translate`.
Replacing characters is extremely useful in certain situations, most commonly when you wish to remove all punctuation prior to doing other text analysis. You can use the built-in `string.punctuation` for this.
Let's see how to use it to remove all of the vowels from some text. With apologies to economist Lisa Cook, we'll use the abstract from {cite}`cook2011inventing` as the text we'll modify and we'll first create a dictionary of translations of vowels to nothing, i.e. `""`.
```
example_text = "Much recent work has focused on the influence of social capital on innovative outcomes. Little research has been done on disadvantaged groups who were often restricted from participation in social networks that provide information necessary for invention and innovation. Unique new data on African American inventors and patentees between 1843 and 1930 permit an empirical investigation of the relation between social capital and economic outcomes. I find that African Americans used both traditional, i.e., occupation-based, and nontraditional, i.e., civic, networks to maximize inventive output and that laws constraining social-capital formation are most negatively correlated with economically important inventive activity."
vowels = 'aeiou'
translation_dict = {x: "" for x in vowels}
translation_dict
```
Now we turn our dictionary into a string translator and apply it to our text:
```
translator = example_text.maketrans(translation_dict)
example_text.translate(translator)
```
```{admonition} Exercise
Use `translate` to replace all punctuation from the following sentence with spaces: "The well-known story I told at the conferences [about hypocondria] in Boston, New York, Philadelphia,...and Richmond went as follows: It amused people who knew Tommy to hear this; however, it distressed Suzi when Tommy (1982--2019) asked, \"How can I find out who yelled, 'Fire!' in the theater?\" and then didn't wait to hear Missy give the answer---'Dick Tracy.'"
```
Generally, `str.translate` is very fast at replacing individual characters in strings. But you can also do it using a list comprehension and a `join` of the resulting list, like so:
```
''.join([ch for ch in "Example. string. with- excess_ [punctuation]/," if ch not in string.punctuation])
```
### Slugifying
A special case of string cleaning occurs when you are given text with lots of non-standard characters, spaces, and other symbols, and what you want is a clean string suitable for a filename or column heading in a dataframe. Remember that it's best practice to have filenames that don't have spaces in them. Slugifying is the process of creating the latter from the former, and we can use the [**slugify**](https://github.com/un33k/python-slugify) package to do it.
Here are some examples of slugifying text:
```
from slugify import slugify
txt = 'the quick brown fox jumps over the lazy dog'
slugify(txt, stopwords=['the'])
```
In this very simple example, the words listed in the `stopwords=` keyword argument (a list) are removed and spaces are replaced by hyphens. Let's now see a more complicated example:
```
slugify('当我的信息改变时... àccêntæd tËXT ')
```
Slugify converts text to latin characters, while also removing accents and whitespace (of all kinds; the last whitespace character in the example above is a tab). There's also a `replacement=` keyword argument that will replace specific strings with other strings using a list of lists format, eg `replacement=[['old_text', 'new_text']]`.
### Splitting strings
If you want to split a string at a certain position, there are two quick ways to do it. The first is to use indexing methods, which work well if you know at which position you want to split text, eg
```
"This is a sentence and we will split it at character 18"[:18]
```
Next up we can use the built-in `split` function, which splits a string wherever a given sub-string occurs and returns a list of the pieces:
```
"This is a sentence. And another sentence. And a third sentence".split(".")
```
Note that the character used to split the string is removed from the resulting list of strings. Let's see an example with a string used for splitting instead of a single character:
```
"This is a sentence. And another sentence. And a third sentence".split("sentence")
```
A useful extra function to know about is `splitlines()`, which splits a string at line breaks and returns the split parts as a list.
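Here's a quick example of `splitlines()` in action:
```
"First line\nSecond line\nThird line".splitlines()
```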
### count and find
Let's do some simple counting of words within text using `str.count`. Let's use the first verse of Elizabeth Bishop's sestina 'A Miracle for Breakfast' for our text.
```
text = "At six o'clock we were waiting for coffee, \n waiting for coffee and the charitable crumb \n that was going to be served from a certain balcony \n --like kings of old, or like a miracle. \n It was still dark. One foot of the sun \n steadied itself on a long ripple in the river."
word = "coffee"
print(f'The word "{word}" appears {text.count(word)} times.')
```
Meanwhile, `find` returns the position where a particular word or character occurs.
```
text.find(word)
```
We can check this using the number we get and some string indexing:
```
text[text.find(word):text.find(word) + len(word)]
```
But this isn't the only place where the word 'coffee' appears. If we want to find the last occurrence, it's
```
text.rfind(word)
```
## Regular expressions
Regex, aka regular expressions, provide a way to both search and change text. Their advantages are that they are concise, they run very quickly, they can be ported across languages (they are definitely not just a Python thing!), and they are very powerful. The disadvantage is that they are confusing and take some getting used to!
You can live code regex in a couple of places. The first is within Visual Studio Code itself: click the magnifying glass in the left-hand side panel of options. When the search strip appears, you can put a search term in. To the right of the text entry box, there are three buttons, one of which is a period (full stop) followed by an asterisk. This option allows the Visual Studio Code text search function to accept regular expressions, and it will apply the regex to all of the text in your current workspace.
Another approach is to head over to [https://regex101.com/](https://regex101.com/) and begin typing your regular expression there. You will need to add some text in the box for the regex to be applied to.
Try either of the above with the regex `string \w+\s`. This matches any occurrence of the word 'string' that is followed by another word and then a whitespace. As an example, 'string cleaning ' would be picked up as a match when using this regex.
Within Python, the `re` library provides support for regular expressions. Let's try it:
```
import re
text = "It is true that string cleaning is a topic in this chapter. string editing is another."
re.findall("string \w+\s", text)
```
`re.findall` returns all matches. There are several useful search-like functions in `re` to be aware of that have a similar syntax of `re.function(regex, text)`. The table shows what they all do:
| Function | What it does | Example of use | Output for given value of `text` |
|--------------|-----------------------------------------------------------------|---------------------------------------------|-----------------------------------------------------------------------|
| `re.match`   | Declares whether there is a match at the beginning of a string. | `re.match("string \w+\s", text)`             | `None` (no match at the start of `text`)                               |
| `re.search`  | Declares whether there is a match anywhere in the string.       | `bool(re.search("string \w+\s", text))`      | `True`                                                                 |
| `re.findall` | Returns all matches. | `re.findall("string \w+\s" , text)` | `['string cleaning ', 'string editing ']` |
| `re.split` | Splits text wherever a match occurs. | `re.split("string \w+\s" , text)` | `['It is true that ', 'is a topic in this chapter. ', 'is another.']` |
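To see a couple of these on the same `text` as above, here's a short runnable example using `re.search` and `re.split`:
```
print(re.search(r"string \w+\s", text))
print(re.split(r"string \w+\s", text))
```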
Another really handy regex function is `re.sub`, which substitutes one bit of text for another if it finds a match. Here's an example:
```
new_text = 'new text here! '
re.sub("string \w+\s", new_text, text)
```
#### Special Characters
So far, we've only seen a very simple application of regex involving a vanilla word, `string`, the code for another word `\w+` and the code for a whitespace `\s`. Let's take a more comprehensive look at the regex special characters:
| Character | Description | Example Text | Example Regex | Example Match Text |
|-----------|--------------------------------------------------------|----------------------------------------|-----------------------|---------------------|
| \d | One Unicode digit in any script | "file_93 is open" | `file_\d\d` | "file_93" |
| \w | "word character": Unicode letter, digit, or underscore | "blah hello-word blah" | `\w-\w` | "hello-world" |
| \s | "whitespace character": any Unicode separator | "these are some words with spaces" | `words\swith\sspaces` | "words with spaces" |
| \D | Non-digit character (opposite of \d) | "ABC 10323982328" | `\D\D\D` | "ABC" |
| \W | Non-word character (opposite of \w) | "Once upon a time *" | `\W` | "*" |
| \S | Non-whitespace character (opposite of \s) | "y " | `\S` | "y" |
| \Z        | End of string                                            | "End of a string"                       | `\w+\Z`               | "string"            |
| . | Match any character except the newline | "ab=def" | `ab.def` | "ab=def" |
Note that whitespace characters include newlines, `\n`, and tabs, `\t`.
#### Quantifiers
As well as these special characters, there are quantifiers, which ask for more than one occurrence of a character. For example, in the above, `\w\w` asked for two word characters, while `\d\d` asks for two digits. The next table shows all of the quantifiers.
| Quantifier | Role | Example Text | Example Regex | Example Match |
|------------|--------------------------------------------|----------------------------|---------------|--------------------|
| {m} | Exactly m repetitions | "936 and 42 are the codes" | `\d{3}` | "936" |
| {m,n} | From m (default 0) to n (default infinity) | "Words up to four letters" | `\b\w{1,4}\b` | "up", "to", "four" |
| * | 0 or more. Same as {,} | "42 is the code" | `\d*\s` | "42" |
| + | 1 or more. Same as {1,} | "4 323 hello" | `\d+` | "4", "323" |
| ? | Optional, so 0 or 1. Same as {,1}. | "4 323 hello" | `\d?\s` | "4" |
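A few of the table's examples, run directly as a quick check:
```
print(re.findall(r"\d{3}", "936 and 42 are the codes"))
print(re.findall(r"\b\w{1,4}\b", "Words up to four letters"))
print(re.findall(r"\d+", "4 323 hello"))
```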
```{admonition} Exercise
Find a single regex that will pick out only the percentage numbers from both "Inflation in year 3 was 2 percent" and "Interest rates were as high as 12 percent".
```
#### Metacharacters
Now, as well as special characters and quantifiers, we can have meta-character matches. These are not characters *per se*, but starts, ends, and other bits of words. For example, `\b` matches strings at a word (`\w+`) boundary, so if we took the text "Three letter words only are captured" and ran `\b\w\w\w\b` we would return "are". `\B` matches strings not at word (`\w+`) boundaries so the text "Bricks" with `\B\w\w\B` applied would yield "ri". The next table contains some useful metacharacters.
| Metacharacter Sequence | Meaning | Example Regex | Example Match |
|------------------------|-------------------------------|--------------------|------------------------------------------------------------------------------|
| ^ | Start of string or line | `^abc` | "abc" (appearing at start of string or line) |
| $ | End of string, or end of line | `xyz$` | "xyz" (appearing at end of string or line) |
| \b | Match string at word (\w+) boundary | `ing\b` | "match**ing**" (matches ing if it is at the end of a word) |
| \B | Match string not at word (\w+) boundary | `\Bing\B` | "st**ing**er" (matches ing if it is not at the beginning or end of the word) |
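Here are the two examples from the text above, run with `re.search` (which returns the first match):
```
print(re.search(r"\b\w\w\w\b", "Three letter words only are captured").group())
print(re.search(r"\B\w\w\B", "Bricks").group())
```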
Because so many characters have special meaning in regex, if you want to look for, say, a dollar sign or a dot, you need to escape the character first with a backward slash. So `\${1}\d+` would look for a single dollar sign followed by some digits and would pick up the '\$50' in 'she made \$50 dollars'.
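And here's that escaped-dollar regex in action:
```
re.findall(r"\${1}\d+", "she made $50 dollars")
```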
```{admonition} Exercise
Find the regex that will pick out only the first instance of the word 'money' and any word subsequent to 'money' from the following: "money supply has grown considerably. money demand has not kept up.".
```
#### Ranges
You probably think you're done with regex, but not so fast! There are more metacharacters to come. This time, they will represent *ranges* of characters.
| Metacharacter Sequence | Description | Example Expression | Example Match |
|------------------------|---------------------------------------------------------|--------------------|-----------------------------------|
| \[characters\] | The characters inside the brackets are part of a matching-character set | `[abcd]` | a, b, c, d, abcd |
| \[^...\] | Characters inside brackets are a non-matching set; a character not inside is a matching character. | `[^abcd]` | Any occurrence of any character EXCEPT a, b, c, d. |
| \[character-character\] | Any character in the range between two characters (inclusive) is part of the set | `[a-z]` | Any lowercase letter |
| \[^character\] | Any character that is not the listed character | `[^A]` | Any character EXCEPT capital A |
Ranges have two more neat tricks. The first is that they can be concatenated. For example, `[a-c1-5]` would match any of a, b, c, 1, 2, 3, 4, 5. They can also be modified with a quantifier, so `[a-c0-2]{2}` would match "a0" and "ab".
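Both tricks, run as a quick check:
```
print(re.findall(r"[a-c1-5]", "abcdef 123456"))
print(re.findall(r"[a-c0-2]{2}", "a0 and ab"))
```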
#### Greedy versus lazy regexes
Buckle up, because this one is a bit tricky to grasp. Adding a `?` after a regex will make it go from being 'greedy' to being 'lazy'. Greedy means that you will match the longest possible string that hits the condition. Lazy will mean that you get the shortest possible string matching the condition. It's easiest to demonstrate with an example:
```
test_string = "stackoverflow"
greedy_regex = "s.*o"
lazy_regex = "s.*?o"
print(f'The greedy match is {re.findall(greedy_regex, test_string)[0]}')
print(f'The lazy match is {re.findall(lazy_regex, test_string)[0]}')
```
In the former (greedy) case, we get from an 's' all the way to the last 'o' within the same word. In the latter (lazy) case we just get everything between the start and first occurrence of an 'o'.
#### Matches versus capture groups
There is often a difference between what you might want to match and what you actually want to *grab* with your regex. Let's say, for example, we're parsing some text and we want any numbers that follow the format '$xx.xx', where the 'x' are numbers but we don't want the dollar sign. To do this, we can create a *capture group* using brackets. Here's an example:
```
text = "Product 1 was $45.34, while product 2 came in at $50.00 however it was assessed that the $4.66 difference did not make up for the higher quality of product 2."
re.findall("\$(\d{2}.\d{2})", text)
```
Let's pick apart the regex here. First, we asked for a literal dollar sign using `\$`. Next, we opened up a capture group with `(`. Then we said only give us the numbers that are 2 digits, a period, and another 2 digits (thus excluding \$4.66). Finally, we closed the capture group with `)`.
So while we specify a *match* using the full regex, running the regex only returns the *capture group*.
Let's see a more complicated example.
```
sal_r_per = r"\b([0-9]{1,6}(?:\.)?(?:[0-9]{1,2})?(?:\s?-\s?|\s?to\s?)[0-9]{1,6}(?:\.)?(?:[0-9]{1,2})?)(?:\s?per)\b"
text = "This job pays gbp 30500.00 to 35000 per year. Apply at number 100 per the below address."
re.findall(sal_r_per, text)
```
In this case, the regex first looks for up to 6 digits, then optionally a period, then optionally another couple of digits, then either a dash or 'to' using the '|' operator (which means or), followed by a similar number, followed by 'per'.
But the capture group is only the subset of the match that is the number range; we discard most of the rest. Note also that other numbers, even if they are followed by 'per', are not picked up. `(?:)` begins a *non-capture group*, which matches but does not capture, so that although `(?:\s?per)` looks for " per" after a salary (with the space optional due to the second `?`), it does not get returned.
```{admonition} Exercise
Find a regex that captures the wage range from "Salary Pay in range $9.00 - $12.02 but you must start at 8.00 - 8.30 every morning.".
```
This has been a whirlwind tour of regexes. Although regex looks a lot like gobbledygook, it is a really useful tool to be able to deploy for more complex string cleaning and extraction tasks.
## Scaling up from a single string to a corpus
For this section, it's useful to be familiar with the **pandas** package, which is covered in the [Data Analysis Quickstart](data-quickstart) and [Working with Data](working-with-data) sections. This section will closely follow the treatment by Jake VanderPlas.
We've seen how to work with individual strings. But often we want to work with a group of strings, otherwise known as a corpus, that is a collection of texts. It could be a collection of words, sentences, paragraphs, or some domain-based grouping (eg job descriptions).
Fortunately, many of the methods that we have seen deployed on a single string can be straightforwardly scaled up to hundreds, thousands, or millions of strings using **pandas** or other tools. This scaling up is achieved via *vectorisation*, in analogy with going from a single value (a scalar) to multiple values in a list (a vector).
As a very minimal example, here is capitalisation of names vectorised using a list comprehension:
```
[name.capitalize() for name in ['ada', 'adam', 'elinor', 'grace', 'jean']]
```
A **pandas** series can be used in place of a list. Let's create the series first:
```
import pandas as pd
dfs = pd.Series(['ada lovelace', 'adam smith', 'elinor ostrom', 'grace hopper', 'jean bartik'], dtype="string")
dfs
```
Now we use the `series.str.function()` syntax to change the text series:
```
dfs.str.title()
```
If we had a dataframe and not a series, the syntax would change to refer just to the column of interest like so:
```
df = pd.DataFrame(dfs, columns=['names'])
df['names'].str.title()
```
The table below shows a non-exhaustive list of the string methods that are available in **pandas**.
| Function (preceded by `.str.`) | What it does |
|-----------------------------|-------------------------|
| `len()` | Length of string. |
| `lower()` | Put string in lower case. |
| `upper()` | Put string in upper case. |
| `capitalize()` | Put string in leading upper case. |
| `swapcase()` | Swap cases in a string. |
| `translate()` | Returns a copy of the string in which each character has been mapped through a given translation table. |
| `ljust()` | Left pad a string (default is to pad with spaces) |
| `rjust()` | Right pad a string (default is to pad with spaces) |
| `center()` | Pad such that string appears in centre (default is to pad with spaces) |
| `zfill()` | Pad with zeros |
| `strip()` | Strip out leading and trailing whitespace |
| `rstrip()` | Strip out trailing whitespace |
| `lstrip()` | Strip out leading whitespace |
| `find()` | Return the lowest index in the data where a substring appears |
| `split()` | Split the string using a passed substring as the delimiter |
| `isupper()` | Check whether string is upper case |
| `isdigit()` | Check whether string is composed of digits |
| `islower()` | Check whether string is lower case |
| `startswith()` | Check whether string starts with a given sub-string |
Regular expressions can also be scaled up with **pandas**. The below table shows vectorised regular expressions.
| Function | What it does |
|-|----------------------------------|
| `match()` | Call `re.match()` on each element, returning a boolean. |
| `extract()` | Call `re.match()` on each element, returning matched groups as strings. |
| `findall()` | Call `re.findall()` on each element |
| `replace()` | Replace occurrences of pattern with some other string |
| `contains()` | Call `re.search()` on each element, returning a boolean |
| `count()` | Count occurrences of pattern |
| `split()` | Equivalent to `str.split()`, but accepts regexes |
| `rsplit()` | Equivalent to `str.rsplit()`, but accepts regexes |
Let's see a couple of these in action. First, splitting on a given sub-string:
```
df['names'].str.split(' ')
```
It's fairly common that you want to split out strings and save the results to new columns in your dataframe. You can specify a (max) number of splits via the `n=` keyword argument, and you can expand the results into separate columns using `expand=True`:
```
df['names'].str.split(' ', n=2, expand=True)
```
```{admonition} Exercise
Using vectorised operations, create a new column with the index position where the first vowel occurs for each row in the `names` column.
```
Here's an example of using a regex function with **pandas**:
```
df['names'].str.extract('(\w+)', expand=False)
```
There are a few more vectorised string operations that are useful.
| Method | Description |
|-|-|
| `get()` | Index each element |
| `slice()` | Slice each element |
| `slice_replace()` | Replace slice in each element with passed value |
| `cat()` | Concatenate strings |
| `repeat()` | Repeat values |
| `normalize()` | Return Unicode form of string |
| `pad()` | Add whitespace to left, right, or both sides of strings |
| `wrap()` | Split long strings into lines with length less than a given width |
| `join()` | Join strings in each element of the Series with passed separator |
| `get_dummies()` | extract dummy variables as a dataframe |
The `get()` and `slice()` methods give access to elements of the lists returned by `split()`. Here's an example that combines `split()` and `get()`:
```
df['names'].str.split().str.get(-1)
```
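For instance, `slice()` and `cat()` work like this on the same `names` column:
```
print(df['names'].str.slice(0, 4))
print(df['names'].str.cat(sep='; '))
```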
We already saw `get_dummies()` in the [Regression](regression) chapter, but it's worth revisiting it here with strings. If we have a column with tags split by a symbol, we can use this function to split it out. For example, let's create a dataframe with a single column that mixes subject and nationality tags:
```
df = pd.DataFrame({'names': ['ada lovelace', 'adam smith', 'elinor ostrom', 'grace hopper', 'jean bartik'], 'tags': ['uk; cs', 'uk; econ', 'usa; econ', 'usa; cs', 'usa; cs']})
df
```
If we now use `str.get_dummies` and split on `;` we can get a dataframe of dummies.
```
df['tags'].str.get_dummies(';')
```
## Reading and Writing Text
### Text file
If you have just a plain text file, you can read it in like so:
```python
fname = 'book.txt'
with open(fname, encoding='utf-8') as f:
text_of_book = f.read()
```
You can also read a text file directly into a **pandas** dataframe using
```python
df = pd.read_csv('book.txt', delimiter = "\n")
```
In the above, the delimiter for different rows of the dataframe is set as "\n", which means new line, but you could use whatever delimiter you prefer.
```{admonition} Exercise
Download the file 'smith_won.txt' from this book's github repository using this [link](https://github.com/aeturrell/coding-for-economists/blob/main/data/smith_won.txt) (use right-click and save as). Then read the text in using **pandas**.
```
### CSV file
CSV files are already split into rows. By far the easiest way to read in csv files is using **pandas**,
```python
df = pd.read_csv('book.csv')
```
Remember that **pandas** can read many other file types too.
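For example, assuming you have files in these formats to hand (the filenames below are just placeholders, and the Excel and parquet readers need an extra engine package such as **openpyxl** or **pyarrow** installed), the corresponding readers look like this:
```python
df_xlsx = pd.read_excel('book.xlsx')           # Excel workbook
df_json = pd.read_json('book.json')            # JSON
df_parquet = pd.read_parquet('book.parquet')   # parquet
```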
<p><img alt="DataOwl" width=150 src="http://gwsolutions.cl/Images/dataowl.png", align="left", hspace=0, vspace=5></p>
<h1 align="center">Numpy and Pandas</h1>
<h4 align="center">Arrays and Dataframes</h4>
<pre><div align="center"> The idea of this notebook is to serve as an introduction to data preprocessing.</div>
<div align="right"> In terms of code and structure, this notebook is based on the BootCamp
 <a href="https://github.com/Shekhar-rv/Python-for-Data-Science-and-Machine-Learning-Bootcamp">Python for Data Science and Machine Learning</a>.
</div></pre>
## What is Numpy?
<p><img alt="Numpy" width=70 src="https://user-images.githubusercontent.com/50221806/81123350-b7c5bb00-8ee7-11ea-9bfc-88f676c80315.png", align="right", hspace=0, vspace=5></p>
NumPy is an extension of Python that adds support for vectors and matrices, providing a library of high-level mathematical functions for operating on those vectors and matrices.
To install the library, you can use the **pip** command or the **conda** command at the command prompt:
```cmd
conda install numpy
pip install numpy
```
```
# Import the library
import numpy as np
```
Numpy lets us create and work with vectors (called arrays) and matrices in a simple way, and it focuses mainly on adding mathematical functions, unlike lists, which are more complex structures. There are several ways to build an array; the simplest is to create one from a list, but we can also build one from a matrix or with predefined commands.
```
# Create an array from a list
my_list = [1,2,3,4,5]
my_array = np.array(my_list)
my_array
# Create an array from a list of lists
my_list_of_lists = [ [1,2,3] , [4,5,6] , [7,8,9] ]
my_array_2 = np.array(my_list_of_lists)
my_array_2
# Define an ordered array (arange function)
arr = np.arange(1,11)
arr
# Define an array of zeros (zeros function)
arr_0 = np.zeros(10)
arr_0
# Define an array of ones (ones function)
arr_1 = np.ones(10)
arr_1
# Define an equally spaced array
arr = np.linspace(0,1,15)
arr
# Define the identity matrix
arr = np.eye(4)
arr
```
Besides creating arrays, there are also ways of indexing them and operations built around them; however, we won't go into depth on those here, since these days this library is not used all that heavily on its own, so we will move on to Pandas.
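As a brief sketch only (not covered in depth here), indexing and elementwise operations on arrays look like this:
```
# Quick sketch: slicing and elementwise arithmetic on an array
arr = np.arange(1, 11)
print(arr[2:5])    # elements at positions 2, 3 and 4
print(arr * 2)     # elementwise multiplication
print(arr + arr)   # elementwise addition
```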
## What is Pandas?
<p><img alt="Pandas" width=150 src="https://zhihuicao.files.wordpress.com/2016/05/pandas.png?w=399", align="right", hspace=0, vspace=5></p>
Pandas is a software library written as an extension of NumPy for data manipulation and analysis in the Python programming language. In particular, it offers data structures and operations for manipulating numerical tables and time series.
To install the library, you can use the **pip** command or the **conda** command at the command prompt:
```cmd
conda install pandas
pip install pandas
```
```
# Import the library
import pandas as pd
```
<h3>Sections</h3>
<div class="alert alert-danger" role="alert">
<ol>
<li><a href="#section1"> Series</a></li>
<li><a href="#section2"> DataFrames </a></li>
<li><a href="#section3"> Datos faltantes (Missing Data) </a></li>
<li><a href="#section4"> Agrupaciones (Groupby)</a></li>
<li><a href="#section5"> Fusiones (Merge), Uniones (Join) y Concatenaciones</a></li>
</ol>
</div>
<hr>
<a id="section1"></a>
<h3>1. Series</h3>
<hr>
The first main data type we will learn about in pandas is the **Series** data type. Let's import Pandas and explore the Series object.
<p><img alt="Dataframe" width=150 src="https://miro.medium.com/max/1284/1*iI8ltITQlsrX7Mc6E-OKKg.png", align="center", hspace=0, vspace=5></p>
A Series is very similar to a NumPy array (in fact, it is built on top of the NumPy array object). What differentiates a Series from an array is that a Series can have axis labels, meaning it can be indexed by a label instead of just a numeric position. It also doesn't need to hold numeric data; it can hold any arbitrary Python object.
A Series can be created from an array, a list, or a dictionary.
```
# Create a Series
labels = ['a','b','c']
my_list = [10,20,30]
arr = np.array([10,20,30])
d = { 'a' : 10 , 'b' : 20 , 'c' : 30 }
Serie = pd.Series( d )
Serie
```
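The other objects defined above work too; for example, here is the same Series built from the list together with explicit index labels (the array version is identical):
```
# Series from a list, with explicit index labels
pd.Series(my_list, index=labels)
```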
Indices are used in the same way as in lists, dictionaries, or arrays:
```
# Call elements of the series
Serie[ ['b','c'] ]
```
When operating on two Series we have to be careful, since they are combined according to their labels, not the positions they occupy:
```
# Simple operation between series
ser_1 = pd.Series([1,2,3,4],index = ['USA', 'Germany','USSR', 'Japan'])
ser_2 = pd.Series([1,2,5,4],index = ['USA', 'Germany','Italy', 'Japan'])
ser_1+ser_2
```
<a id="section2"></a>
<h3>2. Dataframe</h3>
<hr>
DataFrames are the workhorse of pandas and are directly inspired by the R programming language. We can think of a DataFrame as a matrix where each column is a Series.
<p><img alt="Dataframe" width=450 src="https://vrzkj25a871bpq7t1ugcgmn9-wpengine.netdna-ssl.com/wp-content/uploads/2022/01/pandas-dataframe-integer-location.png", align="center", hspace=0, vspace=5></p>
The columns represent variables and the rows represent observations, which are sometimes indexed.
```
# Create our first dataframe
arr = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12]])
df_1 = pd.DataFrame( data = arr )
df_2 = pd.DataFrame( data = arr , index = ["d_1","d_2","d_3"] , columns = ["V_1","V_2","V_3","V_4"])
df_2
```
#### Reading and saving files
We can not only create dataframes from objects created in Python; we can also read data from our own database or from an external directory.
```
# Reading a .csv file
df_covid19 = pd.read_csv("cases_country.csv")
df_covid19
# Reading a file from a URL
df_covid19_url = pd.read_csv("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv")
df_covid19_url
# Saving a file locally (index=False avoids writing the row index as an extra column)
df_covid19_url.to_csv("df_covid19.csv", index=False)
```
#### Selecting columns
Since we are now working with matrices, we may want to select both by variables (columns) and by observations (rows), so there are different forms of indexing; the first we will look at is indexing by column.
If we index a single column we get a Series as the result, but if we index with a list of columns we get a DataFrame.
```
# Selecting 1 column
df_covid19["Country_Region"]
# Selecting 4 columns
df = df_covid19[ ["Country_Region","Confirmed","Deaths","Recovered"] ]
df
```
#### Working with variables
* To create a new variable (column), select it as if it already existed and assign it a value.
* To delete a variable, use the **.drop( )** function.
* We can index with boolean conditions to filter rows based on the values of the variables.
```
# Creating a new variable
df["Active"] = df["Confirmed"]-df["Deaths"]-df["Recovered"]
# Dropping the variable
df = df.drop("Active" , axis = 1)
df
# Filtering rows with a condition on a variable
df_high_deaths = df[ df["Deaths"]>10000 ]
df_high_deaths
```
#### Indexing by rows
To begin, we can set one of our columns as the index of the DataFrame; for this we use the **.set_index( )** function.
```
# Choosing an index column
df = df.set_index("Country_Region")
df
```
To select one or more rows by label, we use the **.loc[ ]** indexer.
```
# Selecting particular rows by label
df.loc[ ["Argentina","Chile"] ]
```
<a id="section3"></a>
<h3>3. Missing Data</h3>
<hr>
Datasets usually come with missing values in some of their columns, and this can cause problems when working with the data and drawing conclusions from it.
We need to know what to do with the missing data, and when to do it, so that it does not get in the way when fitting a model or extracting insights.
There are three usual ways of dealing with missing data:
1. The first is simply to drop the rows with missing data and avoid the risk of assigning a wrong value. The downside is that we may be discarding relevant information from our sample.
2. The second is to fill in some chosen value, such as the mean, the median, or another value we consider appropriate. Here we may be over-centralizing our data, and when the dispersion is high this method is not very effective.
3. Another method is to use models to predict or replace the missing values, although this can take more time and resources (a small sketch of this idea appears right after the code block below).
<p><img alt="Multiple_Imputation" width=450 src="https://media.springernature.com/lw785/springer-static/image/chp%3A10.1007%2F978-3-319-43742-2_13/MediaObjects/339333_1_En_13_Fig3_HTML.gif", align="center", hspace=0, vspace=5></p>
In Pandas, the functions that help with this are **.dropna( )** to drop and **.fillna( )** to fill.
```
# Consider this DataFrame
df = pd.DataFrame({'v_1':[1,2,np.nan],
'v_2':[5,np.nan,np.nan],
'v_3':[1,2,3]})
df
# Dropping rows with NA
df.dropna()
# Dropping columns with NA
df.dropna(axis=1)
# Keeping rows that have at least n non-missing values (thresh)
n = 2
df.dropna( thresh=n )
# The same can be done for columns by adding axis=1 to the arguments
# Filling each column with its mean
for i in df.columns:
df[i] = df[i].fillna( value = df[i].mean() )
df
```
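As a minimal sketch of the third (model-based) strategy mentioned above, assuming scikit-learn is available in the environment (it is not otherwise used by this notebook), a k-nearest-neighbours imputer can estimate each missing entry from the most similar rows:
```
# Hypothetical example: model-based imputation with scikit-learn's KNNImputer
from sklearn.impute import KNNImputer

df_nan = pd.DataFrame({'v_1':[1,2,np.nan],
                       'v_2':[5,np.nan,np.nan],
                       'v_3':[1,2,3]})
imputer = KNNImputer(n_neighbors=2)
# fit_transform returns a NumPy array with the missing values estimated from neighbours
pd.DataFrame(imputer.fit_transform(df_nan), columns=df_nan.columns)
```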
<a id="section4"></a>
<h3>4. Grouping (Groupby)</h3>
<hr>
To split the data into groups based on repeated values of one of its variables, we use the **.groupby( )** function.
```
# Grouping the dataset by country
df = df_covid19_url.dropna()
countrys = df.groupby("Country/Region")
countrys
```
Once the data are grouped by some variable we can apply several operations, among them:
* Summing with **.sum( )**
* Computing the mean with **.mean( )**
* Computing the minimum and maximum with **.min( )** and **.max( )** respectively
* Counting how many observations there are with **.count( )**
* Producing descriptive statistics with **.describe( )**
In addition, the **.get_group( )** function returns the DataFrame of a specific group. A short sketch of a few of these aggregations follows the code block below.
```
# Counting the observations in each group
countrys.count()
# We can obtain the DataFrame of a particular group
countrys.get_group("Australia").describe()
df.columns
```
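To keep things self-contained (and independent of the COVID dataset loaded above), here is a small hypothetical table illustrating a few of the aggregations listed earlier:
```
# Toy data, purely for illustration
toy = pd.DataFrame({'Country': ['A', 'A', 'B', 'B'],
                    'Confirmed': [10, 20, 5, 15],
                    'Deaths': [1, 2, 0, 3]})
groups = toy.groupby('Country')
groups.sum()    # totals per group
groups.mean()   # averages per group
# Several aggregations at once on a single column
groups['Confirmed'].agg(['min', 'max', 'count'])
```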
<a id="section5"></a>
<h3>5. Merge, Join and Concatenation</h3>
<hr>
The following operations let us combine two DataFrames in different ways.
#### Join
Of the three DataFrame operations, **join()** is the simplest and the one that gives us the least control over the combination. It combines all the columns of the two tables, renaming the columns they have in common with the given *lsuffix* and *rsuffix*.
There are also several types of join, which are selected through the *how* argument of the function.
<p><img alt="Join" width=750 src="https://miro.medium.com/max/1400/1*-I_1qa5TIiB5eNYxnodfAA.png", align="center", hspace=0, vspace=5></p>
```Python
DataFrame.join(other, how='left', lsuffix='', rsuffix='')
```
```
# Defining the DataFrames for join()
df_1 = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
'B': ['B0', 'B1', 'B2']},
index=['K0', 'K1', 'K2'])
df_2 = pd.DataFrame({'B': ['B0', 'B1', 'B2'],
'C': ['C0', 'C2', 'C3'],
'D': ['D0', 'D2', 'D3']},
index=['K0', 'K2', 'K3'])
# Displaying the first DataFrame
df_1
# Displaying the second DataFrame
df_2
# Performing the join()
df_join = df_1.join(df_2, how='outer', lsuffix='_1', rsuffix='_2')
df_join
```
#### Merge
Similar to join, **merge()** also combines all the columns of two tables, with the common columns renamed using the given suffixes. However, merge provides three flexible ways to control how the rows are aligned:
1. The first is to use *on = COLUMN NAME*; the given column must be a column common to both tables.
2. The second is to use *left_on = COLUMN NAME* and *right_on = COLUMN NAME*, which aligns the two tables using two differently named columns.
3. The third is to use *left_index = True* and *right_index = True*, so the two tables are aligned on their indexes. A small illustrative example follows the signature below.
```Python
pd.merge(left, right, how='inner', on=None, left_on=None, right_on=None,
left_index=False, right_index=False, suffixes=('_x', '_y'))
```
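As a brief sketch of the three alignment options (using small hypothetical tables, not tied to any dataset above):
```
# Hypothetical DataFrames to illustrate the three ways of aligning rows in merge()
left = pd.DataFrame({'key': ['K0', 'K1', 'K2'], 'A': ['A0', 'A1', 'A2']})
right = pd.DataFrame({'key': ['K0', 'K2', 'K3'], 'B': ['B0', 'B2', 'B3']})

# 1. Align on a column common to both tables
pd.merge(left, right, how='inner', on='key')

# 2. Align on two differently named columns
right2 = right.rename(columns={'key': 'id'})
pd.merge(left, right2, how='outer', left_on='key', right_on='id')

# 3. Align on the index of both tables
pd.merge(left.set_index('key'), right.set_index('key'),
         how='left', left_index=True, right_index=True)
```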
#### Concatenation
Unlike join() and merge(), which operate on columns by default, **concat()** can also combine DataFrames along rows. Its main argument is a list of DataFrames; a short example follows the signature block below.
* *Axis = 1*
<p><img alt="Join" width=450 src="https://miro.medium.com/max/1400/1*LoUq8uZrbg_tO3t4tqZfqg.png", align="center", hspace=0, vspace=5></p>
* *Axis = 0*
<p><img alt="Join" width=450 src="https://miro.medium.com/max/1400/1*bQ3Bl6_N_V4er6XZxVxIZA.png", align="center", hspace=0, vspace=5></p>
```Python
pd.concat(objs, axis=0, join='outer', ignore_index=False, keys=None)
```
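A minimal sketch, reusing df_1 and df_2 from the join() example above:
```
# Stacking rows (axis=0, the default); columns are matched by name
pd.concat([df_1, df_2], axis=0, sort=False)
# Placing the tables side by side, aligned on the index (axis=1)
pd.concat([df_1, df_2], axis=1)
```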
#### Keep experimenting with different combinations of the function arguments!
**Merging guide:** <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html">Click here</a>
```
import numpy as np
import scipy as sp
import pylab as pl
import mxnet as mx
from mxnet import gluon
from gmm_base import *
ndims = 2
sample_size = int(1e4)
num_clusters = 7
epochs = 1000
gnd_mu_ = np.random.RandomState(0).rand(num_clusters, ndims)
gnd_cov_L_ = np.random.RandomState(1).randn(num_clusters, ndims, ndims) * 0.03
z = (np.random.RandomState(2).rand(sample_size) * num_clusters).astype(int)
x = gnd_mu_[z] + (gnd_cov_L_[z] @
np.random.RandomState(3).randn(sample_size, ndims)[:,:,None]
).squeeze(axis=-1)
def plot_cov(mean, cov, **kw):
    # Draw the principal axes of the covariance: SVD directions scaled by the square roots of the singular values
vec, v2, _ = np.linalg.svd(cov)
val = v2**0.5
for r in range(len(val)):
handle = pl.plot(*zip(
mean - vec[:,r]*val[r],
mean + vec[:,r]*val[r]), **kw)
return handle
def Gaussian_log_pdf(ex, g_mean, g_kR):
model = GMMModel(ex, num_clusters=1, mu_=g_mean[None,:], kR_=g_kR[None,:,:])
return model(mx.nd.array(ex))[0].asnumpy()
def mixture_full_log_pdf(x, mu_, kR_):
model = GMMModel(x, num_clusters=mu_.shape[0], mu_=mu_, kR_=kR_)
return model(mx.nd.array(x))[0].asnumpy()
g_mean = x.mean(axis=0)
g_cov = ((x-g_mean[None,:])[:,:,None] @ (x-g_mean[None,:])[:,None,:]).mean(axis=0)
handle0, = pl.plot(x[:,0], x[:,1], '.', zorder=-1, label='empirical')
for c in range(num_clusters):
handle1, = plot_cov(gnd_mu_[c], gnd_cov_L_[c] @ gnd_cov_L_[c].T,
color='k', lw=4, label='gnd')
handle2, = plot_cov(g_mean, g_cov, color='C1', lw=4, label='Gaussian')
pl.legend([handle0, handle1, handle2], [
'empirical',
'gnd loglik={:.3f}'.format(mixture_full_log_pdf(
x, gnd_mu_, np.linalg.inv(gnd_cov_L_)).mean()),
'Gaussian loglik={:.3f}'.format(Gaussian_log_pdf(
x, g_mean, np.linalg.inv(np.linalg.cholesky(g_cov))).mean())
], loc='upper right')
pl.axis('square')
```
# full rank
```
model = GMMModel(x, num_clusters=num_clusters)
trainer = GMMTrainer(model)
for t, epoch in elapsed(range(100)):
trainer(x)
if np.allclose(np.log2(epoch+1), np.round(np.log2(epoch+1))) or epoch+1==100:
loglik = model(mx.nd.array(x))[0].mean().asscalar()
print(f'{epoch+1} loglik={loglik:.3f} elapsed={t:.1f}s')
mu_ = model.mu_.data().asnumpy()
kR_ = model.kR_.data().asnumpy()
cov_ = np.linalg.inv(kR_.swapaxes(1,2) @ kR_)  # recover each cluster covariance from its precision factor kR
handle0, = pl.plot(x[:,0], x[:,1], '.', zorder=-1, label='empirical')
for c in range(num_clusters):
handle1, = plot_cov(gnd_mu_[c], gnd_cov_L_[c] @ gnd_cov_L_[c].T,
color='k', lw=4, label='gnd')
handle2, = plot_cov(g_mean, g_cov, color='C1', lw=4, label='Gaussian')
for c in range(num_clusters):
handle3, = plot_cov(mu_[c], cov_[c], color='C1', lw=2,
label='EM full rank')
pl.legend([handle0, handle1, handle2, handle3], [
'empirical',
'gnd loglik={:.3f}'.format(mixture_full_log_pdf(
x, gnd_mu_, np.linalg.inv(gnd_cov_L_)).mean()),
'Gaussian loglik={:.3f}'.format(Gaussian_log_pdf(
x, g_mean, np.linalg.inv(np.linalg.cholesky(g_cov))).mean()),
'mixture loglik={:.3f}'.format(mixture_full_log_pdf(
x, mu_, kR_).mean()),
], loc='upper right')
pl.axis('square')
```
Highcharts Demos
=================
Scatter plot: http://www.highcharts.com/demo/scatter
-----------------------------------------------------
```
from highcharts import Highchart
H = Highchart(width=850, height=400)
options = {
'chart': {
'type': 'scatter',
'zoomType': 'xy'
},
'title': {
'text': 'Height Versus Weight of 507 Individuals by Gender'
},
'subtitle': {
'text': 'Source: Heinz 2003'
},
'xAxis': {
'title': {
'enabled': True,
'text': 'Height (cm)'
},
'startOnTick': True,
'endOnTick': True,
'showLastLabel': True
},
'yAxis': {
'title': {
'text': 'Weight (kg)'
}
},
'legend': {
'layout': 'vertical',
'align': 'left',
'verticalAlign': 'top',
'x': 100,
'y': 70,
'floating': True,
'backgroundColor': "(Highcharts.theme && Highcharts.theme.legendBackgroundColor) || '#FFFFFF'",
'borderWidth': 1
},
'plotOptions': {
'scatter': {
'marker': {
'radius': 5,
'states': {
'hover': {
'enabled': True,
'lineColor': 'rgb(100,100,100)'
}
}
},
'states': {
'hover': {
'marker': {
'enabled': False
}
}
},
'tooltip': {
'headerFormat': '<b>{series.name}</b><br>',
'pointFormat': '{point.x} cm, {point.y} kg'
}
}
},
}
data1 = [[161.2, 51.6], [167.5, 59.0], [159.5, 49.2], [157.0, 63.0], [155.8, 53.6],
[170.0, 59.0], [159.1, 47.6], [166.0, 69.8], [176.2, 66.8], [160.2, 75.2],
[172.5, 55.2], [170.9, 54.2], [172.9, 62.5], [153.4, 42.0], [160.0, 50.0],
[147.2, 49.8], [168.2, 49.2], [175.0, 73.2], [157.0, 47.8], [167.6, 68.8],
[159.5, 50.6], [175.0, 82.5], [166.8, 57.2], [176.5, 87.8], [170.2, 72.8],
[174.0, 54.5], [173.0, 59.8], [179.9, 67.3], [170.5, 67.8], [160.0, 47.0],
[154.4, 46.2], [162.0, 55.0], [176.5, 83.0], [160.0, 54.4], [152.0, 45.8],
[162.1, 53.6], [170.0, 73.2], [160.2, 52.1], [161.3, 67.9], [166.4, 56.6],
[168.9, 62.3], [163.8, 58.5], [167.6, 54.5], [160.0, 50.2], [161.3, 60.3],
[167.6, 58.3], [165.1, 56.2], [160.0, 50.2], [170.0, 72.9], [157.5, 59.8],
[167.6, 61.0], [160.7, 69.1], [163.2, 55.9], [152.4, 46.5], [157.5, 54.3],
[168.3, 54.8], [180.3, 60.7], [165.5, 60.0], [165.0, 62.0], [164.5, 60.3],
[156.0, 52.7], [160.0, 74.3], [163.0, 62.0], [165.7, 73.1], [161.0, 80.0],
[162.0, 54.7], [166.0, 53.2], [174.0, 75.7], [172.7, 61.1], [167.6, 55.7],
[151.1, 48.7], [164.5, 52.3], [163.5, 50.0], [152.0, 59.3], [169.0, 62.5],
[164.0, 55.7], [161.2, 54.8], [155.0, 45.9], [170.0, 70.6], [176.2, 67.2],
[170.0, 69.4], [162.5, 58.2], [170.3, 64.8], [164.1, 71.6], [169.5, 52.8],
[163.2, 59.8], [154.5, 49.0], [159.8, 50.0], [173.2, 69.2], [170.0, 55.9],
[161.4, 63.4], [169.0, 58.2], [166.2, 58.6], [159.4, 45.7], [162.5, 52.2],
[159.0, 48.6], [162.8, 57.8], [159.0, 55.6], [179.8, 66.8], [162.9, 59.4],
[161.0, 53.6], [151.1, 73.2], [168.2, 53.4], [168.9, 69.0], [173.2, 58.4],
[171.8, 56.2], [178.0, 70.6], [164.3, 59.8], [163.0, 72.0], [168.5, 65.2],
[166.8, 56.6], [172.7, 105.2], [163.5, 51.8], [169.4, 63.4], [167.8, 59.0],
[159.5, 47.6], [167.6, 63.0], [161.2, 55.2], [160.0, 45.0], [163.2, 54.0],
[162.2, 50.2], [161.3, 60.2], [149.5, 44.8], [157.5, 58.8], [163.2, 56.4],
[172.7, 62.0], [155.0, 49.2], [156.5, 67.2], [164.0, 53.8], [160.9, 54.4],
[162.8, 58.0], [167.0, 59.8], [160.0, 54.8], [160.0, 43.2], [168.9, 60.5],
[158.2, 46.4], [156.0, 64.4], [160.0, 48.8], [167.1, 62.2], [158.0, 55.5],
[167.6, 57.8], [156.0, 54.6], [162.1, 59.2], [173.4, 52.7], [159.8, 53.2],
[170.5, 64.5], [159.2, 51.8], [157.5, 56.0], [161.3, 63.6], [162.6, 63.2],
[160.0, 59.5], [168.9, 56.8], [165.1, 64.1], [162.6, 50.0], [165.1, 72.3],
[166.4, 55.0], [160.0, 55.9], [152.4, 60.4], [170.2, 69.1], [162.6, 84.5],
[170.2, 55.9], [158.8, 55.5], [172.7, 69.5], [167.6, 76.4], [162.6, 61.4],
[167.6, 65.9], [156.2, 58.6], [175.2, 66.8], [172.1, 56.6], [162.6, 58.6],
[160.0, 55.9], [165.1, 59.1], [182.9, 81.8], [166.4, 70.7], [165.1, 56.8],
[177.8, 60.0], [165.1, 58.2], [175.3, 72.7], [154.9, 54.1], [158.8, 49.1],
[172.7, 75.9], [168.9, 55.0], [161.3, 57.3], [167.6, 55.0], [165.1, 65.5],
[175.3, 65.5], [157.5, 48.6], [163.8, 58.6], [167.6, 63.6], [165.1, 55.2],
[165.1, 62.7], [168.9, 56.6], [162.6, 53.9], [164.5, 63.2], [176.5, 73.6],
[168.9, 62.0], [175.3, 63.6], [159.4, 53.2], [160.0, 53.4], [170.2, 55.0],
[162.6, 70.5], [167.6, 54.5], [162.6, 54.5], [160.7, 55.9], [160.0, 59.0],
[157.5, 63.6], [162.6, 54.5], [152.4, 47.3], [170.2, 67.7], [165.1, 80.9],
[172.7, 70.5], [165.1, 60.9], [170.2, 63.6], [170.2, 54.5], [170.2, 59.1],
[161.3, 70.5], [167.6, 52.7], [167.6, 62.7], [165.1, 86.3], [162.6, 66.4],
[152.4, 67.3], [168.9, 63.0], [170.2, 73.6], [175.2, 62.3], [175.2, 57.7],
[160.0, 55.4], [165.1, 104.1], [174.0, 55.5], [170.2, 77.3], [160.0, 80.5],
[167.6, 64.5], [167.6, 72.3], [167.6, 61.4], [154.9, 58.2], [162.6, 81.8],
[175.3, 63.6], [171.4, 53.4], [157.5, 54.5], [165.1, 53.6], [160.0, 60.0],
[174.0, 73.6], [162.6, 61.4], [174.0, 55.5], [162.6, 63.6], [161.3, 60.9],
[156.2, 60.0], [149.9, 46.8], [169.5, 57.3], [160.0, 64.1], [175.3, 63.6],
[169.5, 67.3], [160.0, 75.5], [172.7, 68.2], [162.6, 61.4], [157.5, 76.8],
[176.5, 71.8], [164.4, 55.5], [160.7, 48.6], [174.0, 66.4], [163.8, 67.3]]
data2 = [[174.0, 65.6], [175.3, 71.8], [193.5, 80.7], [186.5, 72.6], [187.2, 78.8],
[181.5, 74.8], [184.0, 86.4], [184.5, 78.4], [175.0, 62.0], [184.0, 81.6],
[180.0, 76.6], [177.8, 83.6], [192.0, 90.0], [176.0, 74.6], [174.0, 71.0],
[184.0, 79.6], [192.7, 93.8], [171.5, 70.0], [173.0, 72.4], [176.0, 85.9],
[176.0, 78.8], [180.5, 77.8], [172.7, 66.2], [176.0, 86.4], [173.5, 81.8],
[178.0, 89.6], [180.3, 82.8], [180.3, 76.4], [164.5, 63.2], [173.0, 60.9],
[183.5, 74.8], [175.5, 70.0], [188.0, 72.4], [189.2, 84.1], [172.8, 69.1],
[170.0, 59.5], [182.0, 67.2], [170.0, 61.3], [177.8, 68.6], [184.2, 80.1],
[186.7, 87.8], [171.4, 84.7], [172.7, 73.4], [175.3, 72.1], [180.3, 82.6],
[182.9, 88.7], [188.0, 84.1], [177.2, 94.1], [172.1, 74.9], [167.0, 59.1],
[169.5, 75.6], [174.0, 86.2], [172.7, 75.3], [182.2, 87.1], [164.1, 55.2],
[163.0, 57.0], [171.5, 61.4], [184.2, 76.8], [174.0, 86.8], [174.0, 72.2],
[177.0, 71.6], [186.0, 84.8], [167.0, 68.2], [171.8, 66.1], [182.0, 72.0],
[167.0, 64.6], [177.8, 74.8], [164.5, 70.0], [192.0, 101.6], [175.5, 63.2],
[171.2, 79.1], [181.6, 78.9], [167.4, 67.7], [181.1, 66.0], [177.0, 68.2],
[174.5, 63.9], [177.5, 72.0], [170.5, 56.8], [182.4, 74.5], [197.1, 90.9],
[180.1, 93.0], [175.5, 80.9], [180.6, 72.7], [184.4, 68.0], [175.5, 70.9],
[180.6, 72.5], [177.0, 72.5], [177.1, 83.4], [181.6, 75.5], [176.5, 73.0],
[175.0, 70.2], [174.0, 73.4], [165.1, 70.5], [177.0, 68.9], [192.0, 102.3],
[176.5, 68.4], [169.4, 65.9], [182.1, 75.7], [179.8, 84.5], [175.3, 87.7],
[184.9, 86.4], [177.3, 73.2], [167.4, 53.9], [178.1, 72.0], [168.9, 55.5],
[157.2, 58.4], [180.3, 83.2], [170.2, 72.7], [177.8, 64.1], [172.7, 72.3],
[165.1, 65.0], [186.7, 86.4], [165.1, 65.0], [174.0, 88.6], [175.3, 84.1],
[185.4, 66.8], [177.8, 75.5], [180.3, 93.2], [180.3, 82.7], [177.8, 58.0],
[177.8, 79.5], [177.8, 78.6], [177.8, 71.8], [177.8, 116.4], [163.8, 72.2],
[188.0, 83.6], [198.1, 85.5], [175.3, 90.9], [166.4, 85.9], [190.5, 89.1],
[166.4, 75.0], [177.8, 77.7], [179.7, 86.4], [172.7, 90.9], [190.5, 73.6],
[185.4, 76.4], [168.9, 69.1], [167.6, 84.5], [175.3, 64.5], [170.2, 69.1],
[190.5, 108.6], [177.8, 86.4], [190.5, 80.9], [177.8, 87.7], [184.2, 94.5],
[176.5, 80.2], [177.8, 72.0], [180.3, 71.4], [171.4, 72.7], [172.7, 84.1],
[172.7, 76.8], [177.8, 63.6], [177.8, 80.9], [182.9, 80.9], [170.2, 85.5],
[167.6, 68.6], [175.3, 67.7], [165.1, 66.4], [185.4, 102.3], [181.6, 70.5],
[172.7, 95.9], [190.5, 84.1], [179.1, 87.3], [175.3, 71.8], [170.2, 65.9],
[193.0, 95.9], [171.4, 91.4], [177.8, 81.8], [177.8, 96.8], [167.6, 69.1],
[167.6, 82.7], [180.3, 75.5], [182.9, 79.5], [176.5, 73.6], [186.7, 91.8],
[188.0, 84.1], [188.0, 85.9], [177.8, 81.8], [174.0, 82.5], [177.8, 80.5],
[171.4, 70.0], [185.4, 81.8], [185.4, 84.1], [188.0, 90.5], [188.0, 91.4],
[182.9, 89.1], [176.5, 85.0], [175.3, 69.1], [175.3, 73.6], [188.0, 80.5],
[188.0, 82.7], [175.3, 86.4], [170.5, 67.7], [179.1, 92.7], [177.8, 93.6],
[175.3, 70.9], [182.9, 75.0], [170.8, 93.2], [188.0, 93.2], [180.3, 77.7],
[177.8, 61.4], [185.4, 94.1], [168.9, 75.0], [185.4, 83.6], [180.3, 85.5],
[174.0, 73.9], [167.6, 66.8], [182.9, 87.3], [160.0, 72.3], [180.3, 88.6],
[167.6, 75.5], [186.7, 101.4], [175.3, 91.1], [175.3, 67.3], [175.9, 77.7],
[175.3, 81.8], [179.1, 75.5], [181.6, 84.5], [177.8, 76.6], [182.9, 85.0],
[177.8, 102.5], [184.2, 77.3], [179.1, 71.8], [176.5, 87.9], [188.0, 94.3],
[174.0, 70.9], [167.6, 64.5], [170.2, 77.3], [167.6, 72.3], [188.0, 87.3],
[174.0, 80.0], [176.5, 82.3], [180.3, 73.6], [167.6, 74.1], [188.0, 85.9],
[180.3, 73.2], [167.6, 76.3], [183.0, 65.9], [183.0, 90.9], [179.1, 89.1],
[170.2, 62.3], [177.8, 82.7], [179.1, 79.1], [190.5, 98.2], [177.8, 84.1],
[180.3, 83.2], [180.3, 83.2]]
H.set_dict_options(options)
H.add_data_set(data1, 'scatter', 'Female', color='rgba(223, 83, 83, .5)')
H.add_data_set(data2, 'scatter', 'Male', color='rgba(119, 152, 191, .5)')
H
```
```
%load_ext autoreload
%autoreload 2
# general libraries
import numpy as np
import math
import time
# to import covariance module later
import sys
sys.path.append('../skrmt')
```
## Sampling random population covariance matrix (Sigma)
#### First, let's build a method to sample orthogonal random matrices
```
def sample_rand_orthogonal_mtx(n):
    # n by n random real matrix with standard normal entries
X = np.random.randn(n,n)
# orthonormalizing matrix using QR algorithm
Q,_ = np.linalg.qr(X)
return Q
M = sample_rand_orthogonal_mtx(5)
# the printed matrix should be the identity
np.round(np.matmul(M, M.T), 12)
```
#### Now, let's sample a diagonal matrix with a certain proportion of eigenvalues:
- eigenvalues equal to 1: 20%
- eigenvalues equal to 3: 40%
- eigenvalues equal to 10: 40%
```
def sample_diagEig_mtx(p):
ONE_PROP = 0.2 # proportion of eigenvalues equal to one
THREE_PROP = 0.4 # proportion of eigenvalues equal to three
TEN_PROP = 0.4 # proportion of eigenvalues equal to ten
n_one = math.ceil(p * ONE_PROP) # number of eigenvalues equal to one
n_three = math.floor(p * THREE_PROP) # number of eigenvalues equal to three
n_ten = p - n_one - n_three # number of eigenvalues equal to ten
# building eigenvalues
one_eigs = [1.0]*n_one
three_eigs = [3.0]*n_three
ten_eigs = [10.0]*n_ten
# concatenating eigenvalues lists
eigs = one_eigs + three_eigs + ten_eigs
# shuffling eigenvalues
np.random.shuffle(eigs)
# building diagonal matrix
M = np.diag(eigs)
return M
# the printed matrix should have one '1', two '3' and two '10' in its diagonal
sample_diagEig_mtx(5)
```
#### To create a random population covariance matrix that is not necessarily diagonal, we combine both functions
```
def sample_pop_cov(p, diag=False):
if diag:
return sample_diagEig_mtx(p)
else:
O = sample_rand_orthogonal_mtx(p)
M = sample_diagEig_mtx(p)
# O M O.T preserves original eigenvalues (O is an orthogonal rotation)
return np.matmul(np.matmul(O, M), O.T) # sampling \Sigma
Sigma = sample_pop_cov(5)
# printing Sigma, population covariance matrix
print('Sigma:\n', Sigma)
# printing Sigma eigenvalues, that should be equal to the specified eigenvalues above.
print('Sigma eigenvalues:', np.linalg.eigvals(Sigma))
```
## Sampling dataset using population covariance matrix Sigma
#### To do so we specify the number of attributes (p) and the number of observations to sample (n). After sampling a population covariance matrix Sigma, we can use it to draw n random samples from the population, generating the dataset X.
```
p, n = 5, 10
Sigma = sample_pop_cov(p)
X = np.random.multivariate_normal(np.random.randn(p), Sigma, size=n)
print('Shape:', X.shape)
print('Random generated dataset:\n', X)
```
## Small tests of estimators and metrics
```
p, n = 200, 600
Sigma = sample_pop_cov(p)
X = np.random.multivariate_normal(np.random.randn(p), Sigma, size=n)
print('Shape:', X.shape)
from covariance import sample_estimator
Sigma_sample = sample_estimator(X)
# the printed shape should be equal to (p,p)
print('S sample estimator shape:', Sigma_sample.shape)
from covariance import fsopt_estimator
Sigma_fsopt = fsopt_estimator(X, Sigma)
# the printed shape should be equal to (p,p)
print('FSOpt estimator shape:', Sigma_fsopt.shape)
```
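For reference, and to make the next two checks easier to interpret: assuming `prial_mv` follows the usual convention for the Percentage Relative Improvement in Average Loss, it compares the expected loss of an estimator $\tilde\Sigma$ with those of the sample covariance $S$ and the finite-sample optimal estimator $S^{*}$,

$$
\mathrm{PRIAL}(\tilde\Sigma) = \frac{\mathbb{E}\big[\mathcal{L}(S,\Sigma)\big] - \mathbb{E}\big[\mathcal{L}(\tilde\Sigma,\Sigma)\big]}{\mathbb{E}\big[\mathcal{L}(S,\Sigma)\big] - \mathbb{E}\big[\mathcal{L}(S^{*},\Sigma)\big]} \times 100\%,
$$

so substituting $\tilde\Sigma = S$ should give 0% and $\tilde\Sigma = S^{*}$ should give 100%, which is exactly what the two cells below verify.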
### Checking that PRIAL(S) = 0%
```
from covariance import loss_mv, prial_mv
exp_sample = loss_mv(sigma_tilde=Sigma_sample, sigma=Sigma)
exp_sigma_tilde = loss_mv(sigma_tilde=Sigma_sample, sigma=Sigma)
exp_fsopt = loss_mv(sigma_tilde=Sigma_fsopt, sigma=Sigma)
prial_mv(exp_sample=exp_sample, exp_sigma_tilde=exp_sigma_tilde, exp_fsopt=exp_fsopt)
```
### Checking that PRIAL(S_star) = 100%
```
exp_sample = loss_mv(sigma_tilde=Sigma_sample, sigma=Sigma)
exp_sigma_tilde = loss_mv(sigma_tilde=Sigma_fsopt, sigma=Sigma)
exp_fsopt = loss_mv(sigma_tilde=Sigma_fsopt, sigma=Sigma)
prial_mv(exp_sample=exp_sample, exp_sigma_tilde=exp_sigma_tilde, exp_fsopt=exp_fsopt)
```
#### After these tests we are more confident that the sample_estimator and fsopt_estimator functions, as well as the loss_mv and prial_mv metrics, are working properly
# Monte Carlo simulations for covariance module
```
import numpy as np
import matplotlib.pyplot as plt
import math
import time
# importing all estimators
from covariance import sample_estimator
from covariance import fsopt_estimator
from covariance import linear_shrinkage_estimator
from covariance import analytical_shrinkage_estimator
from covariance import empirical_bayesian_estimator
from covariance import minimax_estimator
# importing metrics
from covariance import loss_mv, prial_mv
def sample_dataset(p, n, Sigma=None):
if Sigma is None:
Sigma = sample_pop_cov(p)
X = np.random.multivariate_normal(np.random.randn(p), Sigma, size=n)
return X, Sigma
def run_simulation(p, n, estimators, nreps=100):
    # advised: check the prial_mv formula to understand the code below
Sn_idx = 0
Sstar_idx = 1
Sigma_tilde_idx = 2
# generating population covariance matrix
Sigma = sample_pop_cov(p)
# matrices/arrays of results
# +2 because sample and FSOptimal estimators are always considered
LOSSES = np.zeros((len(estimators)+2, 3))
PRIALS = np.zeros(len(estimators)+2)
TIMES = np.zeros((len(estimators)+2))
for (idx, estimator) in enumerate(estimators):
t1 = time.time()
for i in range(nreps):
# sampling random dataset from fixed population covariance matrix
X, _ = sample_dataset(p=p, n=n, Sigma=Sigma)
# estimating sample cov
Sample = sample_estimator(X)
# estimating S_star
S_star = fsopt_estimator(X, Sigma)
# estimating population covariance matrix using current estimator
Sigma_tilde = estimator(X)
# calculating losses
loss_Sn = loss_mv(sigma_tilde=Sample, sigma=Sigma)
loss_Sstar = loss_mv(sigma_tilde=S_star, sigma=Sigma)
loss_Sigma_tilde = loss_mv(sigma_tilde=Sigma_tilde, sigma=Sigma)
LOSSES[idx][Sn_idx] += loss_Sn
LOSSES[idx][Sstar_idx] += loss_Sstar
LOSSES[idx][Sigma_tilde_idx] += loss_Sigma_tilde
t2 = time.time()
        TIMES[idx] = (t2-t1)*1000/nreps # time needed in ms (averaged over the number of repetitions)
LOSSES[idx] /= p
PRIALS[idx] = prial_mv(exp_sample=LOSSES[idx][Sn_idx],
exp_sigma_tilde=LOSSES[idx][Sigma_tilde_idx],
exp_fsopt=LOSSES[idx][Sstar_idx])
# Sample estimator
t1 = time.time()
for i in range(nreps):
# sampling random dataset from fixed population covariance matrix
X, _ = sample_dataset(p=p, n=n, Sigma=Sigma)
# estimating sample cov
Sample = sample_estimator(X)
# estimating S_star
S_star = fsopt_estimator(X, Sigma)
# estimating population covariance matrix using sample estimator
Sigma_tilde = sample_estimator(X)
# calculating losses
loss_Sn = loss_mv(sigma_tilde=Sample, sigma=Sigma)
loss_Sstar = loss_mv(sigma_tilde=S_star, sigma=Sigma)
loss_Sigma_tilde = loss_mv(sigma_tilde=Sigma_tilde, sigma=Sigma)
LOSSES[-2][Sn_idx] += loss_Sn
LOSSES[-2][Sstar_idx] += loss_Sstar
LOSSES[-2][Sigma_tilde_idx] += loss_Sigma_tilde
t2 = time.time()
    TIMES[-2] = (t2-t1)*1000/nreps # time needed in ms (averaged over the number of repetitions)
LOSSES[-2] /= p
PRIALS[-2] = prial_mv(exp_sample=LOSSES[-2][Sn_idx],
exp_sigma_tilde=LOSSES[-2][Sigma_tilde_idx],
exp_fsopt=LOSSES[-2][Sstar_idx])
# FSOpt estimator
t1 = time.time()
for i in range(nreps):
# sampling random dataset from fixed population covariance matrix
X, _ = sample_dataset(p=p, n=n, Sigma=Sigma)
# estimating sample cov
Sample = sample_estimator(X)
# estimating S_star
S_star = fsopt_estimator(X, Sigma)
# estimating population covariance matrix using current estimator
Sigma_tilde = fsopt_estimator(X, Sigma)
# calculating losses
loss_Sn = loss_mv(sigma_tilde=Sample, sigma=Sigma)
loss_Sstar = loss_mv(sigma_tilde=S_star, sigma=Sigma)
loss_Sigma_tilde = loss_mv(sigma_tilde=Sigma_tilde, sigma=Sigma)
LOSSES[-1][Sn_idx] += loss_Sn
LOSSES[-1][Sstar_idx] += loss_Sstar
LOSSES[-1][Sigma_tilde_idx] += loss_Sigma_tilde
t2 = time.time()
    TIMES[-1] = (t2-t1)*1000/nreps # time needed in ms (averaged over the number of repetitions)
LOSSES[-1] /= p
PRIALS[-1] = prial_mv(exp_sample=LOSSES[-1][Sn_idx],
exp_sigma_tilde=LOSSES[-1][Sigma_tilde_idx],
exp_fsopt=LOSSES[-1][Sstar_idx])
return LOSSES, PRIALS, TIMES
def run_graphic_simulation(estimators, labels, P_list=[5, 50, 100, 150, 200, 300, 400, 500],
N=None, ratio=3, nreps=None, metric='prial'):
# +2 because sample and FSOptimal estimators are always considered
MEASURES = np.zeros((len(P_list), len(estimators)+2))
labels += ['Sample', 'FSOpt']
ratios = []
for (idx, p) in enumerate(P_list):
if N is None:
n = ratio*p
else:
n = N
ratios.append(p/n)
if nreps is None:
nreps = int(max(100, min(1000, 10000/p)))
losses, prials, times = run_simulation(p, n, estimators, nreps=nreps)
if metric == 'prial':
MEASURES[idx] = prials
elif metric == 'loss':
MEASURES[idx] = losses
elif metric == 'time':
MEASURES[idx] = times
if N is None:
lines = plt.plot(P_list, MEASURES, '-D')
plt.xlabel('Matrix dimension p')
else:
lines = plt.plot(ratios, MEASURES, '-D')
plt.xlabel('Ratio p/n')
plt.legend(lines, labels)
if metric == 'prial':
plt.title('Evolution of PRIAL (reps='+str(nreps)+')')
plt.ylabel('PRIAL')
elif metric == 'loss':
plt.title('Evolution of Loss (reps='+str(nreps)+')')
plt.ylabel('Loss')
elif metric == 'time':
plt.title('Duration study on average (reps='+str(nreps)+')')
plt.ylabel('time (ms)')
estimators = [analytical_shrinkage_estimator, linear_shrinkage_estimator]
labels = ['Analytical', 'Linear']
P_list = [5, 50, 100, 200, 300, 400, 500]
run_graphic_simulation(estimators, labels, P_list=P_list, ratio=3, nreps=10, metric='prial')
run_graphic_simulation(estimators, labels, P_list=P_list, ratio=3, nreps=10, metric='prial')
run_graphic_simulation(estimators, labels, P_list=P_list, N=600, nreps=10, metric='prial')
run_graphic_simulation(estimators, labels, P_list=P_list, ratio=3, nreps=10, metric='time')
# Using now all estimators
estimators = [analytical_shrinkage_estimator, linear_shrinkage_estimator,
empirical_bayesian_estimator, minimax_estimator]
labels = ['Analytical', 'Linear', 'Bayesian', 'Minimax']
P_list = [5, 50, 100, 200, 300, 400, 500]
run_graphic_simulation(estimators, labels, P_list=P_list, ratio=3, nreps=10, metric='prial')
run_graphic_simulation(estimators, labels, P_list=P_list, N=600, nreps=10, metric='prial')
run_graphic_simulation(estimators, labels, P_list=P_list, ratio=3, nreps=10, metric='time')
```
### Let's run the simulations with a larger number of repetitions
```
estimators = [analytical_shrinkage_estimator, linear_shrinkage_estimator]
labels = ['Analytical', 'Linear']
P_list = [5, 50, 100, 200, 300, 400, 500]
run_graphic_simulation(estimators, labels, P_list=P_list, ratio=3, nreps=100, metric='prial')
run_graphic_simulation(estimators, labels, P_list=P_list, N=600, nreps=100, metric='prial')
run_graphic_simulation(estimators, labels, P_list=P_list, ratio=3, nreps=100, metric='time')
# Using now all estimators
estimators = [analytical_shrinkage_estimator, linear_shrinkage_estimator,
empirical_bayesian_estimator, minimax_estimator]
labels = ['Analytical', 'Linear', 'Bayesian', 'Minimax']
P_list = [5, 50, 100, 200, 300, 400, 500]
run_graphic_simulation(estimators, labels, P_list=P_list, ratio=3, nreps=100, metric='prial')
run_graphic_simulation(estimators, labels, P_list=P_list, N=600, nreps=100, metric='prial')
run_graphic_simulation(estimators, labels, P_list=P_list, ratio=3, nreps=100, metric='time')
```
```
%load_ext lab_black
import solike.clusters
import camb
import numpy as np
%pylab inline
import matplotlib.pyplot as plt
import time
fiducial_params = {
'ombh2': 0.02225, 'omch2': 0.1198, 'H0': 67.3, 'tau': 0.06,
'As': 2.2e-9, 'ns': 0.96,
'mnu': 0.06, 'nnu': 3.046, 'num_massive_neutrinos': 1}
l_max = 1000
zarr = np.linspace(0,2,41)
modules_path = '/Users/nab/Repos/cobaya_modules/'
def pk(_theory={'Pk_interpolator': {'z': np.linspace(0,4,41), 'k_max': 100.0, 'nonlinear': True,'hubble_units': True,'k_hunit': True, 'vars_pairs': [['delta_nonu','delta_nonu']]}}):
#print (_theory.get_Pk_interpolator())
Pk_interpolator = _theory.get_Pk_interpolator()['delta_nonu_delta_nonu'].P
k = np.logspace(-3,1,100)
return Pk_interpolator(0.0,k)
info_fiducial = {
'params': fiducial_params,
'likelihood': {'pks': pk},
'theory': {'camb':{'extra_args':{"accurate_massive_neutrino_transfers": True,
"redshifts": np.linspace(0,2,41), "nonlinear": False,
"kmax": 10., "dark_energy_model":"ppf"}}
},
'modules': modules_path}
from cobaya.model import get_model
model_fiducial = get_model(info_fiducial)
print ('loglike', model_fiducial.loglike({'As': 3e-9, 'ns':0.96}))
# Declare our desired theory product
# (there is no cosmological likelihood doing it for us)
model_fiducial.likelihood.theory.needs(Cl={'tt': l_max},
Pk_interpolator={'z': np.linspace(0,2,41), 'k_max': 5.0,
'nonlinear': False,'hubble_units': True,'k_hunit': True,
'vars_pairs': [['delta_tot','delta_tot']]},
H={'z': np.linspace(0,2,41)})#,
#extra_args={"accurate_massive_neutrino_transfers": True,
# "redshifts": np.linspace(0,4,41), "nonlinear": False,
# "kmax": 10., "dark_energy_model":"ppf"})
# Compute and extract the CMB power spectrum
# (In muK^-2, without l(l+1)/(2pi) factor)
# notice the empty dictionary below: all parameters are fixed
model_fiducial.logposterior({})
Cls = model_fiducial.likelihood.theory.get_Cl(ell_factor=False)
print(model_fiducial.likelihood.theory.requested())
Pk_interpolator = model_fiducial.likelihood.theory.get_Pk_interpolator()['delta_nonu_delta_nonu'].P#(0,k_max=5)
k = np.logspace(-4,np.log10(5),200)
pks = Pk_interpolator(zarr,k)
Ez = model_fiducial.likelihood.theory.get_H(zarr) / model_fiducial.likelihood.theory.get_param('H0')
om = (model_fiducial.likelihood.theory.get_param('omch2') + model_fiducial.likelihood.theory.get_param('ombh2'))/ ((model_fiducial.likelihood.theory.get_param('H0')/100.)**2)
print (pks.shape,Ez.shape)
import solike.clusters.massfunc as mf
hmf = mf.HMF(om,Ez,pk=pks,kh=k,zarr=zarr)
from solike import ClusterLikelihood
import numpy as np
fiducial_params = {
'ombh2': 0.02225, 'omch2': 0.1198, 'H0': 67.3, 'tau': 0.06,
'As': 2.2e-9, 'ns': 0.96,
'mnu': 0.06, 'nnu': 3.046, 'num_massive_neutrinos': 1}
info_fiducial = {
'params': fiducial_params,
'likelihood': {'solike.ClusterLikelihood' : {'stop_at_error': True}},
'theory': {'camb':{'extra_args':{"accurate_massive_neutrino_transfers": True,
"redshifts": np.linspace(0,2,41), "nonlinear": False,
"kmax": 10., "dark_energy_model":"ppf"}}
}}
from cobaya.model import get_model
model_fiducial = get_model(info_fiducial)
like = model_fiducial.likelihood['solike.ClusterLikelihood']
model_fiducial.loglikes({})[0]
len(like.data)
like.data.catalog
(like.data.catalog.tsz_signal * 1e-4).describe()
(like.survey.Ythresh * 5.6)
like._get_n_expected()
model_fiducial.loglikes({})[0]
like = model_fiducial.likelihood['solike.ClusterLikelihood']
like._get_HMF()
surveydata = like._get_catalog()
surveydata
def _get_catalog(self):
catalog = Survey.SurveyData(self.data_path, self.data_name)
return catalog
def Prob_per_cluster(self, HMF, cluster_props, param_vals):
# c_z, c_zerr, c_y, c_yerr = cluster_props
tempz = cluster_props[0, :]
zind = np.argsort(tempz)
tempz = 0.
c_z = cluster_props[0, zind]
c_zerr = cluster_props[1, zind]
c_y = cluster_props[2, zind]
c_yerr = cluster_props[3, zind]
Marr = np.outer(int_HMF.M.copy(), np.ones([len(c_z)]))
zarr = np.outer(np.ones([len(int_HMF.M.copy())]), c_z)
if (c_zerr.any() > 0):
# FIX THIS
z_arr = np.arange(-3.*c_zerr, (3.+0.1)*c_zerr, c_zerr) + c_z
Pfunc_ind = self.Pfunc_per_zarr(int_HMF.M.copy(), z_arr, c_y, c_yerr, int_HMF, param_vals)
M200 = int_HMF.cc.Mass_con_del_2_del_mean200(int_HMF.M.copy(), 500, c_z) # FIX THIS?
dn_dzdm = dn_dzdm_int(z_arr, np.log10(int_HMF.M.copy()))
N_z_ind = np.trapz(dn_dzdm*Pfunc_ind, dx=np.diff(M200, axis=0), axis=0)
N_per = np.trapz(N_z_ind*gaussian(z_arr, c_z, c_zerr), dx=np.diff(z_arr))
ans = N_per
else:
Pfunc_ind = self.Pfunc_per(Marr, zarr, c_y, c_yerr, param_vals)
dn_dzdm = HMF.dn_dzdm(c_z, np.log10(int_HMF.M.copy()))
M200 = int_HMF.M200_int(c_z, int_HMF.M.copy())
N_z_ind = np.trapz(dn_dzdm*Pfunc_ind, dx=np.diff(M200, axis=0), axis=0)
ans = N_z_ind
return ans
def Prob_per_cluster(HMF, z, y, param_vals):
    ...
from pkg_resources import resource_filename
resource_filename('solike.clusters', 'data/ACTPol_Cond_scatv5.fits')
```
```
# Import
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import cv2
graph_filename = "ssd_mobilenet_v1_coco_11_06_2017/frozen_inference_graph.pb"
# Helper functions
def load_graph(frozen_graph_filename):
with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def, name="")
return graph
def load_image(image_filename):
img = cv2.imread(image_filename)
    return cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # imread returns BGR; convert to RGB for matplotlib display
def crop_image(image,bbox):
rows = image.shape[0]
cols = image.shape[1]
x = bbox[1] * cols
y = bbox[0] * rows
right = bbox[3] * cols
bottom = bbox[2] * rows
print(x,y,right,bottom)
return image[int(y):int(bottom), int(x):int(right)]
# Load frozen graph
graph = load_graph(graph_filename)
# Load test image
image = load_image('test_images/left0142.jpg')
plt.imshow(image)
plt.show()
# Run inference
with tf.Session(graph=graph) as sess:
tf_image_input = np.expand_dims(image, axis=0)
detections, scores, boxes, classes = sess.run([
sess.graph.get_tensor_by_name('num_detections:0'),
sess.graph.get_tensor_by_name('detection_scores:0'),
sess.graph.get_tensor_by_name('detection_boxes:0'),
sess.graph.get_tensor_by_name('detection_classes:0')],
feed_dict={'image_tensor:0': tf_image_input})
num_detections = int(np.squeeze(detections))
for i in range(num_detections):
classId = int(np.squeeze(classes)[i])
if (classId != 10):
continue
score = np.squeeze(scores)[i]
bbox = [float(v) for v in np.squeeze(boxes)[i]]
if score > 0.3:
#print ("Class: {}, Score: {}".format(classId, score))
image = crop_image(image, bbox)
plt.imshow(image)
plt.show()
# Convert BGR to HSV
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
# define range of bright color in HSV
lower = np.array([40,40,75])
upper = np.array([255,255,255])
# Threshold the HSV image to get only bright colors
mask = cv2.inRange(hsv, lower, upper)
# Bitwise-AND mask and original image
output = cv2.bitwise_and(image ,image, mask= mask)
# show the output image
plt.imshow(np.hstack([image, output]))
plt.show()
max_mean = 0
max_mean_index = -1
y_region_size = (int)(image.shape[0] / 3)
split_region = [y_region_size, 2 * y_region_size]
region_split = np.split(mask,split_region)
for i in range(3):
region_mean = region_split[i].mean()
if region_mean > max_mean:
max_mean = region_mean
max_mean_index = i
print("Output label is: " + str(max_mean_index))
```
### Author : Wahid T. Ratul
#### This notebook presents the model selection and prediction process for text classification using the Covid-19 tweets.
This notebook is an extension of https://github.com/ratul003/Sentiment_Analysis/blob/main/Covid_19Tweets_Viz.ipynb
Contents:
* [1. Libraries](#1)
* [2. Text Cleaning](#2)
* [3. TF-IDF](#3)
* [4. Model Evaluations](#4)
* [5. ROC-AUC](#5)
### [1. Libraries](#1)
```
# Importing libraries
import pandas as pd
import numpy as np
import re
import nltk
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score, KFold
from io import StringIO
from sklearn.feature_selection import chi2
from IPython.display import display
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.metrics import confusion_matrix
from sklearn import metrics
from sklearn.decomposition import PCA, TruncatedSVD
from sklearn.metrics import classification_report, confusion_matrix
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
plt.style.use('ggplot')
import matplotlib.patches as mpatches
# Importing data
train = pd.read_csv('/Users/wahid/Desktop/Python/Text_Classification/Corona_NLP_train.csv', encoding = 'latin1')
test = pd.read_csv('/Users/wahid/Desktop/Python/Text_Classification/Corona_NLP_test.csv.xls', encoding = 'latin1')
test
# Making separate columns for text
train['text'] = train.OriginalTweet
train['text'] = train['text'].astype(str)
test['text'] = test.OriginalTweet
test['text'] = test['text'].astype(str)
# Data has 5 classes, let's convert into 3 classes:
def classes_def(text):
if text == 'Extremely Positive':
return "2"
if text == 'Extremely Negative':
return "0"
if text == 'Negative':
return "0"
if text == 'Positive':
return "2"
else:
return "1"
train['label'] = train['Sentiment'].apply(lambda x : classes_def(x))
train
test['label'] = test['Sentiment'].apply(lambda x : classes_def(x))
print(train.label.value_counts(normalize = True))
print("\nWhere by Positive = 2, Neutral = 1 and Negative = 0.")
```
### [2. Text Cleaning](#2)
```
# Remove URLs and HTML links
def remove_urls(text):
url_remove = re.compile(r'https?://\S+|www\.\S+')
return url_remove.sub(r'', text)
# Dataframe without URLs
train['text_new'] = train['text'].apply(lambda x : remove_urls(x))
test['text_new'] = test['text'].apply(lambda x : remove_urls(x))
def remove_html(text):
html_remove = re.compile(r'<.*?>')
return html_remove.sub(r'', text)
# Dataframe without HTML links
train['text'] = train['text_new'].apply(lambda x : remove_html(x))
test['text'] = test['text_new'].apply(lambda x : remove_html(x))
# Lower casing
def lower_case(text):
low_text = text.lower()
return low_text
train['text_new'] = train['text'].apply(lambda x : lower_case(x))
test['text_new'] = test['text'].apply(lambda x : lower_case(x))
# Number removal
def remove_num(text):
remove = re.sub(r'\d+', '', text)
return remove
train['text'] = train['text_new'].apply(lambda x : remove_num(x))
test['text'] = test['text_new'].apply(lambda x : remove_num(x))
# Remove Stopwords and Punctuations
from nltk.corpus import stopwords
", ".join(stopwords.words('english'))
STOPWORDS = set(stopwords.words('english'))
def remove_punc(text):
punc = re.sub(r'[^\w\d\s]', '', text)
return punc
train['text_new'] = train['text'].apply(lambda x : remove_punc(x))
test['text_new'] = test['text'].apply(lambda x : remove_punc(x))
def remove_stopwords(text): # Split() makes it into an array
    "custom function to remove stopwords"
    return " ".join([word for word in str(text).split() if word not in STOPWORDS])
train['text'] = train['text_new'].apply(lambda x : remove_stopwords(x))
test['text'] = test['text_new'].apply(lambda x : remove_stopwords(x))
# Remove Mentions and Hashtags
def remove_hash(text):
    no_hash = re.sub(r'#\w+', '', text)
    return no_hash
train['text_new'] = train['text'].apply(lambda x : remove_hash(x))
test['text_new'] = test['text'].apply(lambda x : remove_hash(x))
def remove_mention(text):
    mention = re.sub(r'@\w+', '', text)
    return mention
train['text'] = train['text_new'].apply(lambda x : remove_mention(x))
test['text'] = test['text_new'].apply(lambda x : remove_mention(x))
# Collapse extra whitespace into single spaces
def remove_spaces(text):
    spaces = re.sub(r'\s+', ' ', text).strip()
    return spaces
train['text_new'] = train['text'].apply(lambda x : remove_spaces(x))
test['text_new'] = test['text'].apply(lambda x : remove_spaces(x))
train['text'] = train['text_new']
test['text'] = test['text_new']
train = train.drop(columns = ['text_new'])
test = test.drop(columns = ['text_new'])
train
```
### [3. TF-IDF](#3)
TF-IDF (term frequency-inverse document frequency) is a statistical measure that evaluates how relevant a word is to a document in a collection of documents. This is done by multiplying two metrics: how many times a word appears in a document, and the inverse document frequency of the word across a set of documents.
$\text{tf-idf}(t, d) = \text{tf}(t, d) \times \text{idf}(t)$, where $\text{idf}(t) = \log\frac{\text{Total no. of documents}}{\text{No. of documents containing the term } t}$
Machine learning with natural language is faced with one major hurdle – its algorithms usually deal with numbers, and natural language is, well, text. So we need to transform that text into numbers, otherwise known as text vectorization. TF-IDF score can be fed to algorithms such as Naive Bayes and Support Vector Machines, greatly improving the results of more basic methods like word counts.
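As a quick illustration of these scores, here is a toy example on a hypothetical two-document corpus (unrelated to the tweet data):
```
# Toy TF-IDF illustration (hypothetical corpus, not the tweet data).
from sklearn.feature_extraction.text import TfidfVectorizer
import pandas as pd

toy_corpus = ["the store is open", "the store is closed today"]
toy_vec = TfidfVectorizer()
toy_matrix = toy_vec.fit_transform(toy_corpus)
# Words shared by both documents ("the", "store", "is") receive lower weights
# than words unique to one document ("open", "closed", "today").
# Note: on older scikit-learn versions use get_feature_names() instead.
print(pd.DataFrame(toy_matrix.toarray(), columns=toy_vec.get_feature_names_out()))
```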
```
# Making the text into a list
x = train['text'].tolist()
y = train['label'].tolist()
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size = 0.80,random_state = 0)
tfidf = TfidfVectorizer(sublinear_tf = True, min_df =5, stop_words = 'english')
# We transform each text into a vector
features = tfidf.fit_transform(train.text).toarray()
labels = train.label
print("Each of the %d tweets is represented by %d features (TF-IDF score of unigrams and bi-grams)" %(features.shape))
```
### [4. Models](#4)
We are now ready to experiment with different machine learning models, evaluate their accuracy and find the source of any potential issues.
We will benchmark the following three models:
* (Multinomial) Naive Bayes
* Linear Support Vector Machine
* Random Forest
```
models = [
RandomForestClassifier(n_estimators = 100, max_depth = 5, random_state = 0),
LinearSVC(),
MultinomialNB(),
]
```
Cross-validation is a technique for assessing how a statistical analysis generalises to an independent data set. It evaluates machine learning models by training several models on subsets of the available input data and evaluating them on the complementary subsets. Cross-validation also makes it much easier to detect over-fitting.
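As a rough sketch of what `cross_val_score` automates, something like the following hypothetical helper could be used (illustrative only, assuming array-like `X` and `y`; it is not part of this notebook's pipeline):
```
# Illustrative sketch of k-fold cross-validation (what cross_val_score does for us).
import numpy as np
from sklearn.base import clone
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold

def manual_cross_val(model, X, y, n_splits=2):
    X, y = np.asarray(X), np.asarray(y)
    scores = []
    for train_idx, test_idx in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        fold_model = clone(model)                    # fresh, untrained copy for each fold
        fold_model.fit(X[train_idx], y[train_idx])   # train on the other folds
        scores.append(accuracy_score(y[test_idx], fold_model.predict(X[test_idx])))  # score on the held-out fold
    return scores
```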
```
# 2-fold cross-validation
CV = 2
cv_df = pd.DataFrame(index = range(CV * len(models)))
# Creating a dictionary to return as a list of output
entries = []
for model in models:
model_name = model.__class__.__name__
accuracies = cross_val_score(model, features, labels, scoring = 'accuracy', cv = CV)
for fold_idx, accuracy in enumerate(accuracies):
entries.append((model_name, fold_idx, accuracy))
cv_df = pd.DataFrame(entries, columns = ['Model_name', 'fold_idx', 'accuracy'])
cv_df
# Model Accuracy Comparison
mean_accuracy = cv_df.groupby('Model_name').accuracy.mean()
std_accuracy = cv_df.groupby('Model_name').accuracy.std()
acc = pd.concat([mean_accuracy, std_accuracy], axis =1, ignore_index = True)
acc.columns = ['Mean Accuracy', 'Standard deviation']
acc
```
Based on accuracy, one of the key indicators of model performance, we can observe that Linear SVM has the highest accuracy of 76%, followed by Naive Bayes and Random Forest.
Accuracy in classification problems is the number of correct predictions made by the model divided by all predictions made.
* Accuracy = $\frac{True \, Positive \, + \, True \, Negative}{All \, Predictions}$
```
# Graphical Model Comparison
import seaborn as sns
plt.style.use('ggplot')
plt.figure(figsize=(10,6))
sns.boxplot(x='Model_name', y='accuracy',
data=cv_df,
color='lightblue',
showmeans=True)
plt.title("MEAN ACCURACY (cv = 5)\n", size=14);
# Splitting the testing and training data for model testing
X_train, X_test, y_train, y_test,indices_train,indices_test = train_test_split(features,
labels,
train.index, test_size=0.80,
random_state=1)
model = LinearSVC()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
# Classification Report
from sklearn.model_selection import cross_val_score
from sklearn.metrics import confusion_matrix
from sklearn import metrics
print('\t\t\tCLASSIFICATION METRICS\n')
print(metrics.classification_report(y_test, y_pred,
target_names= train['label'].unique()))
```
We see low F1 scores for Positive (2) tweets as compared to Negative (0) and Neutral (1).
Precision measures how many of the instances predicted positive are actually positive. It is a good measure when the cost of a false positive is high, for instance in email spam detection.
* Precision = $\frac{True \, Positive}{True \, Positive \, + \, False \, Positive}$ OR $\frac{True \, Positive}{All \, Predicted \, Positive}$
Recall measures how many of the actual positives our model captures by labeling them as positive (true positives). Recall is the metric to optimise when there is a high cost associated with a false negative, for instance in fraud detection or sick-patient detection.
* Recall = $\frac{True \, Positive}{True \, Positive \, + \, False \, Negative}$ OR $\frac{True Positive}{All \, Actual \, Positive}$
F1 score is a better measure when we need a balance between precision and recall, especially when there is an uneven class distribution (for instance, a large number of actual negatives). A small worked example with hypothetical counts follows the formula below.
* F1 score = $2*\frac{Precision \, * \, Recall}{Precision \, + \,Recall}$ --> Harmonic Mean
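A tiny worked example with hypothetical counts:
```
# Hypothetical counts for a single class, to illustrate the formulas above.
TP, FP, FN = 80, 20, 40
precision = TP / (TP + FP)                          # 80 / 100 = 0.80
recall = TP / (TP + FN)                             # 80 / 120 ≈ 0.67
f1 = 2 * precision * recall / (precision + recall)  # ≈ 0.73
print(round(precision, 2), round(recall, 2), round(f1, 2))
```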
```
# Create a new column 'category_id' with encoded categories
def classes_def(x):
if x == "Extremely Positive":
return "Positive"
elif x == "Extremely Negative":
return "Negative"
elif x == "Negative":
return "Negative"
elif x == "Positive":
return "Positive"
else:
return "Neutral"
train['Sentiment']= train['Sentiment'].apply(lambda x:classes_def(x))
sentiment_id_df= train[['Sentiment','label']].drop_duplicates()
# Dictionaries for future use
#sentiment_to_id = dict(sentiment_id_df.values)
#id_to_sentiment = dict(sentiment_id_df[['Sentiment','label']].values)
#sentiment_id_df
# Confusion Matrix Display
conf_mat = confusion_matrix(y_test, y_pred)
fig, ax = plt.subplots(figsize=(8,8))
sns.heatmap(conf_mat, annot=True, cmap="Blues", fmt='d',
xticklabels=sentiment_id_df.label.values,
yticklabels=sentiment_id_df.label.values)
#train = train.drop(columns=['Sentiment'])
plt.ylabel('Actual')
plt.xlabel('Predicted')
plt.title("CONFUSION MATRIX - LinearSVC\n", size=16);
```
From the confusion matrix, we can observe that the majority of tweets in the corpus are predicted correctly, mostly the negative and positive tweets. However, a few groups of tweets are misclassified.
### [5. ROC - AUC](#5)
ROC curves typically feature true positive rate on the Y axis, and false positive rate on the X axis. This means that the top left corner of the plot is the “ideal” point - a false positive rate of zero, and a true positive rate of one. This is not very realistic, but it does mean that a larger area under the curve (AUC) is usually better.
The “steepness” of ROC curves is also important, since it is ideal to maximize the true positive rate while minimizing the false positive rate.
When 0.5<AUC<1, there is a high chance that the classifier will be able to distinguish the positive class values from the negative class values. This is so because the classifier is able to detect more numbers of True positives and True negatives than False negatives and False positives.
When AUC=0.5, then the classifier is not able to distinguish between Positive and Negative class points. Meaning either the classifier is predicting random class or constant class for all the data points.
So, the higher the AUC value for a classifier, the better its ability to distinguish between positive and negative classes.
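As a minimal, self-contained illustration of `roc_curve` and AUC on made-up binary labels and scores (not the tweet classifier):
```
# Toy binary example with hypothetical labels and scores.
from sklearn.metrics import roc_curve, roc_auc_score

y_true = [0, 0, 0, 1, 1, 1]
y_score = [0.1, 0.4, 0.2, 0.35, 0.8, 0.7]

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(fpr, tpr)
print(roc_auc_score(y_true, y_score))  # closer to 1.0 means better separation of the two classes
```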
```
# Splitting the testing and training data for model testing
X_train, X_test, y_train, y_test,indices_train,indices_test = train_test_split(features,
labels,
train.index, test_size=0.80,
random_state=1)
xtrain = X_train[1:6000]
ytrain = y_train[1:6000]
xtest= X_test[1:6000]
ytest= y_test[1:6000]
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# roc curve and auc score
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
d = y_test.unique()
class_name = list(d.flatten())
class_name
from sklearn.linear_model import LogisticRegression
#model = LinearSVC()
#model.fit(X_train, y_train)
#y_pred = model.predict(X_test)
LRE = LogisticRegression(solver='lbfgs')
LRE.fit(xtrain, ytrain)
for p in class_name:
fpr0, tpr0, thresholds = metrics.roc_curve(ytest,
LRE.predict_proba(xtest)[:,0], pos_label = p)
auroc0 = round(metrics.auc(fpr0, tpr0),2)
    print('Negative','AUC->',auroc0)
for p in class_name:
fpr1, tpr1, thresholds = metrics.roc_curve(ytest,
LRE.predict_proba(xtest)[:,1], pos_label = p)
auroc1 = round(metrics.auc(fpr1, tpr1),2)
print('Positive','AUC->',auroc1)
for p in class_name:
fpr2, tpr2, thresholds = metrics.roc_curve(ytest,
LRE.predict_proba(xtest)[:,2], pos_label = p)
auroc2 = round(metrics.auc(fpr2, tpr2),2)
print('Neutral','AUC-->',auroc2)
import matplotlib.pyplot as plt
plt.style.use('seaborn')
random_probs = [0 for i in range(len(ytest))]
p_fpr, p_tpr, _ = roc_curve(ytest, random_probs, pos_label=class_name[0])  # any valid class label works for the constant-score chance line
# plot roc curves
plt.plot(fpr0, tpr0, linestyle='--',color='red', label='Negative')
plt.plot(fpr1, tpr1, linestyle='--',color='green', label='Positive')
plt.plot(fpr2, tpr2, linestyle='--',color='orange', label='Neutral')
plt.plot(p_fpr, p_tpr, linestyle='--', color='blue')
# title
plt.title('Multi-class ROC curve')
# x label
plt.xlabel('False Positive Rate')
# y label
plt.ylabel('True Positive rate')
plt.legend(loc='best')
#plt.savefig('ROC',dpi=300)
plt.show();
```
```
import pandas as pd
import numpy as np
import requests
import re
from bs4 import BeautifulSoup
def Get_txt_from_webpage(url_value):
url = url_value
res = requests.get(url)
# Initialize the object with the document
soup = BeautifulSoup(res.content, "html.parser")
# Get the whole body tag
tag = soup.body
clean_text = ''
for string in tag.strings:
if string not in clean_text:
clean_text += string
return clean_text
# Cleans text for web page with chapter one
def clean_book_text_ch1(clean_text):
clean_text = re.sub(r'.*WorldChapter', '', clean_text)
clean_text = re.sub(r'Chapter 2Chapter.*$', '', clean_text)
clean_text = re.sub(r'\n', ' ', clean_text)
clean_text = re.sub(r'\r', ' ', clean_text)
clean_text = re.sub(r'\t', ' ', clean_text)
clean_text = re.sub(r'\\', ' ', clean_text)
clean_text = re.sub(r'[\']', '', clean_text)
clean_text = re.sub(' +', ' ', clean_text)
clean_text = re.sub('AnyBooksFree', '', clean_text)
clean_text = re.sub('Chapter', '', clean_text)
clean_text = re.sub(' +', ' ', clean_text)
return clean_text
# Cleans text for all other chapters besides chapter one
def clean_book_text_else(clean_text):
clean_text = re.sub(r'.*WorldChapter', '', clean_text)
clean_text = re.sub(r'Chapter 1Chapter.*$', '', clean_text)
clean_text = re.sub(r'\n', ' ', clean_text)
clean_text = re.sub(r'\r', ' ', clean_text)
clean_text = re.sub(r'\t', ' ', clean_text)
clean_text = re.sub(r'\\', ' ', clean_text)
clean_text = re.sub(r'[\']', '', clean_text)
clean_text = re.sub(' +', ' ', clean_text)
clean_text = re.sub('AnyBooksFree', '', clean_text)
clean_text = re.sub('Chapter', '', clean_text)
clean_text = re.sub(' +', ' ', clean_text)
return clean_text
def get_WoT_1_text():
'''
This function creates a list of all the URLs I want to scrape
    to acquire the chapter text.
'''
count = 1
list_of_URLs = []
while count < 58:
URL = "https://thefreeonlinenovel.com/con/the-dragon-reborn_chapter-{}".format(count)
        if URL not in list_of_URLs:
list_of_URLs.append(URL)
        count += 1
    return list_of_URLs
def extract_and_store_WoT():
    list_of_URLs = get_WoT_1_text()
    count = 2
    for url in list_of_URLs:
if url == "https://thefreeonlinenovel.com/con/the-dragon-reborn_chapter-1":
text = Get_txt_from_webpage(url)
clean_text = clean_book_text_ch1(text)
text_file = open(r"C:\Users\wscot\gas\Wheel_of_Time_Book_txt\The_Dragon_Reborn/Chapter_1.txt", "w", encoding="utf-8")
text_file.write(clean_text)
text_file.close()
else:
text = Get_txt_from_webpage(url)
clean_text = clean_book_text_else(text)
text_file = open(r"C:\Users\wscot\gas\Wheel_of_Time_Book_txt\The_Dragon_Reborn/Chapter_{}.txt".format(count), "w", encoding="utf-8")
#write string to file
text_file.write(clean_text)
#close file
text_file.close()
count += 1
extract_and_store_WoT()
```
# Response Distributions
Let's look at the answer distributions for each of the 3 questions in our survey.
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import inspect, os
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
os.sys.path.insert(0,parentdir)
from data_generation.join_traces_and_survey import load_survey_dfs
from response_distributions_util import *
import copy
```
## Load and Prep Data
```
d_survey, d_joined = load_survey_dfs()
d_desktop = d_joined[d_joined['host'] == 'desktop']
d_mobile = d_joined[d_joined['host'] == 'mobile']
d_single_motivation = d_joined[d_joined['motivation'].apply(lambda x: len(x.split('|')) == 1)]
print('Num Responses: ', d_survey.shape[0])
print('Num in EL: ', d_joined.shape[0])
print('Num Desktop Responses in EL: ', d_desktop.shape[0])
print('Num Mobile Responses in EL: ', d_mobile.shape[0])
```
## Q1 Information Depth
I am reading this article to ... [Information Depth]
* look up a specific fact or to get a quick answer. [fact]
* get an overview of the topic. [overview]
* get an in-depth understanding of the topic. [in-depth]
### Information Depth Histogram (Desktop vs Mobile)
```
x = 'information depth'
hue = 'host'
title = 'Mobile vs Desktop Information Depth Distribution'
xorder = ['in-depth', 'overview', 'fact']
plot_proportion(d_joined, x, hue, title, xorder = xorder)
```
The distributions are relatively similar across mobile and desktop. The most common use case is to get an overview, then to look up a fact, then to get an in-depth understanding.
## Q2 Familiarity
Prior to visiting this article ... [Prior Knowledge]
* I was already familiar with the topic. [familiar]
* I was not familiar with the topic and I am learning about it for the first time. [unfamiliar]
### Prior Knowledge Histogram (Desktop vs Mobile)
```
x = 'prior knowledge'
hue = 'host'
title = 'Mobile vs Desktop'
plot_proportion(d_joined, x, hue, title)
```
Desktop and mobile are basically identical. People are slightly more likely to be familiar with the topic they are reading about.
## Q3 Motivation
I am reading this article because ... [Motivation]
* I have a work or school-related assignment. [work/school]
* I need to make a personal decision based on this topic (e.g., to buy a book or game, to choose a travel destination). [personal-decision]
* I want to know more about a current event (e.g. Black Friday, a soccer game, a recent earthquake, somebody's death). [current event]
* the topic was referenced in a piece of media (e.g. TV, radio, article, film, book). [media]
* the topic came up in a conversation. [conversation]
* I am bored or randomly exploring Wikipedia for fun. [bored/random]
* this topic is important to me and I want to learn more about it. (e.g., to learn about a culture). [intrinsic_learning]
### Number of Motivations Distribution
Subjects were allowed to select multiple reasons. How many motivations do people select?
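Multiple selections are stored as a single pipe-delimited string, so counting a respondent's motivations just means counting the `|`-separated parts; a quick illustration with a made-up response:
```
# Hypothetical multi-select response and how it is counted.
example_response = 'work/school|media'
print(example_response.split('|'))       # ['work/school', 'media']
print(len(example_response.split('|')))  # 2 motivations selected
```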
```
d_in = pd.DataFrame()
d_in['counts'] = d_joined['motivation'].apply(lambda x: len(x.split('|'))).value_counts()
d_in['Proportion'] = d_in['counts'] / d_in['counts'].sum()
d_in['# of Reasons Given'] = d_in.index
fig = sns.barplot(y="Proportion",
x = '# of Reasons Given',
data=d_in,
color = (0.54308344686732579, 0.73391773700714114, 0.85931565621319939)
)
```
30% of respondents listed more than one motivation.
### Single Motivation Histogram
For responses with only a single motivation, what is the distribution over motivations.
```
d_in = pd.DataFrame()
d_in['counts'] = d_single_motivation['motivation'].value_counts()
d_in['proportion'] = d_in['counts'] / d_in['counts'].sum()
d_in['motivation'] = d_in.index
fig = sns.barplot(y="proportion",
x = 'motivation',
data=d_in,
color = (0.54308344686732579, 0.73391773700714114, 0.85931565621319939),
)
plt.ylabel('Proportion')
plt.xlabel('Motivation')
for item in fig.get_xticklabels():
item.set_rotation(45)
```
Media and work/school are the most popular motivations.
### Single Motivation Histogram (Desktop vs Mobile)
```
x = 'motivation'
hue = 'host'
title = 'Mobile vs Desktop'
order = ['media', 'work/school','intrinsic learning', 'bored/random', 'conversation', 'other','current event', 'personal decision', ]
plot_proportion(d_single_motivation, x, hue, title, xorder = order, rotate=True)
```
For Desktop, the most common motivation is work/school. For Mobile, it is media. Also, for mobile users, conversation is more likely compared to desktop.
### Motivation Histogram
For each motivation lets count how often it was chosen as at least one of the motivations.
```
d_in = pd.DataFrame(columns = ['motivation', 'counts'])
ms = [
'work/school',
'personal decision',
'current event',
'media',"conversation",
'bored/random',
'no response',
'intrinsic learning',
'other'
]
for i, m in enumerate(ms):
d_in.loc[i] = [m, d_joined['motivation'].apply(lambda x: m in x).sum()]
d_in['proportion'] = d_in['counts'] / d_in['counts'].sum()
d_in.sort_values(by = 'counts', inplace = True, ascending = False)
fig = sns.barplot(y="proportion",
x = 'motivation',
data=d_in,
color = (0.54308344686732579, 0.73391773700714114, 0.85931565621319939),
)
plt.ylabel('Proportion')
plt.xlabel('Motivation')
for item in fig.get_xticklabels():
item.set_rotation(45)
```
Suddenly intrinsic learning features much more prominently. It must be a common occurrence in multi-choice answers.
### Double Motivation Co-occurrence Heatmaps
For users who chose 2 motivations, which motivations co-occur?
```
df = copy.deepcopy(d_joined[d_joined['motivation'].apply(lambda x: len(x.split('|')) == 2)])
df['pm'] = df['motivation'].apply(lambda x: '|'.join(sorted(x.split('|'))))
df_joint = pd.DataFrame()
df_joint['count'] = df['pm'].value_counts()
df_joint['pm'] = df_joint.index
df_joint.index = range(0, df_joint.shape[0])
df_joint['m1'] = df_joint['pm'].apply(lambda x: x.split('|')[0])
df_joint['m2'] = df_joint['pm'].apply(lambda x: x.split('|')[1])
df_joint['count'] = df_joint['count'].apply(int)
df_joint2 = copy.deepcopy(df_joint)
df_joint2['pm'] = df_joint2['pm'].apply(lambda x: '|'.join(sorted(x.split('|'), reverse = True)))
df_joint2['m1'] = df_joint2['pm'].apply(lambda x: x.split('|')[0])
df_joint2['m2'] = df_joint2['pm'].apply(lambda x: x.split('|')[1])
df_joint2.index = range(df_joint.shape[0], 2 * df_joint.shape[0])
df_joint12 = pd.concat([df_joint, df_joint2]).pivot("m1", "m2", "count")
#ax = sns.heatmap(df_joint12, annot=True, fmt="0.0f")
#plt.ylabel('Motivation 1')
#plt.xlabel('Motivation 2')
#plt.title('Raw Co-occurence counts')
```
Since some motivations are more popular than others, the raw color coding can be misleading. Let's look at the conditional distributions instead: row-normalizing the count table gives P(Motivation 2 | Motivation 1), as in the small sketch below.
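A small sketch of that row-normalization on a hypothetical 2x2 count table:
```
# Hypothetical co-occurrence counts; dividing each row by its sum gives P(column | row).
import pandas as pd

toy_counts = pd.DataFrame({'media': [30, 10], 'work/school': [10, 50]},
                          index=['bored/random', 'intrinsic learning'])
print(toy_counts.div(toy_counts.sum(axis=1), axis=0))
```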
```
df_joint12_norm = df_joint12.div(df_joint12.sum(axis=1), axis=0)
ax = sns.heatmap(df_joint12_norm, annot=True, fmt="0.2f")
plt.ylabel('Motivation 1')
plt.xlabel('P(Motivation 2 | Motivation 1)')
plt.title('Conditional Distributions')
```
- Given that work/school is a motivation, the most common other motivation is intrinsic_learning by a long shot. It seems that people in our survey who choose 2 motivations like their job/studies.
- The pattern is similar for personal decisions.
- Given that people are bored/randomly exploring, their most likely other motivation is media. The next most likely is intrinsic_learning.
- The pattern is similar for current events.
# Response Co-occurence
## Information Depth and Prior Knowledge
```
x = 'information depth'
hue = 'prior knowledge'
title = 'P(Prior Knowledge | Information Depth = x) '
xorder = order = ['in-depth', 'overview', 'fact']
plot_proportion(d_joined, x, hue, title, xorder = xorder, normx=False)
```
When seeking in-depth information or looking up a fact, readers are more likely to be familiar with the topic. When they are seeking an overview, they are more likely to be unfamiliar.
```
hue = 'information depth'
x = 'prior knowledge'
title = 'P(Information Depth | Prior Knowledge = x)'
xorder = order = ['familiar', 'unfamiliar']
plot_proportion(d_joined, x, hue, title, xorder = xorder, normx=False)
```
Readers familiar with the topic are most likely to be looking up a fact. Unfamiliar users are the most likely to be getting an overview.
## Prior Knowledge and Motivation
```
hue = 'prior knowledge'
x = 'motivation'
title = 'P(Prior Knowledge | Motivation = x)'
plot_proportion(d_single_motivation, x, hue, title, rotate=True, normx=False)
```
When people come for intrinsic learning, they tend to be familiar with the topic already. When people come because of a reference in the media, they tend to be unfamiliar with the topic.
```
x = 'prior knowledge'
hue = 'motivation'
title = 'P(Motivation | Prior Knowledge = x)'
plot_proportion(d_single_motivation, x, hue, title, rotate=True, normx=False)
```
Bad Visualization
## Information Depth and Motivation
```
hue = 'information depth'
x = 'motivation'
title = 'P(Information Depth | Motivation = x)'
plot_proportion(d_single_motivation, x, hue, title, rotate=True, normx=False)
```
Bored/random users are interested in getting an overview. Users in a conversation are looking up a fact...
```
x = 'information depth'
hue = 'motivation'
title = 'P(Motivation | Information Depth = x)'
plot_proportion(d_single_motivation, x, hue, title, rotate=True, normx=False)
```
Bad Visualization
# Deep Q-Network (DQN)
---
In this notebook, you will implement a DQN agent with OpenAI Gym's LunarLander-v2 environment.
### 1. Import the Necessary Packages
```
import gym
import random
import torch
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
%matplotlib inline
```
### 2. Instantiate the Environment and Agent
Initialize the environment in the code cell below.
```
env = gym.make('LunarLander-v2')
env.seed(0)
print('State shape: ', env.observation_space.shape)
print('Number of actions: ', env.action_space.n)
```
Please refer to the instructions in `Deep_Q_Network.ipynb` if you would like to write your own DQN agent. Otherwise, run the code cell below to load the solution files.
```
from dqn_agent import Agent
agent = Agent(state_size=8, action_size=4, seed=0)
# watch an untrained agent
state = env.reset()
for j in range(200):
action = agent.act(state)
env.render()
state, reward, done, _ = env.step(action)
if done:
break
env.close()
```
### 3. Train the Agent with DQN
Run the code cell below to train the agent from scratch. You are welcome to amend the supplied values of the parameters in the function, to try to see if you can get better performance!
Alternatively, you can skip to the next step below (**4. Watch a Smart Agent!**), to load the saved model weights from a pre-trained agent.
```
def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995):
"""Deep Q-Learning.
Params
======
n_episodes (int): maximum number of training episodes
max_t (int): maximum number of timesteps per episode
eps_start (float): starting value of epsilon, for epsilon-greedy action selection
eps_end (float): minimum value of epsilon
eps_decay (float): multiplicative factor (per episode) for decreasing epsilon
"""
scores = [] # list containing scores from each episode
scores_window = deque(maxlen=100) # last 100 scores
eps = eps_start # initialize epsilon
for i_episode in range(1, n_episodes+1):
state = env.reset()
score = 0
for t in range(max_t):
action = agent.act(state, eps)
next_state, reward, done, _ = env.step(action)
agent.step(state, action, reward, next_state, done)
state = next_state
score += reward
if done:
break
scores_window.append(score) # save most recent score
scores.append(score) # save most recent score
eps = max(eps_end, eps_decay*eps) # decrease epsilon
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="")
if i_episode % 100 == 0:
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)))
if np.mean(scores_window)>=200.0:
print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window)))
torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth')
break
return scores
scores = dqn()
# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
```
### 4. Watch a Smart Agent!
In the next code cell, you will load the trained weights from file to watch a smart agent!
```
# load the weights from file
agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth'))
for i in range(3):
state = env.reset()
for j in range(200):
action = agent.act(state)
env.render()
state, reward, done, _ = env.step(action)
if done:
break
env.close()
```
### 5. Explore
In this exercise, you have implemented a DQN agent and demonstrated how to use it to solve an OpenAI Gym environment. To continue your learning, you are encouraged to complete any (or all!) of the following tasks:
- Amend the various hyperparameters and network architecture to see if you can get your agent to solve the environment faster. Once you build intuition for the hyperparameters that work well with this environment, try solving a different OpenAI Gym task with discrete actions!
- You may like to implement some improvements such as prioritized experience replay, Double DQN, or Dueling DQN! (A minimal Double DQN target sketch follows this list.)
- Write a blog post explaining the intuition behind the DQN algorithm and demonstrating how to use it to solve an RL environment of your choosing.
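If you try Double DQN, the only change from vanilla DQN is how the TD target is computed: the online network chooses the next action, while the target network evaluates it, which reduces the overestimation bias of the plain max. Below is a minimal sketch of that target computation; the network names `qnetwork_local` and `qnetwork_target` and the tensor-shaped replay batch are assumptions for illustration, not part of the provided `dqn_agent` solution file.
```
import torch

def double_dqn_targets(qnetwork_local, qnetwork_target, rewards, next_states, dones, gamma=0.99):
    """Double DQN target: pick next actions with the online net, evaluate them with the target net.
    `rewards` and `dones` are assumed to be float tensors of shape (batch, 1)."""
    with torch.no_grad():
        best_actions = qnetwork_local(next_states).argmax(dim=1, keepdim=True)   # (batch, 1)
        q_next = qnetwork_target(next_states).gather(1, best_actions)            # (batch, 1)
    # zero out the bootstrap term at terminal states
    return rewards + gamma * q_next * (1 - dones)
```
Inside the agent's learning step, this target would replace the plain max over the target network's Q-values; the replay buffer and soft updates stay the same.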
|
github_jupyter
|
import gym
import random
import torch
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
%matplotlib inline
env = gym.make('LunarLander-v2')
env.seed(0)
print('State shape: ', env.observation_space.shape)
print('Number of actions: ', env.action_space.n)
from dqn_agent import Agent
agent = Agent(state_size=8, action_size=4, seed=0)
# watch an untrained agent
state = env.reset()
for j in range(200):
action = agent.act(state)
env.render()
state, reward, done, _ = env.step(action)
if done:
break
env.close()
def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995):
"""Deep Q-Learning.
Params
======
n_episodes (int): maximum number of training episodes
max_t (int): maximum number of timesteps per episode
eps_start (float): starting value of epsilon, for epsilon-greedy action selection
eps_end (float): minimum value of epsilon
eps_decay (float): multiplicative factor (per episode) for decreasing epsilon
"""
scores = [] # list containing scores from each episode
scores_window = deque(maxlen=100) # last 100 scores
eps = eps_start # initialize epsilon
for i_episode in range(1, n_episodes+1):
state = env.reset()
score = 0
for t in range(max_t):
action = agent.act(state, eps)
next_state, reward, done, _ = env.step(action)
agent.step(state, action, reward, next_state, done)
state = next_state
score += reward
if done:
break
scores_window.append(score) # save most recent score
scores.append(score) # save most recent score
eps = max(eps_end, eps_decay*eps) # decrease epsilon
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="")
if i_episode % 100 == 0:
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)))
if np.mean(scores_window)>=200.0:
print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window)))
torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth')
break
return scores
scores = dqn()
# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
# load the weights from file
agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth'))
for i in range(3):
state = env.reset()
for j in range(200):
action = agent.act(state)
env.render()
state, reward, done, _ = env.step(action)
if done:
break
env.close()
| 0.608478 | 0.953923 |
<img align='right' width='200' style="float:right;" src="./Images/0000.Subway-Time.png" />
<div style="text-align:center;margin:0;font-size:12px;color:#c1121f" align='center'>
<b> Data Science = Solving Problems = Happiness </b>
</div>
<div align='center'>
<h1> The Subway Challenge</h1>
</div>
<div align='center'>
Denzel S. Williams
</div>
<div align='center'>
<i>Springboard Data Science Track '21</i>
</div>
<div align='center'>
<a href="https://linkedin.com/in/williamdst">
<img align='center' src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white" width="75" />
</a>
<a href="https://nbviewer.jupyter.org/github/Williamdst/The-Subway-Challenge/blob/main/Subway-Report.ipynb">
<img align='center' src="https://img.shields.io/badge/Markdown-000000?style=for-the-badge&logo=markdown&logoColor=white" width='80'/>
</a>
<a href="https://github.com/Williamdst/The-Subway-Challenge/blob/main/Subway-Presentation.pdf" />
<img align='center' src="https://img.shields.io/badge/Microsoft_PowerPoint-B7472A?style=for-the-badge&logo=microsoft-powerpoint&logoColor=white" width='150' />
</a>
</div>
<h2>0. Introduction </h2>
To set a record in the Subway Challenge a participant must navigate the entire New York City Subway system (network) in the shortest time possible. The challenge requires competitors to stop at all 472 stations in the network and no person currently holds that record <a href="https://en.wikipedia.org/wiki/Subway_Challenge#Guinness_Record_times"> [1] </a>. The most recent record of 21H:28M:14S was set on July 22, 2016 by Matthew Ahn for the 469-Station Challenge <a href="https://www.timeout.com/newyork/blog/solo-straphanger-sets-new-all-station-subway-world-record-090616"> [2]</a>. Aside from beginning at Far Rockaway-Mott Avenue and ending at Flushing-Main Street, the route and methodology he used to beat the record are unknown. <br></br>
<p style='text-align:center'> <b>The goal of this project is to use graph theory to determine a set of paths that could potentially be used to beat the current record.</b> </p>
<h2>1. Understanding the Problem </h2>
To solve this problem a graph representation of the subway system needs to be constructed. The system can be modeled as a weighted undirected graph, where the weights on the edges are the time it takes to get from one station to the next. Since you can travel in both directions on each line the direction is not needed (there is one station that is an exception). The actual map of the system needs to be translated into nodes and edges; to simplify this translation the <a href="https://new.mta.info/map/5336">Late-Night Subway Service</a> map is used. In the late-night subway map all stations are served though not all lines run; most lines run local, making all stops. The late-night map is a starting point to attempt beating the challenge. The map cannot be used, as is, to beat the challenge because the map is only valid from 00:00 - 06:00 every day. The results of the late-night map will tell you <b>what</b> to do, but not <b>how</b> to do it. <br></br>
Once 06:00 hits, all trains are activated, and express routes are implemented. For example, the late-night A-Train might go to certain stops, but it skips over them in the day. In the day, the A-Train is an express train and staying on it for the entire line wouldn’t take you to every stop. At some point you would have to get off and make a transfer to the local C-train to check off the stops that the A-Train skips. This is the main reason why the objective is to determine a set of paths and not just a single path. The only weight the program understands is the time between stations, it doesn't understand that train switching is expensive. Every time you get off a train you must wait for the next one to arrive, which adds to the overall time. Therefore, the program can only return a set of potential options that a human would then need to filter through.
<p style='text-align:right'> <b>1.0.A Chinese Postman Problem</b> </p>
Like any route-inspection style problem, the Subway Challenge is about decision making, specifically what are you going to do at junctions, stations where you can transfer to a different line, or in the graph theoretical sense, nodes with degree greater than 2. Let's look at the simple network below to understand the idea. The challenge is to stop at every station, and you plan on starting at A. You can basically ignore stations B and C because to get from A to D you don't have a choice but to stop at those stations. It's when you get to station D a choice has to be made. Do you go to F first, come back to D, and then go to E or do you go to E first, come back and go to F. Taking the ABC<b>DFD</b>E route will cost you 28, compared to the 20 it will cost you taking route ABC<b>DED</b>F. What makes the Subway Challenge tricky is that taking the optimal DED route requires you to get off the blue train you started on and wait for the red train, then again in reverse. Whereas the less optimal route only requires one transfer. If the wait time at D for a train is over 8 then the suboptimal DFD route becomes the optimal route.
<figure style='text-align:center'>
<img src="https://raw.githubusercontent.com/Williamdst/Capstone-2/main/Images/0000.Network-Simple.png" style='max-width:50%'/>
<figcaption style='text-align:center'> Figure 1. Simple Graph to Understand the Problem </figcaption>
</figure>
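To make the arithmetic concrete, here is a tiny NetworkX sketch of a graph like Figure 1. The edge weights are assumptions (the figure's exact values aren't reproduced here); they were only chosen so the two route totals come out to 28 and 20 as described above.
```
import networkx as nx

# Hypothetical weights (minutes), chosen so the totals match the 28 vs. 20 example above.
G = nx.Graph()
G.add_weighted_edges_from([
    ('A', 'B', 2), ('B', 'C', 2), ('C', 'D', 2),   # the no-choice stretch from A to D
    ('D', 'E', 2),                                  # short branch
    ('D', 'F', 10),                                 # long branch
])

def route_cost(graph, stops):
    """Sum the edge weights along an ordered list of stations."""
    return sum(graph[u][v]['weight'] for u, v in zip(stops, stops[1:]))

print(route_cost(G, list('ABCDFDE')))   # go to F first: 28
print(route_cost(G, list('ABCDEDF')))   # go to E first: 20
```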
Revisiting the idea of ignoring stations B and C; at those stations you have no choices and on your way to D you will pass them anyway. If that is the case, then you can reduce the number of nodes in the network by consolidating them into a single edge (<i>Figure 2</i>). Doing this changes how the challenge is viewed. The nodes in the network used to be every station; now the nodes are only the stations where you must decide on an action, but you still must get to every station. Therefore, in terms of the graph, you need to travel <b>every edge at least once</b>. By meeting that condition you will automatically stop at every station. This modification officially turns our problem into a <b>Chinese Postman Problem</b> (CPP). In simplest terms, the Chinese postman problem aims to find a path that visits every edge of a graph.
<figure style='text-align:center'>
<img src="https://raw.githubusercontent.com/Williamdst/Capstone-2/main/Images/0001.Network-Simple-Reduced.png" style='max-width:50%'/>
<figcaption style='text-align:center'> Figure 2. The Simple Graph Reduced </figcaption>
</figure>
<h2> 2. Modifying a Prepackaged Solution </h2>
In 2017, Andrew Brooks was tackling a similar problem which he solved using the NetworkX 2.0 library <a href="https://www.datacamp.com/community/tutorials/networkx-python-graph-tutorial"> [3]</a>. Thankfully, he packaged his solution into the <a href="https://github.com/brooksandrew/postman_problems">postman_problems</a>. With this package, you can plug in your own network and solve the CPP problem. Unfortunately, the Subway Challenge isn't a typical CPP problem. The postman always wants to return to his vehicle, so the CPP finds a path that ends where it began. The Subway Challenge has no such requirement, the sole condition is to travel to all the edges at least once. Andrew's postman_package solves the CPP as is, therefore plugging in the subway network wouldn't work because it would always output a sub-optimal solution. However, with a little bit of network theory, the NetworkX 2.5 update, and some tweaks to his package, we could build on his work to solve the problem.
<p style='text-align:center'> <b> 2.1 The Graph Theory Behind Andrew's Solution</b> </p>
In graph theory, a path is a sequence of vertices with the property that each vertex in the sequence is adjacent to the vertex next to it. A circuit is a path that begins and ends at the same vertex. Thinking in terms of racing, a path is a street race. You don't necessarily end up at the same point where you started (although you could) and all the streets are connected to each other. A circuit is a NASCAR race, where you always end where you start. An Euler circuit not only has to meet the conditions of a regular circuit (starting and ending at the same place), it has the added conditions that you have to use <b>every edge</b> of the graph AND you can only use <b>each edge once</b>. This is where the issues begin, as there is a theorem that states: <br />
<p style='text-align:center'> A connected graph $G$ has an Euler Circuit if and only if every vertex has even degree <p>
In Andrew's problem there are many nodes that have odd degrees, which means there isn't an Euler circuit. To get around this, his solution was to turn odd-degree nodes even by adding artificial edges to those nodes. Then with all even-degree nodes, NetworkX finds the Euler circuit.
<p style='text-align:right'> <b>2.1.A Understanding The Steps</b> </p>
First, he loads in the edge list and then creates the graph from the edge list. The graph that we will use to understand his methodology is shown below. The graph is a weighted undirected graph where the orange nodes are the odd degree nodes.
<figure style='text-align:center'>
<img src="https://raw.githubusercontent.com/Williamdst/Capstone-2/main/Images/0002.Network-Follow.png" style='max-width:20%'/>
<figcaption style='text-align:center'> Figure 3. The Graph for the Follow-Along </figcaption>
</figure>
<b>The theorem states that only graphs where all nodes are even degree qualify to have Euler circuits, so his first efforts are to make all the nodes even.</b> He starts by first creating a separate graph where the odd nodes are artificially paired together. The artificial edges are shown in red and the weights on them represent the <b>fastest</b> time to get from the nodes using <b>actual paths</b> (<i>In this separate graph the even degree nodes and their connections don't exist</i>).
<figure style='text-align:center'>
<img src="https://raw.githubusercontent.com/Williamdst/Capstone-2/main/Images/0003.Network-Follow-Odd-Augment.png" style='max-width:20%'/>
<figcaption style='text-align:center'> Figure 4. Odd Nodes Complete Graph via Artificial Paths </figcaption>
</figure>
All the paths in red aren't necessary, because to turn an odd-degree node even you only need to add a single path. This is where the idea of a matching comes into play, specifically a minimum weight matching. Listed below are three diverse ways to think about matchings:
<ol>
<li> A matching is a subset of edges in which no node occurs more than once. </li>
<li> A matching is a graph where all the nodes have a degree of 0 or 1. </li>
<li> A matching is a subgraph of a graph where there are no edges adjacent to each other </li>
</ol>
The weight of a matching is the sum of the weights of its edges; the cardinality is the number of matched edges. What we are looking for is a matching that has <b>maximum cardinality</b> but <b>minimum weight</b>. Based on the previous graph there are only three choices for matchings (<i>Figure 5</i>) and the one that is ultimately chosen is the middle one with the minimum weight of 11.
<figure style='text-align:center'>
<img src="https://raw.githubusercontent.com/Williamdst/Capstone-2/main/Images/0004.Network-Follow-Matchings.png" style='max-width:50%'/>
<figcaption style='text-align:center'> Figure 5. Matchings with Maximum Cardinality</figcaption>
</figure>
Edges AE and AF are added to the original graph and now all the nodes are even. Remember, the AE edge doesn't exist, so when the algorithm says to follow the AE path, in actuality you go from node A to B to E. <b>The point of the previous steps stems from the fact that there is no choice but to reuse a path, so we need to find which path/s require the least amount of work to double back</b>. The final augmented graph is shown below. From here the NetworkX 2.0 package is used to return the circuit.
<figure style='text-align:center'>
<img src="https://raw.githubusercontent.com/Williamdst/Capstone-2/main/Images/0005.Network-Follow-Final-Augment.png" style='max-width:20%'/>
<figcaption style='text-align:center'> Figure 6. The Final Augmented Graph</figcaption>
</figure>
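The augmentation just described can be written compactly with NetworkX. The sketch below is my own condensed version of the idea, not the postman_problems API: collect the odd-degree nodes, build a complete graph on them weighted by shortest-path distance, take a minimum-weight perfect matching (here done as a maximum-weight matching on negated weights), and add the matched pairs back as parallel double-back edges before asking for the Euler circuit.
```
import itertools
import networkx as nx

def augment_to_even(G):
    """Pair the odd-degree nodes of G with minimum total shortest-path cost and add
    those pairs as artificial edges, so every node of the result has even degree."""
    odd = [n for n, deg in G.degree() if deg % 2 == 1]
    pairings = nx.Graph()
    for u, v in itertools.combinations(odd, 2):
        dist = nx.dijkstra_path_length(G, u, v, weight='weight')
        pairings.add_edge(u, v, weight=dist, neg_weight=-dist)
    # minimum-weight perfect matching == maximum-weight matching on negated weights
    matching = nx.max_weight_matching(pairings, maxcardinality=True, weight='neg_weight')
    H = nx.MultiGraph(G)   # MultiGraph keeps a doubled edge as a parallel edge
    for u, v in matching:
        H.add_edge(u, v, weight=pairings[u][v]['weight'], augmented=True)
    return H

# circuit = list(nx.eulerian_circuit(augment_to_even(G)))
```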
<p style='text-align:center'> <b> 2.2 The Graph Theory To Build on Andrew's Solution</b> </p>
Andrew's solution solves for the Euler circuit; we are looking for an Euler Path. An Euler Path is a path that has the added condition of using <b>every edge</b> of the graph <b>exactly once.</b> The difference is that an Euler Path doesn’t have to end where it began. There is a different theorem on Euler Paths that will guide the modification: <br />
<p style='text-align:center'> A connected graph $G$ has an Euler Path that starts and ends at different vertices if and only if it has <b>exactly two odd vertices.</b></p>
Essentially, odd-degree nodes are dead-ends. There is going to come a time when you reach the node and there are no more unused edges left to leave on. In an Euler Path these dead ends serve as the starting and ending nodes.
<p style='text-align:right'> <b>2.2.A Understanding The Steps</b> </p>
The difference in our problem is that all but two of the odd-degree nodes must become even. Doing that was simple, all the odd nodes were found and two were removed from the list to be conserved. From there all of Andrew’s steps were the same, except the function used from NetworkX 2.5 was the <code>eulerian_path</code> function. The two conserved odd-degree nodes act as the starting point and the ending point of the path. Naturally, the question then became, which two odd-degree nodes do we conserve. Choosing where to start and where to end is part of the difficulty of the Subway Challenge.
The only start and end pair known is Matthew Ahn's pair and there is no guarantee that it is optimal. Therefore, every odd-degree node could be a potential start node and a potential end node and thus there are $\dbinom{O}{2}$ configurations to check, where $O$ is the number of odd-degree nodes. For every configuration, both odd-degree nodes are conserved and then the path is returned for that configuration. Using the same follow-along graph from <i>Figure 3</i>, the $\dbinom{4}{2} = 6$ start-end configurations (A-E, A-F, A-G, E-F, E-G, and F-G) are shown below.
<figure style='text-align:center'>
<img src="https://raw.githubusercontent.com/Williamdst/Capstone-2/main/Images/0006.Network-Path-Configs.png" style='max-width:60%'/>
<figcaption style='text-align:center'> Figure 7. All Possible Start-End Euler Paths w/ Augmented Edges</figcaption>
</figure>
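Putting the pieces together, the sketch below (again my own helper names, not the package's) conserves one start/end pair, evens out the remaining odd-degree nodes exactly as before, and asks NetworkX 2.5's <code>eulerian_path</code> for the route. Because an Euler path walks every edge of the augmented graph exactly once, its total length is simply that graph's total edge weight, which makes ranking all $\dbinom{O}{2}$ configurations straightforward.
```
import itertools
import networkx as nx

def euler_path_length(G, start, end):
    """Conserve `start` and `end`, even out the other odd-degree nodes with
    shortest-path double-back edges, and return (total length, Euler path)."""
    odd = [n for n, deg in G.degree() if deg % 2 == 1 and n not in (start, end)]
    pairings = nx.Graph()
    for u, v in itertools.combinations(odd, 2):
        dist = nx.dijkstra_path_length(G, u, v, weight='weight')
        pairings.add_edge(u, v, weight=dist, neg_weight=-dist)
    matching = nx.max_weight_matching(pairings, maxcardinality=True, weight='neg_weight')
    H = nx.MultiGraph(G)
    for u, v in matching:
        H.add_edge(u, v, weight=pairings[u][v]['weight'])
    path = list(nx.eulerian_path(H, source=start))
    # every edge of H is walked exactly once, so the path length is H's total weight
    return H.size(weight='weight'), path

def rank_configurations(G):
    """Score every start/end pair of odd-degree nodes by total walking time."""
    odd = [n for n, deg in G.degree() if deg % 2 == 1]
    scores = {(s, e): euler_path_length(G, s, e)[0]
              for s, e in itertools.combinations(odd, 2)}
    return sorted(scores.items(), key=lambda kv: kv[1])
```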
<h2> 3. Modeling the MTA Subway System </h2>
<div>
<p>
The bulk of the work is translating the map into nodes and edges, saving them as CSV files that the program can understand. Referring to <i>Figures 1&2</i>, not every station needs to be modeled, only the stations where a choice must be made. Of the 472 stations in the system there are only 79 decision stations. The lines on the night map are grouped into colors:
</p>
</div>
<img align='right' width='500' style="float:right;" src="https://raw.githubusercontent.com/Williamdst/Capstone-2/main/Images/0007.MTA-Night.jpg" />
<div style="text-align:center;margin:0;font-size:12px;color:#c1121f" align='center'>
<table>
<tr>
<th>Red Lines</th>
<td>1</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<th>Green Lines</th>
<td>4</td>
<td>5</td>
<td>6</td>
</tr>
<tr>
<th>Purple Line</th>
<td>7</td>
</tr>
<tr>
<th>Blue Lines</th>
<td>A</td>
<td>E</td>
</tr>
<tr>
<th>Orange Lines</th>
<td>D</td>
<td>F</td>
<td>M</td>
</tr>
<tr>
<th>L.Green Line</th>
<td>G</td>
</tr>
<tr>
<th>Brown Line</th>
<td>J</td>
</tr>
<tr>
<th>Grey Lines</th>
<td>L</td>
<td>S</td>
</tr>
<tr>
<th>Yellow Lines</th>
<td>N</td>
<td>Q</td>
<td>R</td>
</tr>
</table>
</div>
<p style='text-align:right'> <b> 3.0.A Modeling the Nodes (Stations)</b> </p>
For every decision station on a line, the Station ID, Station Name, Borough, and Line were documented. Additionally, each station on a line was given a "node-number" (there are stations that have multiple node numbers). For example, look at South Ferry Station (<i>Red Line - Bottom Middle</i>) and Canal St on the Blue Line (<i>Middle-Left</i>). South Ferry is the first stop on the 1 line and Canal St is the 10th station on the A line as well as the 9th station on the E line. Their values in the CSV were:
<table>
<tr>
<th>stationID</th>
<th>stopName</th>
<th>borough</th>
<th>lines</th>
<th>nodes</th>
</tr>
<tr>
<td>330</td>
<td>South Ferry</td>
<td>Manhattan</td>
<td>X1</td>
<td>1001</td>
</tr>
<tr>
<td>169</td>
<td>Canal St</td>
<td>Manhattan</td>
<td>A:E</td>
<td>A010:E009</td>
</tr>
</table>
The A and E train stop at Canal St, so both the "lines" and "nodes" column have more than one value, separated by a colon. The colon was used as a separator so that the values could be read independently when loaded into Neo4j (<code>Load-Neo4j-Cypher-Query.sql</code>). Look at the more complex station, W 4 St-Wash Sq (<i>Blue/Orange Line - Upper Left</i>) where four trains stop at this station: A, D, E, and F train. As before, in the "lines" column and the "nodes" column, every train and their node number were documented:
<table>
<tr>
<th>stationID</th>
<th>stopName</th>
<th>borough</th>
<th>lines</th>
<th>nodes</th>
</tr>
<tr>
<td>167</td>
<td>W 4 St-Wash Sq</td>
<td>Manhattan</td>
<td>A:D:E:F</td>
<td>A011:D005:E008:F008</td>
</tr>
</table>
On the blue line Canal St was A010 and W 4 St-Wash Sq was A011, but what happened to Spring St? Spring St isn't a decision station because if you were traveling from Canal St to W 4 St you wouldn't have a choice but to stop at Spring St.
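As a small illustration of how those colon-separated columns can be unpacked (the report itself loads them into Neo4j with <code>Load-Neo4j-Cypher-Query.sql</code>), here is a pandas sketch. The filename is a placeholder; only the column layout comes from the tables above.
```
import pandas as pd

# Placeholder filename; the columns (stationID, stopName, borough, lines, nodes) follow the tables above.
stations = pd.read_csv('decision-stations.csv')

# Split the colon-separated 'lines' and 'nodes' columns in parallel, one row per (station, line).
# Multi-column explode needs pandas >= 1.3.
stations = stations.assign(lines=stations['lines'].str.split(':'),
                           nodes=stations['nodes'].str.split(':'))
per_line = stations.explode(['lines', 'nodes'])

print(per_line.loc[per_line['stopName'] == 'W 4 St-Wash Sq', ['stopName', 'lines', 'nodes']])
```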
<p style='text-align:right'> <b> 3.0.B Modeling the Edges (Routes)</b> </p>
Modeling the edges was similar to modeling the stations. When modeling stations, each row is a single station and the properties of that station. When modeling edges, each row is a single edge and the properties of that edge. Edges are defined by the two nodes it is connected to, so the first thing needed are the Start Station ID and the Stop Station ID. The three other properties were the routes (same idea as the "lines" column), the nodes (the node numbers), and the distance. In this case, the distance was the <b>time</b> it takes to traverse the edge, or in other words, the time to go from one station to the next. The edge that connects Canal St to W 4 St-Wash Sq is shown below:
<table>
<tr>
<th>startID</th>
<th>stopID</th>
<th>startNode</th>
<th>stopNode</th>
<th>routes</th>
<th>distance</th>
</tr>
<tr>
<td>169</td>
<td>167</td>
<td>A010:E009</td>
<td>A011:E008</td>
<td>A:E</td>
<td>4</td>
</tr>
</table>
Although you can traverse this edge on either the A or E train, it is important that this edge is <b>not</b> duplicated in the edge list. If the edge is duplicated, then the program will read it as two separate edges and will solve the problem under the impression that it must traverse the edge twice. After removing all the duplicates there were 104 edges modeled. <br></br>
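A short sketch of how such an edge list could be deduplicated and loaded into a graph is shown below; the filename is a placeholder and only the column names come from the schema above.
```
import pandas as pd
import networkx as nx

# Placeholder filename; columns follow the edge schema described above.
edges = pd.read_csv('edges.csv')   # startID, stopID, startNode, stopNode, routes, distance

# The graph is undirected, so (a, b) and (b, a) describe the same edge; keep each one once.
edges['pair'] = edges.apply(lambda r: tuple(sorted((r['startID'], r['stopID']))), axis=1)
edges = edges.drop_duplicates(subset='pair')

G = nx.Graph()
for _, r in edges.iterrows():
    G.add_edge(r['startID'], r['stopID'], weight=r['distance'], routes=r['routes'])

print(G.number_of_nodes(), 'decision stations and', G.number_of_edges(), 'edges')
```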
Looking at Fulton St <i>(Bottom Right)</i>, there is a single name for all four dots because Fulton St is a station complex, however when it comes to the challenge Fulton St counts as four different stations. This idea may be obvious with Fulton St, but there are other intersections that look like a single station but count as multiple stations in the Challenge. The official 472 stations recognized by the MTA can be found in the <code>Stations-Official-472.csv</code> file. <br />
The black lines connecting the dots are free subway transfers which are paths, not in the graph theory sense, that allow riders to directly walk between two stations. For example, you are on the A train and you get off at Fulton St, you can then walk over to Fulton St on the 3 train. I'm sure these subway transfers are extremely useful when solving the challenge, however they can't be used to model the network at this time. Why? The subway transfers are optional, not a requirement like the other edges. If those transfers were added to the graph, then the program will solve the problem under the impression that it must traverse the edge.
<h2> 4. The Routes </h2>
Of the 79 stations, there were 58 odd-degree nodes resulting in $\dbinom{58}{2} = 1653$ start-end configurations. To store all of the configurations and their stats, a simple SQLite database was integrated in the program.
<figure style='text-align:center'>
<img src="./Images/0017.Route-ERD.png" align='center' style="max-width:40%">
<figcaption> </figcaption>
</figure>
If you never had to double back and could teleport to whatever station you needed to, the time it would take to traverse each of the 104 edges exactly one time would be 14.75 hours (884m). The rest of the time is spent going back over edges you already traveled; in Matthew Ahn's case that was nearly 7 hours. The columns that are used to pick a route are distance_walked and distance_doublebacked. The reason that edges_walked isn't a major concern is because it matters <b>what</b> edge you had to double back over. You can't make the claim that a route with 150 edges_walked is better than a 151-edge route, because that one edge may be the worst edge in the network.
The node that was in 8 of the 10 top routes, either as the start or the end station, was 416 Wakefield-241 St (The last stop of the 2 train). What's more interesting is that all the nodes paired with it were also extreme stations, meaning, they were at the end of a line. More than that, those extremes were aggressively extreme, not only were they at the end of a line, but they were also at the end of lines that had no transfer opportunities and took over 15m to reach. The route that Matthew took started and ended at two very aggressive extremes and the path that contained those two extremes took 21.06 hours (37th ranked route).
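A query along these lines could then pull the most promising configurations out of that database. The database, table, and start/end column names below are assumptions; only <code>distance_walked</code> and <code>distance_doublebacked</code> are named in the report.
```
import sqlite3

# Placeholder database/table/column names, except distance_walked and distance_doublebacked.
con = sqlite3.connect('routes.db')
top_ten = con.execute("""
    SELECT start_station, end_station, distance_walked, distance_doublebacked
    FROM routes
    ORDER BY distance_walked
    LIMIT 10;
""").fetchall()
con.close()

for row in top_ten:
    print(row)
```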
<p style='text-align:right'> <b> 4.0.A The "Best" Routes</b> </p>
As was stated before, picking out the best route isn't as straightforward as querying the database, finding the path with minimal distance, and following the directions. Remember, the program doesn't understand the cost of excessive transfers, that there are transfers that provide shortcuts, and the network topology isn't static. The one major insight that can be used to filter out routes is that aggressively extreme stations are where you want to start and where you want to end, which leaves only about 10 choices (45 configurations). The steps for the best routes aren’t listed in this report because each route has over 145 steps, but there is a <code>Describe-Route.sql</code> file in the repository that contains the query to use to list out all the steps for any path. The properties of the most interesting paths are shown in the table below:
<table>
<tr>
<th></th>
<th> Start Station </th>
<th> Stop Station </th>
<th> Time (Hrs) </th>
<th> Route Rank </th>
</tr>
<tr>
<th> Gold Route </th>
<td>Wakefield-241 St <i>(2-Train)</i></td>
<td>Woodlawn <i>(4-Train)</i></td>
<td>20.65</td>
<td>1</td>
</tr>
<tr>
<th> Silver Route </th>
<td>Wakefield-241 St <i>(2-Train)</i></td>
<td>Norwood-205 St <i>(D-Train)</i></td>
<td>20.66</td>
<td>2</td>
</tr>
<tr>
<th> Bronze Route </th>
<td>Wakefield-241 St <i>(2-Train)</i></td>
<td>Pelham Bay Park <i>(6-Train)</i></td>
<td>20.7</td>
<td>3</td>
</tr>
<tr>
<th> The Worst Route </th>
<td>Sutphin Blvd-Archer Av-JFK Airport <i>(E-Train)</i></td>
<td>Coney Island-Stillwell Av <i>(D-Train)</i></td>
<td>22.35</td>
<td>1653</td>
</tr>
<tr>
<th> Matthew Ahn's Route </th>
<td>Far Rockaway-Mott Av <i>(A-Train)</i></td>
<td>Flushing-Main St <i>(7-Train)</i></td>
<td>21.06</td>
<td>37</td>
</tr>
<tr>
<th> My Most Convenient Starting Route </th>
<td>Far Rockaway-Mott Av <i>(A-Train)</i></td>
<td>Norwood-205 St <i>(D-Train)</i></td>
<td>20.95</td>
<td>16</td>
</tr>
<tr>
<th> My Most Convenient Ending Route </th>
<td>Wakefield-241 St <i>(2-Train)</i></td>
<td>Far Rockaway-Mott Av <i>(A-Train)</i></td>
<td>20.75</td>
<td>4</td>
</tr>
<tr>
<th> My Most Convenient Route Overall </th>
<td>Rockaway Park-Beach 116st <i>(A-Train)</i></td>
<td>Far Rockaway-Mott Av <i>(A-Train)</i></td>
<td>21.63</td>
<td>606</td>
</tr>
</table>
<div style="line-height:11px">
<p style="text-align:right;font-style:italic;color:#c1121f"> <b> Data Science = Solving Problems = Happiness </b> </p>
<p style="text-align:right;"> <b> Denzel S. Williams </b> </p>
</div>
<hr>
<h3> A1. Project Improvements & Extensions </h3>
<b>Subway Transfers & Running Edges </b> <br />
In future installments of the project, I would like to implement those subway transfers into the solution. Additionally, part of Matthew Ahn's record involved him running between stations that aren't connected because that was the fastest way to get there. Using that idea, "Running Transfers" could be artificially added to the network. These running transfers would be especially useful in the Bronx.
<b>Solve the Full Problem</b> <br />
This project only focused on the Late-Night Subway Map and although the order of stations might be transferrable, the edges are not. There are routes that don't go to certain stations at certain times in the day and there are express lines that can be utilized when double backing. Solving the full problem may require an entirely new solution method because there is a mix of optional edges and required edges.
<b>Wait Times & Time Varying Networks</b> <br />
To arrive at a complete solution in its entirety, the program would need to understand how the network changes over time. Not only how the edge set changes from express to local, but how long the wait time for the next train will occur at a decision point, which also changes over time.
<h3>A2. The Graphs of the Lines in Neo4j</h3>
<figure style='text-align:center'>
<img src="./Images/0008.Red.png" style="max-width:40%" />
<figcaption style='text-align:center'> Figure A1. The Red Lines</figcaption>
</figure>
<figure style='text-align:center'>
<img src="./Images/0009.Green.png" style="max-width:40%" />
<figcaption style='text-align:center'> Figure A2. The Green Lines</figcaption>
</figure>
<figure style='text-align:center'>
<img src="./Images/0010.Purple.png" style="max-width:60%" />
<figcaption style='text-align:center'> Figure A3. The Purple Lines</figcaption>
</figure>
<figure style='text-align:center'>
<img src="./Images/0011.Blue.png" style="max-width:60%" />
<figcaption style='text-align:center'> Figure A4. The Blue Lines</figcaption>
</figure>
<figure style='text-align:center'>
<img src="./Images/0012.Orange.png" style="max-width:60%" />
<figcaption style='text-align:center'> Figure A5. The Orange Lines</figcaption>
</figure>
<figure style='text-align:center'>
<img src="./Images/0013.L.Green.png" style="max-width:40%" />
<figcaption style='text-align:center'> Figure A6. The L.Green Lines</figcaption>
</figure>
<figure style='text-align:center'>
<img src="./Images/0014.Brown.png" style="max-width:60%" />
<figcaption style='text-align:center'> Figure A7. The Brown Lines</figcaption>
</figure>
<figure style='text-align:center'>
<img src="./Images/0015.Grey.png" style="max-width:40%" />
<figcaption style='text-align:center'> Figure A8. The Grey Lines</figcaption>
</figure>
<figure style='text-align:center'>
<img src="./Images/0016.Yellow.png" style="max-width:60%" />
<figcaption style='text-align:center'> Figure A9. The Yellow Lines</figcaption>
</figure>
<h3>A3. References </h3>
<ol style="margin: 10px 0;">
<li> “Subway Challenge.” Wikipedia, Wikimedia Foundation, 3 Mar. 2021, <a href="en.wikipedia.org/wiki/Subway_Challenge#Guinness_Record_times"> en.wikipedia.org/wiki/Subway_Challenge#Guinness_Record_times </a>. </li>
<li>Snowden, Scott. “Solo Straphanger Sets New, All-Station Subway World Record.” Time Out New York, Time Out, 6 Sept. 2016, <a href="www.timeout.com/newyork/blog/solo-straphanger-sets-new-all-station-subway-world-record-090616"> www.timeout.com/newyork/blog/solo-straphanger-sets-new-all-station-subway-world-record-090616 </a>. </li>
<li>"Intro to Graph Optimization with NetworkX in Python." DataCamp Community, <a href="www.datacamp.com/community/tutorials/networkx-python-graph-tutorial"> www.datacamp.com/community/tutorials/networkx-python-graph-tutorial</a>. </li>
<li>Brooks, Andrew. “Intro to Graph Optimization: Solving the Chinese Postman Problem.” Andrew Brooks, 7 Oct. 2017, <a href="brooksandrew.github.io/simpleblog/articles/intro-to-graph-optimization-solving-cpp/"> brooksandrew.github.io/simpleblog/articles/intro-to-graph-optimization-solving-cpp/ </a>. </li>
</ol>
|
github_jupyter
|
<img align='right' width='200' style="float:right;" src="./Images/0000.Subway-Time.png" />
<div style="text-align:center;margin:0;font-size:12px;color:#c1121f" align='center'>
<b> Data Science = Solving Problems = Happiness </b>
</div>
<div align='center'>
<h1> The Subway Challenge</h1>
</div>
<div align='center'>
Denzel S. Williams
</div>
<div align='center'>
<i>Springboard Data Science Track '21</i>
</div>
<div align='center'>
<a href="https://linkedin.com/in/williamdst">
<img align='center' src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white" width="75" />
</a>
<a href="https://nbviewer.jupyter.org/github/Williamdst/The-Subway-Challenge/blob/main/Subway-Report.ipynb">
<img align='center' src="https://img.shields.io/badge/Markdown-000000?style=for-the-badge&logo=markdown&logoColor=white" width='80'/>
</a>
<a href="https://github.com/Williamdst/The-Subway-Challenge/blob/main/Subway-Presentation.pdf" />
<img align='center' src="https://img.shields.io/badge/Microsoft_PowerPoint-B7472A?style=for-the-badge&logo=microsoft-powerpoint&logoColor=white" width='150' />
</a>
</div>
<h2>0. Introduction </h2>
To set a record in the Subway Challenge a participant must navigate the entire New York City Subway system (network) in the shortest time possible. The challenge requires competitors to stop at all 472 stations in the network and no person currently holds that record <a href="https://en.wikipedia.org/wiki/Subway_Challenge#Guinness_Record_times"> [1] </a>. The most recent record of 21H:28M:14S was set on July 22, 2016 by Matthew Ahn for the 469-Station Challenge <a href="https://www.timeout.com/newyork/blog/solo-straphanger-sets-new-all-station-subway-world-record-090616"> [2]</a>. Aside from beginning at Far Rockaway-Mott Avenue and ending at Flushing-Main Street, the route and methodology he used to beat the record are unknown. <br></br>
<p style='text-align:center'> <b>The goal of this project is to use graph theory to determine a set of paths that could potentially be used to beat the current record.</b> </p>
<h2>1. Understanding the Problem </h2>
To solve this problem a graph representation of the subway system needs to be constructed. The system can be modeled as a weighted undirected graph, where the weights on the edges are the time it takes to get from one station to the next. Since you can travel in both directions on each line the direction is not needed (there is one station that is an exception). The actual map of the system needs to be translated into nodes and edges; to simplify this translation the <a href="https://new.mta.info/map/5336">Late-Night Subway Service</a> map is used. In the late-night subway map all stations are served though not all lines run; most lines run local, making all stops. The late-night map is a starting point to attempt beating the challenge. The map cannot be used, as is, to beat the challenge because the map is only valid from 00:00 - 06:00 every day. The results of the late-night map will tell you <b>what</b> to do, but not <b>how</b> to do it. <br></br>
Once 06:00 hits, all trains are activated, and express routes are implemented. For example, the late-night A-Train might go to certain stops, but it skips over them in the day. In the day, the A-Train is an express train and staying on it for the entire line wouldn’t take you to every stop. At some point you would have to get off and make a transfer to the local C-train to check off the stops that the A-Train skips. This is the main reason why the objective is to determine a set of paths and not just a single path. The only weight the program understands is the time between stations, it doesn't understand that train switching is expensive. Every time you get off a train you must wait for the next one to arrive, which adds to the overall time. Therefore, the program can only return a set of potential options that a human would then need to filter through.
<p style='text-align:right'> <b>1.0.A Chinese Postman Problem</b> </p>
Like any route-inspection style problem, the Subway Challenge is about decision making, specifically what are you going to do at junctions, stations where you can transfer to a different line, or in the graph theoretical sense, nodes with degree greater than 2. Let's look at the simple network below to understand the idea. The challenge is to stop at every station, and you plan on starting at A. You can basically ignore stations B and C because to get from A to D you don't have a choice but to stop at those stations. It's when you get to station D a choice has to be made. Do you go to F first, come back to D, and then go to E or do you go to E first, come back and go to F. Taking the ABC<b>DFD</b>E route will cost you 28, compared to the 20 it will cost you taking route ABC<b>DED</b>F. What makes the Subway Challenge tricky is that taking the optimal DED route requires you to get off the blue train you started on and wait for the red train, then again in reverse. Whereas the less optimal route only requires one transfer. If the wait time at D for a train is over 8 then the suboptimal DFD route becomes the optimal route.
<figure style='text-align:center'>
<img src="https://raw.githubusercontent.com/Williamdst/Capstone-2/main/Images/0000.Network-Simple.png" style='max-width:50%'/>
<figcaption style='text-align:center'> Figure 1. Simple Graph to Understand the Problem </figcaption>
</figure>
Revisiting the idea of ignoring stations B and C; at those stations you have no choices and on your way to D you will pass them anyway. If that is the case, then you can reduce the number of nodes in the network by consolidating them into a single edge (<i>Figure 2</i>). Doing this changes how the challenge is viewed. The nodes in the network used to be every station; now the nodes are only the stations where you must decide on an action, but you still must get to every station. Therefore, in terms of the graph, you need to travel <b>every edge at least once</b>. By meeting that condition you will automatically stop at every station. This modification officially turns our problem into a <b>Chinese Postman Problem</b> (CPP). In simplest terms, the Chinese postman problem aims to find a path that visits every edge of a graph.
<figure style='text-align:center'>
<img src="https://raw.githubusercontent.com/Williamdst/Capstone-2/main/Images/0001.Network-Simple-Reduced.png" style='max-width:50%'/>
<figcaption style='text-align:center'> Figure 2. The Simple Graph Reduced </figcaption>
</figure>
<h2> 2. Modifying a Prepackaged Solution </h2>
In 2017, Andrew Brooks was tackling a similar problem which he solved using the NetworkX 2.0 library <a href="https://www.datacamp.com/community/tutorials/networkx-python-graph-tutorial"> [3]</a>. Thankfully, he packaged his solution into the <a href="https://github.com/brooksandrew/postman_problems">postman_problems</a>. With this package, you can plug in your own network and solve the CPP problem. Unfortunately, the Subway Challenge isn't a typical CPP problem. The postman always wants to return to his vehicle, so the CPP finds a path that ends where it began. The Subway Challenge has no such requirement, the sole condition is to travel to all the edges at least once. Andrew's postman_package solves the CPP as is, therefore plugging in the subway network wouldn't work because it would always output a sub-optimal solution. However, with a little bit of network theory, the NetworkX 2.5 update, and some tweaks to his package, we could build on his work to solve the problem.
<p style='text-align:center'> <b> 2.1 The Graph Theory Behind Andrew's Solution</b> </p>
In graph theory, a path is a sequence of vertices with the property that each vertex in the sequence is adjacent to the vertex next to it. A circuit is a path that begins and ends at the same vertex. Thinking in terms of racing, a path is a street race. You don't necessarily end up at the same point where you started (although you could) and all the streets are connected to each other. A circuit is a NASCAR race, where you always end where you start. An Euler circuit not only has to meet the conditions of a regular circuit (starting and ending at the same place), it has the added conditions that you have to use <b>every edge</b> of the graph AND you can only use <b>each edge once</b>. This is where the issues begin, as there is a theorem that states: <br />
<p style='text-align:center'> A connected graph $G$ has an Euler Circuit if and only if every vertex has even degree <p>
In Andrew's problem there are many nodes that have odd degrees, which means there isn't an Euler circuit. To get around this, his solution was to turn odd-degree nodes even by adding artificial edges to those nodes. Then with all even-degree nodes, NetworkX finds the Euler circuit.
<p style='text-align:right'> <b>2.1.A Understanding The Steps</b> </p>
First, he loads in the edge list and then creates the graph from the edge list. The graph that we will use to understand his methodology is shown below. The graph is a weighted undirected graph where the orange nodes are the odd degree nodes.
<figure style='text-align:center'>
<img src="https://raw.githubusercontent.com/Williamdst/Capstone-2/main/Images/0002.Network-Follow.png" style='max-width:20%'/>
<figcaption style='text-align:center'> Figure 3. The Graph for the Follow-Along </figcaption>
</figure>
<b>The theorem states that only graphs where all nodes are even degree qualify to have Euler circuits, so his first efforts are to make all the nodes even.</b> He starts by first creating a separate graph where the odd nodes are artificially paired together. The artificial edges are shown in red and the weights on them represent the <b>fastest</b> time to get from the nodes using <b>actual paths</b> (<i>In this separate graph the even degree nodes and their connections don't exist</i>).
<figure style='text-align:center'>
<img src="https://raw.githubusercontent.com/Williamdst/Capstone-2/main/Images/0003.Network-Follow-Odd-Augment.png" style='max-width:20%'/>
<figcaption style='text-align:center'> Figure 4. Odd Nodes Complete Graph via Artificial Paths </figcaption>
</figure>
All the paths in red aren't necessary, because to turn an odd-degree node even you only need to add a single path. This is where the idea of a matching comes into play, specifically a minimum weight matching. Listed below are three diverse ways to think about matchings:
<ol>
<li> A matching is a subset of edges in which no node occurs more than once. </li>
<li> A matching is a graph where all the nodes have a degree of 0 or 1. </li>
<li> A matching is a subgraph of a graph where there are no edges adjacent to each other </li>
</ol>
The weight of a matching is the sum of the weights of its edges; the cardinality is the number of matched edges. What we are looking for is a matching that has <b>maximum cardinality</b> but <b>minimum weight</b>. Based on the previous graph there are only three choices for matchings (<i>Figure 5</i>) and the one that is ultimately chosen is the middle one with the minimum weight of 11.
<figure style='text-align:center'>
<img src="https://raw.githubusercontent.com/Williamdst/Capstone-2/main/Images/0004.Network-Follow-Matchings.png" style='max-width:50%'/>
<figcaption style='text-align:center'> Figure 5. Matchings with Maximum Cardinality</figcaption>
</figure>
Edges AE and AF are added to the original graph and now all the nodes are even. Remember, the AE edge doesn't exist, so when the algorithm says to follow the AE path, in actuality you go from node A to B to E. <b>The point of the previous steps stems from the fact that there is no choice but to reuse a path, so we need to find which path/s require the least amount of work to double back</b>. The final augmented graph is shown below. From here the NetworkX 2.0 package is used to return the circuit.
<figure style='text-align:center'>
<img src="https://raw.githubusercontent.com/Williamdst/Capstone-2/main/Images/0005.Network-Follow-Final-Augment.png" style='max-width:20%'/>
<figcaption style='text-align:center'> Figure 6. The Final Augmented Graph</figcaption>
</figure>
<p style='text-align:center'> <b> 2.2 The Graph Theory To Build on Andrew's Solution</b> </p>
Andrew's solution solves for the Euler circuit; we are looking for an Euler Path. An Euler Path is a path that has the added condition of using <b>every edge</b> of the graph <b>exactly once.</b> The difference is that an Euler Path doesn’t have to end where it began. There is a different theorem on Euler Paths that will guide the modification: <br />
<p style='text-align:center'> A connected graph $G$ has an Euler Path that starts and ends at different vertices if and only if it has <b>exactly two odd vertices.</b></p>
Essentially, odd-degree nodes are dead-ends. There is going to come a time when you reach the node and there are no more unused edges left to leave on. In an Euler Path these dead ends serve as the starting and ending nodes.
<p style='text-align:right'> <b>2.2.A Understanding The Steps</b> </p>
The difference in our problem is that all but two of the odd-degree nodes must become even. Doing that was simple, all the odd nodes were found and two were removed from the list to be conserved. From there all of Andrew’s steps were the same, except the function used from NetworkX 2.5 was the <code>eulerian_path</code> function. The two conserved odd-degree nodes act as the starting point and the ending point of the path. Naturally, the question then became, which two odd-degree nodes do we conserve. Choosing where to start and where to end is part of the difficulty of the Subway Challenge.
The only start and end pair known is Matthew Ahn's pair and there is no guarantee that it is optimal. Therefore, every odd-degree node could be a potential start node and a potential end node and thus there are $\dbinom{O}{2}$ configurations to check, where $O$ is the number of odd-degree nodes. For every configuration, both odd-degree nodes are conserved and then the path is returned for that configuration. Using the same follow-along graph from <i>Figure 3</i>, the $\dbinom{4}{2} = 6$ start-end configurations (A-E, A-F, A-G, E-F, E-G, and F-G) are shown below.
<figure style='text-align:center'>
<img src="https://raw.githubusercontent.com/Williamdst/Capstone-2/main/Images/0006.Network-Path-Configs.png" style='max-width:60%'/>
<figcaption style='text-align:center'> Figure 7. All Possible Start-End Euler Paths w/ Augmented Edges</figcaption>
</figure>
<h2> 3. Modeling the MTA Subway System </h2>
<div>
<p>
The bulk of the work is translating the map into nodes and edges, saving them as CSV files that the program can understand. Referring to <i>Figures 1&2</i>, not every station needs to be modeled, only the stations where a choice must be made. Of the 472 stations in the system there are only 79 decision stations. The lines on the night map are grouped into colors:
</p>
</div>
<img align='right' width='500' style="float:right;" src="https://raw.githubusercontent.com/Williamdst/Capstone-2/main/Images/0007.MTA-Night.jpg" />
<div style="text-align:center;margin:0;font-size:12px;color:#c1121f" align='center'>
<table>
<tr>
<th>Red Lines</th>
<td>1</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<th>Green Lines</th>
<td>4</td>
<td>5</td>
<td>6</td>
</tr>
<tr>
<th>Purple Line</th>
<td>7</td>
</tr>
<tr>
<th>Blue Lines</th>
<td>A</td>
<td>E</td>
</tr>
<tr>
<th>Orange Lines</th>
<td>D</td>
<td>F</td>
<td>M</td>
</tr>
<tr>
<th>L.Green Line</th>
<td>G</td>
</tr>
<tr>
<th>Brown Line</th>
<td>J</td>
</tr>
<tr>
<th>Grey Lines</th>
<td>L</td>
<td>S</td>
</tr>
<tr>
<th>Yellow Lines</th>
<td>N</td>
<td>Q</td>
<td>R</td>
</tr>
</table>
</div>
<p style='text-align:right'> <b> 3.0.A Modeling the Nodes (Stations)</b> </p>
For every decision station on a line, the Station ID, Station Name, Borough, and Line were documented. Additionally, each station on a line was given a "node-number" (there are stations that have multiple node numbers). For example, look at South Ferry Station (<i>Red Line - Bottom Middle</i>) and Canal St on the Blue Line (<i>Middle-Left</i>). South Ferry is the first stop on the 1 line and Canal St is the 10th station on the A line as well as the 9th station on the E line. Their values in the CSV were:
<table>
<tr>
<th>stationID</th>
<th>stopName</th>
<th>borough</th>
<th>lines</th>
<th>nodes</th>
</tr>
<tr>
<td>330</td>
<td>South Ferry</td>
<td>Manhattan</td>
<td>X1</td>
<td>1001</td>
</tr>
<tr>
<td>169</td>
<td>Canal St</td>
<td>Manhattan</td>
<td>A:E</td>
<td>A010:E009</td>
</tr>
</table>
The A and E train stop at Canal St, so both the "lines" and "nodes" column have more than one value, separated by a colon. The colon was used as a separator so that the values could be read independently when loaded into Neo4j (<code>Load-Neo4j-Cypher-Query.sql</code>). Look at the more complex station, W 4 St-Wash Sq (<i>Blue/Orange Line - Upper Left</i>) where four trains stop at this station: A, D, E, and F train. As before, in the "lines" column and the "nodes" column, every train and their node number were documented:
<table>
<tr>
<th>stationID</th>
<th>stopName</th>
<th>borough</th>
<th>lines</th>
<th>nodes</th>
</tr>
<tr>
<td>167</td>
<td>W 4 St-Wash Sq</td>
<td>Manhattan</td>
<td>A:D:E:F</td>
<td>A011:D005:E008:F008</td>
</tr>
</table>
On the blue line Canal St was A010 and W 4 St-Wash Sq was A011, but what happened to Spring St? Spring St isn't a decision station because if you were traveling from Canal St to W 4 St you wouldn't have a choice but to stop at Spring St.
<p style='text-align:right'> <b> 3.0.B Modeling the Edges (Routes)</b> </p>
Modeling the edges was similar to modeling the stations. When modeling stations, each row is a single station and the properties of that station. When modeling edges, each row is a single edge and the properties of that edge. Edges are defined by the two nodes it is connected to, so the first thing needed are the Start Station ID and the Stop Station ID. The three other properties were the routes (same idea as the "lines" column), the nodes (the node numbers), and the distance. In this case, the distance was the <b>time</b> it takes to traverse the edge, or in other words, the time to go from one station to the next. The edge that connects Canal St to W 4 St-Wash Sq is shown below:
<table>
<tr>
<th>startID</th>
<th>stopID</th>
<th>startNode</th>
<th>stopNode</th>
<th>routes</th>
<th>distance</th>
</tr>
<tr>
<td>169</td>
<td>167</td>
<td>A010:E009</td>
<td>A011:E008</td>
<td>A:E</td>
<td>4</td>
</tr>
</table>
Although you can traverse this edge on either the A or E train, it is important that this edge is <b>not</b> duplicated in the edge list. If the edge is duplicated, then the program will read it as two separate edges and will solve the problem under the impression that it must traverse the edge twice. After removing all the duplicates there were 104 edges modeled. <br></br>
Looking at Fulton St <i>(Bottom Right)</i>, there is a single name for all four dots because Fulton St is a station complex, however when it comes to the challenge Fulton St counts as four different stations. This idea may be obvious with Fulton St, but there are other intersections that look like a single station but count as multiple stations in the Challenge. The official 472 stations recognized by the MTA can be found in the <code>Stations-Official-472.csv</code> file. <br />
The black lines connecting the dots are free subway transfers which are paths, not in the graph theory sense, that allow riders to directly walk between two stations. For example, you are on the A train and you get off at Fulton St, you can then walk over to Fulton St on the 3 train. I'm sure these subway transfers are extremely useful when solving the challenge, however they can't be used to model the network at this time. Why? The subway transfers are optional, not a requirement like the other edges. If those transfers were added to the graph, then the program will solve the problem under the impression that it must traverse the edge.
<h2> 4. The Routes </h2>
Of the 79 stations, there were 58 odd-degree nodes, resulting in $\dbinom{58}{2} = 1653$ start-end configurations. To store all of the configurations and their stats, a simple SQLite database was integrated into the program.
<figure style='text-align:center'>
<img src="./Images/0017.Route-ERD.png" align='center', style="max-width:40%">
<figcaption> </figcaption>
</figure>
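As a rough, hedged illustration (using NetworkX on a toy graph rather than the project's real edge list), the odd-degree nodes that drive the pairing count above can be found like this:
```
import networkx as nx
from math import comb

# Toy graph standing in for the 79-station decision network.
G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)])

odd_nodes = [n for n, d in G.degree() if d % 2 == 1]
print("odd-degree nodes:", odd_nodes)

# Number of start-end configurations; comb(58, 2) == 1653 for the real network.
print("configurations:", comb(len(odd_nodes), 2))
```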
If you never had to double back and could teleport to whatever station you needed to, the time it would take to traverse each of the 104 edges exactly one time would be 14.75 hours (884 minutes). The rest of the time is spent going back over edges you have already traveled; in Matthew Ahn's case that was nearly 7 hours. The columns used to pick a route are <code>distance_walked</code> and <code>distance_doublebacked</code>. The reason <code>edges_walked</code> isn't a major concern is that it matters <b>what</b> edge you had to double back over: you can't claim that a route with 150 edges_walked is better than a 151-edge route, because that one extra edge may be the worst edge in the network.
The node that appeared in 8 of the top 10 routes, either as the start or the end station, was 416 Wakefield-241 St (the last stop of the 2 train). What's more interesting is that all the nodes paired with it were also extreme stations, meaning they were at the end of a line. More than that, those extremes were aggressively extreme: not only were they at the end of a line, they were at the end of lines with no transfer opportunities that took over 15 minutes to reach. The route Matthew took started and ended at two very aggressive extremes, and the path containing those two extremes took 21.06 hours (the 37th-ranked route).
<p style='text-align:right'> <b> 4.0.A The "Best" Routes</b> </p>
As was stated before, picking out the best route isn't as straightforward as querying the database, finding the path with minimal distance, and following the directions. Remember, the program doesn't understand the cost of excessive transfers, that some transfers provide shortcuts, or that the network topology isn't static. The one major insight that can be used to filter routes is that aggressively extreme stations are where you want to start and where you want to end, which leaves only about 10 choices (45 configurations). The steps for the best routes aren't listed in this report because each route has over 145 steps, but the <code>Describe-Route.sql</code> file in the repository contains the query to list out all the steps for any path. The properties of the most interesting paths are shown in the table below:
<table>
<tr>
<th></th>
<th> Start Station </th>
<th> Stop Station </th>
<th> Time (Hrs) </th>
<th> Route Rank </th>
</tr>
<tr>
<th> Gold Route </th>
<td>Wakefield-241 St <i>(2-Train)</i></td>
<td>Woodlawn <i>(4-Train)</i></td>
<td>20.65</td>
<td>1</td>
</tr>
<tr>
<th> Silver Route </th>
<td>Wakefield-241 St <i>(2-Train)</i></td>
<td>Norwood-205 St <i>(D-Train)</i></td>
<td>20.66</td>
<td>2</td>
</tr>
<tr>
<th> Bronze Route </th>
<td>Wakefield-241 St <i>(2-Train)</i></td>
<td>Pelham Bay Park <i>(6-Train)</i></td>
<td>20.7</td>
<td>3</td>
</tr>
<tr>
<th> The Worst Route </th>
<td>Sutphin Blvd-Archer Av-JFK Airport <i>(E-Train)</i></td>
<td>Coney Island-Stillwell Av <i>(D-Train)</i></td>
<td>22.35</td>
<td>1653</td>
</tr>
<tr>
<th> Matthew Ahn's Route </th>
<td>Far Rockaway-Mott Av <i>(A-Train)</i></td>
<td>Flushing-Main St <i>(7-Train)</i></td>
<td>21.06</td>
<td>37</td>
</tr>
<tr>
<th> My Most Convenient Starting Route </th>
<td>Far Rockaway-Mott Av <i>(A-Train)</i></td>
<td>Norwood-205 St <i>(D-Train)</i></td>
<td>20.95</td>
<td>16</td>
</tr>
<tr>
<th> My Most Convenient Ending Route </th>
<td>Wakefield-241 St <i>(2-Train)</i></td>
<td>Far Rockaway-Mott Av <i>(A-Train)</i></td>
<td>20.75</td>
<td>4</td>
</tr>
<tr>
<th> My Most Convenient Route Overall </th>
<td>Rockaway Park-Beach 116 St <i>(A-Train)</i></td>
<td>Far Rockaway-Mott Av <i>(A-Train)</i></td>
<td>21.63</td>
<td>606</td>
</tr>
</table>
<div style="line-height:11px">
<p style="text-align:right;font-style:italic;color:#c1121f"> <b> Data Science = Solving Problems = Happiness </b> </p>
<p style="text-align:right;"> <b> Denzel S. Williams </b> </p>
</div>
<hr>
<h3> A1. Project Improvements & Extensions </h3>
<b>Subway Transfers & Running Edges </b> <br />
In future installments of the project, I would like to incorporate those subway transfers into the solution. Additionally, part of Matthew Ahn's record involved running between stations that aren't connected because that was the fastest way to get there. Using that idea, "Running Transfers" could be artificially added to the network. These running transfers would be especially useful in the Bronx.
<b>Solve the Full Problem</b> <br />
This project only focused on the Late-Night Subway Map, and although the order of stations might be transferable, the edges are not. There are routes that don't serve certain stations at certain times of day, and there are express lines that can be used when doubling back. Solving the full problem may require an entirely new solution method because there is a mix of optional edges and required edges.
<b>Wait Times & Time Varying Networks</b> <br />
To arrive at a truly complete solution, the program would need to understand how the network changes over time: not only how the edge set changes between express and local service, but also how long the wait for the next train will be at a decision point, which itself changes over time.
<h3>A2. The Graphs of the Lines in Neo4j</h3>
<figure style='text-align:center'>
<img src="./Images/0008.Red.png" style="max-width:40%" />
<figcaption style='text-align:center'> Figure A1. The Red Lines</figcaption>
</figure>
<figure style='text-align:center'>
<img src="./Images/0009.Green.png" style="max-width:40%" />
<figcaption style='text-align:center'> Figure A2. The Green Lines</figcaption>
</figure>
<figure style='text-align:center'>
<img src="./Images/0010.Purple.png" style="max-width:60%" />
<figcaption style='text-align:center'> Figure A3. The Purple Lines</figcaption>
</figure>
<figure style='text-align:center'>
<img src="./Images/0011.Blue.png" style="max-width:60%" />
<figcaption style='text-align:center'> Figure A4. The Blue Lines</figcaption>
</figure>
<figure style='text-align:center'>
<img src="./Images/0012.Orange.png" style="max-width:60%" />
<figcaption style='text-align:center'> Figure A5. The Orange Lines</figcaption>
</figure>
<figure style='text-align:center'>
<img src="./Images/0013.L.Green.png" style="max-width:40%" />
<figcaption style='text-align:center'> Figure A6. The L.Green Lines</figcaption>
</figure>
<figure style='text-align:center'>
<img src="./Images/0014.Brown.png" style="max-width:60%" />
<figcaption style='text-align:center'> Figure A7. The Brown Lines</figcaption>
</figure>
<figure style='text-align:center'>
<img src="./Images/0015.Grey.png" style="max-width:40%" />
<figcaption style='text-align:center'> Figure A8. The Grey Lines</figcaption>
</figure>
<figure style='text-align:center'>
<img src="./Images/0016.Yellow.png" style="max-width:60%" />
<figcaption style='text-align:center'> Figure A9. The Yellow Lines</figcaption>
</figure>
<h3>A3. References </h3>
<ol style="margin: 10px 0;">
<li> “Subway Challenge.” Wikipedia, Wikimedia Foundation, 3 Mar. 2021, <a href="en.wikipedia.org/wiki/Subway_Challenge#Guinness_Record_times"> en.wikipedia.org/wiki/Subway_Challenge#Guinness_Record_times </a>. </li>
<li>Snowden, Scott. “Solo Straphanger Sets New, All-Station Subway World Record.” Time Out New York, Time Out, 6 Sept. 2016, <a href="www.timeout.com/newyork/blog/solo-straphanger-sets-new-all-station-subway-world-record-090616"> www.timeout.com/newyork/blog/solo-straphanger-sets-new-all-station-subway-world-record-090616 </a>. </li>
<li>"Intro to Graph Optimization with NetworkX in Python." DataCamp Community, <a href="www.datacamp.com/community/tutorials/networkx-python-graph-tutorial"> www.datacamp.com/community/tutorials/networkx-python-graph-tutorial</a>. </li>
<li>Brooks, Andrew. “Intro to Graph Optimization: Solving the Chinese Postman Problem.” Andrew Brooks, 7 Oct. 2017, <a href="brooksandrew.github.io/simpleblog/articles/intro-to-graph-optimization-solving-cpp/"> brooksandrew.github.io/simpleblog/articles/intro-to-graph-optimization-solving-cpp/ </a>. </li>
</ol>
# 5.3 Lab: Cross-Validation and the Bootstrap
## 5.3.1 The Validation Set Approach
```
import numpy as np
import matplotlib.pyplot as plt
import scipy
import pandas as pd
import math
import random
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.graphics.regressionplots import *
from sklearn import datasets, linear_model
Auto = pd.read_csv('data/Auto.csv', header=0, na_values='?')
Auto = Auto.dropna().reset_index(drop=True) # drop the observation with NA values and reindex the obs from 0
Auto.shape
```
### Python and R use different random number generators, so we may see slightly different results in this chapter
```
np.random.seed(1)
train = np.random.choice(Auto.shape[0], 196, replace=False)
select = np.in1d(range(Auto.shape[0]), train)
lm = smf.ols('mpg~horsepower', data=Auto[select]).fit()
print(lm.summary())
preds = lm.predict(Auto)
square_error = (Auto['mpg'] - preds)**2
print('--------Test Error for 1st order--------')
print(np.mean(square_error[~select]))
lm2 = smf.ols('mpg~horsepower + I(horsepower ** 2.0)', data=Auto[select]).fit()
preds = lm2.predict(Auto)
square_error = (Auto['mpg'] - preds)**2
print('--------Test Error for 2nd order--------')
print(square_error[~select].mean())
lm3 = smf.ols('mpg~horsepower + I(horsepower ** 2.0) + I(horsepower ** 3.0)', data=Auto[select]).fit()
preds = lm3.predict(Auto)
square_error = (Auto['mpg'] - preds)**2
print('--------Test Error for 3rd order--------')
print(np.mean(square_error[~select]))
```
### These results are consistent with our previous findings: a model that predicts mpg using a quadratic function of horsepower performs better than a model that involves only a linear function of horsepower, and there is little evidence in favor of a model that uses a cubic function of horsepower.
### If we look at the summary for the 3rd order regression, the coefficient of the 3rd order term is not statistically significant. I will use this as supporting evidence for the above claim.
```
print(lm3.summary())
```
## 5.3.2 Leave-One-Out Cross-Validation
### OLS Fit
```
ols_fit = smf.ols('mpg~horsepower', data=Auto).fit()
print(ols_fit.params)
```
### GLM fit. Compared with the OLS fit, the coefficients are the same
```
glm_fit = sm.GLM.from_formula('mpg~horsepower', data=Auto).fit()
print(glm_fit.params)
```
### Doing CV in Python is not as straightforward as in R; it requires some manual coding.
### To use the CV functions already implemented in Python, we use scikit-learn for the linear model
```
from sklearn.model_selection import KFold, cross_val_score
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
x = pd.DataFrame(Auto.horsepower)
y = Auto.mpg
model = LinearRegression()
model.fit(x, y)
print(model.intercept_)
print(model.coef_)
k_fold = KFold(n_splits=x.shape[0])  # LOO uses as many folds as observations
test = cross_val_score(model, x, y, cv=k_fold, scoring='neg_mean_squared_error', n_jobs=-1)
print(np.mean(-test))
```
### For higher-order polynomial fits, we use the pipeline tool. Below we fit polynomials of order 1 to 5 and show the LOO results
```
A = []
for porder in range(1, 6):
    model = Pipeline([('poly', PolynomialFeatures(degree=porder)), ('linear', LinearRegression())])
    k_fold = KFold(n_splits=x.shape[0])  # LOO uses as many folds as observations
    test = cross_val_score(model, x, y, cv=k_fold, scoring='neg_mean_squared_error', n_jobs=-1)
    A.append(np.mean(-test))
print(A)
```
## 5.3.3 k-Fold Cross-Validation
### K-fold validation is exactly the same as LOO with a different n_splits parameter. The computation time is much shorter than that of LOOCV.
```
np.random.seed(2)
A = []
for porder in range(1, 11):
    model = Pipeline([('poly', PolynomialFeatures(degree=porder)), ('linear', LinearRegression())])
    k_fold = KFold(n_splits=10)
    test = cross_val_score(model, x, y, cv=k_fold, scoring='neg_mean_squared_error', n_jobs=-1)
    A.append(np.mean(-test))
print(A)
```
### We still see little evidence that using cubic or higher-order polynomial terms leads to lower test error than simply using a quadratic fit.
## 5.3.4 The Bootstrap
### Bootstrap means sampling with replacement. To eliminate the effect of sample size, the normal practice is to draw samples of the same size as the original dataset, with replacement.
```
Portfolio = pd.read_csv('data/Portfolio.csv', header=0)
```
### To illustrate the use of the bootstrap on this data, we must first create a function, alpha_fn(), which takes as input the (X, Y) data as well as a vector indicating which observations should be used to estimate alpha.
```
def alpha_fn(data, index):
X = data.X[index]
Y = data.Y[index]
return (np.var(Y) - np.cov(X,Y)[0,1])/(np.var(X) + np.var(Y) - 2 * np.cov(X, Y)[0,1])
alpha_fn(Portfolio, range(0, 100))
```
### Generate one set of random indices with 100 elements. The array has been sorted to show that there are repeated elements.
```
np.sort(np.random.choice(range(0, 100), size=100, replace=True))
```
### Call the previous function with a random set of inputs.
```
alpha_fn(Portfolio, np.random.choice(range(0, 100), size=100, replace=True))
```
### Since I am not aware of a Python function similar to R's boot(), I just define an ad hoc function called boot_python()
```
def boot_python(data, input_fun, iteration):
    n = data.shape[0]  # use the data passed in, not the global Portfolio
    idx = np.random.randint(0, n, (iteration, n))
    stat = np.zeros(iteration)
    for i in range(len(idx)):
        stat[i] = input_fun(data, idx[i])
    return {'Mean': np.mean(stat), 'STD': np.std(stat)}
boot_python(Portfolio, alpha_fn, 1000)
```
### A similar idea (the bootstrap) can be used in many other places, such as estimating the accuracy of linear regression coefficients, conducting non-parametric tests (permutation tests), or estimating complicated probabilities.
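As a hedged add-on sketch (not part of the original lab), the same `boot_python()` helper could also be used to estimate the variability of a regression coefficient on the `Auto` data; `coef_fn` below is a hypothetical helper written only for this illustration and relies on `Auto`, `smf`, and `boot_python` defined earlier in this notebook:
```
def coef_fn(data, index):
    # Fit mpg ~ horsepower on the bootstrap sample and return the slope estimate.
    fit = smf.ols('mpg~horsepower', data=data.iloc[index]).fit()
    return fit.params['horsepower']

# Roughly mirrors R's boot(Auto, boot.fn, 1000) from the ISLR text.
boot_python(Auto, coef_fn, 1000)
```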
```
#импорт библиотек
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import plotly as py
import plotly.express as px
import plotly.graph_objects as go
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
data = pd.read_csv('HR-Employee-Attrition.csv')
hr = data.copy()
hr.head()
hr.info()
```
Parameters:
- Age - age
- Attrition - exhaustion/burnout (the target variable)
- BusinessTravel - business travel
- DailyRate - daily rate
- Department - department
- Distance From Home - distance from home
- Education - education
- Education Field - field of education
- Employee Count - employee count
- EmployeeNumber - employee number
- EnvironmentSatisfaction - environment satisfaction
- Gender - gender
- HourlyRate - hourly rate
- JobInvolvement - job involvement
- JobLevel - job level
- JobRole - job role
- JobSatisfaction - job satisfaction
- MaritalStatus - marital status
- MonthlyIncome - monthly income
- MonthlyRate - monthly rate
- NumCompaniesWorked - number of companies worked at
- Over18 - over 18
- OverTime - overtime
- PercentSalaryHike - percent salary hike
- PerformanceRating - performance rating
- RelationshipSatisfaction - relationship satisfaction
- StandardHours - standard hours
- Stock Option Level - stock option level
- TotalWorkingYears - total working years
- Training Times Last Year - training times last year
- Work Life Balance - work-life balance
- YearsAtCompany - years at the company
- YearsInCurrentRole - years in current role
- YearsSinceLastPromotion - years since last promotion
- Years With Curr Manager - years with current manager
```
# look at the values of the target variable
hr['Attrition'].value_counts()
fig = go.Figure()
fig.add_trace(go.Pie(labels=hr['Attrition'], values=hr['Attrition'].value_counts()))
fig.update_layout(autosize=False, width=300, height=300)
fig.show()
```
There are fewer records for employees with exhaustion/burnout (attrition) than for employees without it.
```
# check for missing values
hr.isnull().sum().sum()
```
There are no missing values.
```
plt.figure(figsize=(14,8))
sns.heatmap(hr.corr(),vmax=0.8,linewidth=0.1)
plt.show()
```
- TotalWorkingYears has a positive relationship with JobLevel and MonthlyIncome.
- YearsAtCompany has a positive relationship with YearsInCurrentRole and YearsWithCurrentManager.
<b>Categorical features</b>
```
cat_params = hr.select_dtypes(include=['object']).columns  # np.object is deprecated in recent NumPy
print(cat_params)
hr[cat_params].head()
```
Let's check the relationships between the features.
```
hr['BusinessTravel'].value_counts()
plt.figure(figsize=(8,5))
sns.countplot(x='BusinessTravel',hue='Attrition', data=hr)
plt.title("Attrition and BusinessTravel")
plt.show()
```
Most employees rarely travel for work.
```
hr['Department'].value_counts()
plt.figure(figsize=(8,5))
sns.countplot(x='Department',hue='Attrition', data=hr)
plt.title("Attrition and Department")
plt.show()
hr['EducationField'].value_counts()
plt.figure(figsize=(8,5))
sns.countplot(x='EducationField',hue='Attrition', data=hr)
plt.title("Attrition and Education Field")
plt.xticks(rotation=45)
plt.show()
hr['Gender'].value_counts()
plt.figure(figsize=(8,5))
sns.countplot(x='Gender',hue='Attrition', data=hr)
plt.title("Gender and Attrition")
plt.legend(loc='best')
plt.show()
```
Men suffer from exhaustion/burnout more often than women.
```
hr['JobRole'].nunique()
plt.figure(figsize=(8,5))
sns.countplot(x='JobRole',hue='Attrition', data=hr)
plt.title("JobRole and Attrition")
plt.legend(loc='best')
plt.xticks(rotation=45)
plt.show()
```
Sales representatives, laboratory technicians, and sales executives leave most often.
```
hr['OverTime'].value_counts()
plt.figure(figsize=(8,5))
sns.countplot(x='OverTime',hue='Attrition', data=hr)
plt.title("OverTime and Attrition")
plt.legend(loc='best')
plt.show()
```
Overtime work increases the likelihood of fatigue/burnout.
```
# look at the gender of those who work overtime
pd.crosstab(hr['OverTime'], hr['Gender'])
```
Men work overtime more often.
<b>Numerical features</b>
```
num_params = [feature for feature in hr.columns if hr[feature].dtype != 'O']
print(len(num_params))
hr[num_params].head()
sns.distplot(hr['Age'],hist=False)
plt.show()
hr['Age'].nunique()
```
Age is roughly normally distributed; most employees are between 25 and 40 years old.
We have some numeric columns that are already encoded for us; these are ordinal labels.
```
ordinal_params = ['Education','EnvironmentSatisfaction','JobInvolvement','JobSatisfaction',
'PerformanceRating','RelationshipSatisfaction','WorkLifeBalance']
hr[ordinal_params].head()
hr['Education'].value_counts()
```
Bachelors and Masters are the most common education levels.
```
edu_map = {1 :'Below College', 2: 'College', 3 :'Bachelor', 4 :'Master', 5: 'Doctor'}
plt.figure(figsize=(8,5))
sns.countplot(x=hr['Education'].map(edu_map), hue='Attrition', data=hr)
plt.title("Education and Attrition")
plt.show()
hr['EnvironmentSatisfaction'].value_counts()
es_map = {1 :'Low', 2: 'Medium', 3 :'High', 4 :'Very High'}
plt.figure(figsize=(8,5))
sns.countplot(x=hr['EnvironmentSatisfaction'].map(es_map), hue='Attrition', data=hr)  # use es_map, not edu_map
plt.title("Environment Satisfaction and Attrition")
plt.show()
hr['JobInvolvement'].value_counts()
ji_map = {1 :'Low', 2: 'Medium', 3 :'High', 4 :'Very High'}
plt.figure(figsize=(8,5))
sns.countplot(x=hr['JobInvolvement'].map(ji_map), hue='Attrition', data=hr)
plt.title("Job Involvement and Attrition")
plt.show()
```
No strong correlations are observed.
```
hr['JobLevel'].value_counts()
sns.countplot(x='JobLevel',hue='Attrition',data=hr)
plt.show()
num_params = [feature for feature in hr.columns if hr[feature].dtype != 'O' and feature not in ordinal_params]
print(len(num_params))
hr[num_params].head()
sns.distplot(hr['MonthlyIncome'],hist=False)
plt.show()
plt.figure(figsize=(8,5))
sns.boxplot(hr['MonthlyIncome'])
plt.show()
# relationship between age and income
trace = go.Scatter(x=hr['Age'],y=hr['MonthlyIncome'], mode="markers",
marker=dict(size = 8), line=dict(shape='spline'))
data=[trace]
layout = {"title":"Monthly Income and Age",
"xaxis":{"title":"Age"},
"yaxis":{"title":"MonthlyIncome"}
}
iplot({"data":data, "layout":layout})
```
Income increases with age.
```
hr['NumCompaniesWorked'].value_counts()
sns.countplot(x='NumCompaniesWorked',hue='Attrition',data=hr)
plt.show()
hr['StockOptionLevel'].value_counts()
sns.countplot(x='StockOptionLevel',hue='Attrition',data=hr)
plt.show()
# drop columns that do not correlate with the target
hr.drop(['EmployeeCount','EmployeeNumber','StandardHours'],axis=1, inplace=True)
hr[cat_params].head()
# replace Yes/No with 1 and 0
hr['Attrition'] = hr['Attrition'].replace({'No':0,'Yes':1})
hr['OverTime'] = hr['OverTime'].map({'No':0,'Yes':1})
hr['Gender'] = hr['Gender'].map({'Male':0,'Female':1})
# encode the remaining categorical columns
cat_cols = ['BusinessTravel','Department','EducationField','JobRole','MaritalStatus']
for col in cat_cols:
map_dict = {k:i for i, k in enumerate(hr[col].value_counts().index,0)}
hr[col] = hr[col].map(map_dict)
# the Over18 column is not needed
hr.drop('Over18',axis=1,inplace=True)
# look at the correlation with the target
hr.corr()['Attrition'][:-1].sort_values(ascending=False)
plt.figure(figsize=(13,9))
sns.heatmap(hr.corr(),vmax=0.8,linewidth=0.1)
plt.show()
x = hr.drop('Attrition',axis=1)
y = hr['Attrition']
from sklearn.ensemble import ExtraTreesClassifier
extra_tree = ExtraTreesClassifier()
extra_tree.fit(x,y)
feat_importance = extra_tree.feature_importances_
plt.figure(figsize=(11,9))
feat_imp = pd.Series(extra_tree.feature_importances_, index=x.columns)
feat_imp.nlargest(20).plot(kind='barh')
plt.show()
```
The factors with the most influence on attrition are:
- Overtime
- Age
- Salary
- Satisfaction
- Distance from home, etc.
```
# scale the features
from sklearn.preprocessing import MinMaxScaler
min_max = MinMaxScaler()
x_scaled = min_max.fit(x).transform(x)
ExtraTree = ExtraTreesClassifier()
ExtraTree.fit(x_scaled, y)
feature_importance = pd.Series(ExtraTree.feature_importances_, index=x.columns)
feature_importance
plt.figure(figsize=(10,9))
feature_importance.nlargest(20).plot(kind='barh')
plt.show()
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, classification_report
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.naive_bayes import GaussianNB
```
To build the models, we split the data into train and test sets.
```
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.25,random_state=42)
print("training shape: ",x_train.shape)
print("testing shape: ",x_test.shape)
```
Let's try logistic regression, support vector machines (SVM), k-nearest neighbors, a decision tree, a random forest, linear discriminant analysis, and naive Bayes.
```
log_clf = LogisticRegression(max_iter=10000)
svc_clf = SVC()
knn_clf = KNeighborsClassifier()
dt_clf = DecisionTreeClassifier()
rf_clf = RandomForestClassifier()
lda_clf = LDA(n_components=1)
gnb_clf = GaussianNB()
for clf in [log_clf, svc_clf, knn_clf, dt_clf, rf_clf, lda_clf, gnb_clf]:
clf.fit(x_train, y_train)
pred = clf.predict(x_test)
print(clf.__class__.__name__, " ", accuracy_score(y_test,pred))
```
Logistic regression performed best, but the scores are very close, so let's build an ensemble of models.
```
from sklearn.ensemble import VotingClassifier
voting_clf =VotingClassifier([('lgclf',log_clf),('svc',svc_clf),('knn',knn_clf),('dt',dt_clf),('rf',rf_clf),('lda',lda_clf),('gnb',gnb_clf)])
voting_clf.fit(x_train,y_train)
y_pred = voting_clf.predict(x_test)
print("acuracy: ",accuracy_score(y_test,y_pred))
#попробуем тоже самое с масштабированными данными
x_train_scaled,x_test_scaled,y_train_scaled,y_test_scaled = train_test_split(x_scaled,y,test_size=0.25,random_state=42)
print("training shape: ",x_train_scaled.shape)
print("testing shape: ",x_test_scaled.shape)
for clf in [log_clf, svc_clf, knn_clf, dt_clf, rf_clf, lda_clf, gnb_clf]:
clf.fit(x_train_scaled, y_train_scaled)
pred = clf.predict(x_test_scaled)
print(clf.__class__.__name__, " ", accuracy_score(y_test,pred))
```
The results improved; let's try the ensemble again.
```
voting_clf =VotingClassifier([('lgclf',log_clf),('svc',svc_clf),('knn',knn_clf),('dt',dt_clf),('rf',rf_clf),('lda',lda_clf),('gnb',gnb_clf)])
voting_clf.fit(x_train_scaled,y_train_scaled)
y_pred = voting_clf.predict(x_test_scaled)
print("acuracy: ",accuracy_score(y_test_scaled,y_pred))
```
Now let's try more "advanced" algorithms: AdaBoost and XGBoost.
```
from sklearn.ensemble import AdaBoostClassifier
from xgboost import XGBClassifier
boost = AdaBoostClassifier(base_estimator = DecisionTreeClassifier(max_depth=1), n_estimators=500, algorithm='SAMME',learning_rate=0.01)
boost.fit(x_train_scaled,y_train_scaled)
predictions = boost.predict(x_test_scaled)
print("accuracy:",accuracy_score(y_test,predictions))
print("training accuracy:",boost.score(x_train_scaled,y_train_scaled))
print("testing accuracy:",boost.score(x_test_scaled,y_test_scaled))
xgb = XGBClassifier()
xgb.fit(x_train_scaled, y_train_scaled)
prediction = xgb.predict(x_test_scaled)
print("accuracy: ",accuracy_score(y_test,prediction))
```
The result is slightly worse than logistic regression.
<b>Summary:</b>
- Analyzed the correlations in the data and identified the features that most influence attrition
- Applied classical machine learning algorithms
- Used feature scaling
- Used model ensembles
- Used more advanced algorithm implementations
The best result came from logistic regression on the scaled data: 0.905
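As a small follow-up sketch (not part of the original analysis), the `classification_report` that was imported above but never used could be applied to the scaled logistic regression, since per-class precision and recall matter for an imbalanced target:
```
from sklearn.metrics import classification_report

log_clf.fit(x_train_scaled, y_train_scaled)
y_pred_lr = log_clf.predict(x_test_scaled)
print(classification_report(y_test_scaled, y_pred_lr, target_names=['No attrition', 'Attrition']))
```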
```
import numpy as np
import roboticstoolbox as rtb
from spatialmath import *
from math import pi
import matplotlib.pyplot as plt
from matplotlib import cm
np.set_printoptions(linewidth=100, formatter={'float': lambda x: f"{x:8.4g}" if abs(x) > 1e-10 else f"{0:8.4g}"})
%matplotlib notebook
```
The Toolbox supports models defined using a number of different conventions. We will load a very classical model, a Puma560 robot defined in terms of standard Denavit-Hartenberg parameters
```
p560 = rtb.models.DH.Puma560()
```
Now we can display the simple Denavit-Hartenberg parameter model
```
print(p560)
```
The first table shows the kinematic parameters, and from the column titles we can see clearly that this is expressed in terms of standard Denavit-Hartenberg parameters. The first column shows that the joint variables qi are rotations since they are in the θ column. Joint limits are also shown. A joint flip (motion in the opposite sense) would be indicated by the joint variable being shown as, for example, `-q3`, and a joint offset would be shown as, for example, `q2 + 45°`.
The second table shows some named joint configurations. For example `p560.qr` is
```
p560.qr
```
If the robot had a base or tool transform they would be listed in this table also.
This object is a subclass of `DHRobot`, equivalent to the `SerialLink` class in the MATLAB version of the Toolbox.
This class has many methods and attributes, and we will explore some of them in this notebook.
We can easily display the robot graphically
```
p560.plot(p560.qn);
```
where `qn` is one of the named configurations shown above, and has the robot positioned to work above a table top. You can use the mouse to rotate the plot and view the robot from different directions. The grey line is the _shadow_ which is a projection of the robot onto the xy-plane.
In this particular case the end-effector pose is given by the forward kinematics
```
p560.fkine(p560.qn)
```
which is a 4x4 SE(3) matrix displayed in a color-coded way with the rotation matrix in red, the translation vector in blue, and constant elements in grey. This is an instance of an `SE3` object, which safely encapsulates the SE(3) matrix. This class, and related ones, are implemented by the [Spatial Math Toolbox for Python](https://github.com/petercorke/spatialmath-python).
You can verify the end-effector position: the blue numbers are, from top to bottom, the x-, y- and z-coordinates of the end-effector position, and they match the plot shown above.
We can manually adjust the joint angles of this robot (click and drag the sliders) to see how the shape of the robot changes and how the end-effector pose changes
```
# p560.teach(); # works from console, hangs in Jupyter
```
An important problem in robotics is _inverse kinematics_, determining the joint angles to put the robot's end effector at a particular pose.
Suppose we want the end-effector to be at position (0.5, 0.2, 0.5) and to have its gripper pointing (its _approach vector_) in the x-direction, and its fingers one above the other so that its _orientation vector_ is parallel to the z-axis.
We can specify that pose by composing two SE(3) matrices:
1. a pure translation
2. a pure rotation defined in terms of the orientation and approach vectors
```
T = SE3(0.5, 0.2, 0.5) * SE3.OA([0,0,1], [1,0,0])
T
```
Now we can compute the joint angles that results in this pose
```
sol = p560.ikine_LM(T)
```
which returns the joint coordinates as well as solution status
```
sol
```
indicating, in this case, that there is no failure. The joint coordinates are
```
sol.q
```
and we can confirm that this is indeed an inverse kinematic solution by computing the forward kinematics
```
p560.fkine(sol.q)
```
which matches the original transform.
A simple trajectory between two joint configuration is
```
qt = rtb.tools.trajectory.jtraj(p560.qz, sol.q, 50)
```
The result is a _namedtuple_ with attributes `q` containing the joint angles, as well as `qd`, `qdd` and `t` which hold the joint velocity, joint accelerations and time respectively.
The joint angles are a matrix with one column per joint and one row per timestep, and time increasing with row number.
```
qt.q
```
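As a quick sanity check (my own addition, assuming the 50-step trajectory and the Puma's 6 joints), the shape can be inspected directly:
```
print(qt.q.shape)   # expected (50, 6): 50 timesteps x 6 joints
```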
We can plot this trajectory as a function of time using the convenience function `qplot`
```
rtb.tools.trajectory.qplot(qt.q, block=False)
```
and then we can animate this
```
p560.plot(qt.q, dt=0.1);
```
_Note: animation not working in Jupyter..._
The inverse kinematic solution was found using an iterative numerical procedure. It is quite general but it has several drawbacks:
- it can be slow
- it may not find a solution, if the initial choice of joint coordinates is far from the solution (in the case above the default initial choice of all zeros was used)
- it may not find the solution you want; in general there are multiple solutions for inverse kinematics. For the same end-effector pose, the robot might:
  - have its arm on the left or right of its waist axis,
  - have its elbow up or down, and
  - have its wrist flipped or not flipped. For a two-finger gripper, a rotation of
180° about the gripper axis leaves the fingers in the same configuration.
Most industrial robots have a _spherical wrist_ which means that the last three joint axes intersect at a single point in the middle of the wrist mechanism. We can test for this condition
```
p560.isspherical()
```
This greatly simplifies things because the last three joints only control orientation and have no effect on the end-effector position. This means that only the first three joints define the position $(x_e, y_e, z_e)$. Three joints controlling three unknowns is relatively easy to solve for, and analytical solutions (complex trigonometric equations) can be found, and in fact have been published for most industrial robot manipulators.
The Puma560 has an analytical solution. We can request the solution with the arm to the left and the elbow up, and the wrist not flipped by using the configuration string `"lun"`
```
sol = p560.ikine_a(T, "lun")
sol
```
which is different to the values found earlier, but we can verify it is a valid solution
```
p560.fkine(sol.q)
```
In fact the solution we found earlier, but didn't explicitly specify, is the right-handed elbow-up configuration
```
sol = p560.ikine_a(T, "run")
sol.q
```
Other useful functions include the manipulator Jacobian, which maps joint velocity to end-effector velocity expressed in the world frame
```
p560.jacob0(p560.qn)
```
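As a brief hedged sketch (not in the original notebook), the Jacobian can be used to map a joint-velocity vector to a spatial end-effector velocity; the joint rates below are purely illustrative:
```
# Map an illustrative joint-velocity vector (rad/s) to the spatial end-effector
# velocity (vx, vy, vz, wx, wy, wz) in the world frame: v = J(q) * qd.
qd = np.array([0.1, 0, 0, 0, 0.2, 0])
J = p560.jacob0(p560.qn)
v = J @ qd
print(v)
```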
```
class Solution:
    def findMinDifference(self, timePoints) -> int:
        # If there are duplicate times, the minimum difference is 0.
        if len(set(timePoints)) < len(timePoints): return 0
        dist = float('inf')
        # Brute force: convert each pair of times to minutes and compare, accounting
        # for the wrap-around at midnight (e.g. 23:59 and 00:00 are 1 minute apart).
        for i, time in enumerate(timePoints):
            l_h, l_m = map(int, time.split(':'))
            l_total = l_h * 60 + l_m
            for j in range(i + 1, len(timePoints)):
                r_h, r_m = map(int, timePoints[j].split(':'))
                r_total = r_h * 60 + r_m
                diff = abs(l_total - r_total)
                dist = min(dist, diff, 24 * 60 - diff)
        return dist
class Solution:
def findMinDifference(self, timePoints):
total = 60*24
buckets = [False]*total
for time in timePoints:
h, m = time.split(":")
tm = int(h)*60+ int(m)
if buckets[tm] is False:
buckets[tm] = True
            else:  # duplicate time, so the minimum difference is 0
return 0
prev = first = -1
mn = total
for i in range(total):
if buckets[i]:
if prev != -1:
mn = min(mn, i - prev)
else:
                    first = i  # minute of the earliest time seen
prev = i
mn = min(mn, total - prev + first)
return mn
class Solution:
def findMinDifference(self, timePoints):
total = 60 * 24
buckets = [False] * total
for time in timePoints:
s_time = time.split(':')
h, m = int(s_time[0]), int(s_time[1])
if buckets[h*60 + m] is False:
buckets[h*60 + m] = True
else:
return 0
dif = total
first = -1
pre = -1
for i in range(total):
if buckets[i]:
if pre != -1:
dif = min(dif, i - pre)
if first == -1:
first = i
pre = i
return min(dif, total - (pre - first))
timepoints = ["23:59","00:00"]
solution = Solution()
solution.findMinDifference(timepoints)
```
```
import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings('ignore')
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
sns.set(style='white', color_codes=True)
df=pd.read_csv('/train.csv/')
df.head()
df.shape
df.tail()
df.describe()
df.info()
miss_val = df.isna().sum()
miss_val
col_name=df.columns
for i in col_name:
print(i,'has :',df[i].nunique(),"Unique values")
df['Employee Identifier'].fillna(0.0, inplace=True)
df['Salaries'].fillna(0.0, inplace=True)
df['Overtime'].fillna(0.0, inplace=True)
df['Other Salaries'].fillna(0.0, inplace=True)
df['Total Salary'].fillna(0.0, inplace=True)
df['Retirement'].fillna(0.0, inplace=True)
df['Health and Dental'].fillna(0.0, inplace=True)
df['Other Benefits'].fillna(0.0, inplace=True)
df['Total Benefits'].fillna(0.0, inplace=True)
df['Total Compensation'].fillna(0.0, inplace=True)
df['Union Code'].fillna(0.0, inplace=True)
df['Union'].fillna("unknown", inplace=True)
df['Department Code'].fillna('uuu', inplace=True)
df['Department'].fillna('unknown', inplace=True)
df['Job'].fillna("unknown", inplace=True)
df['Organization Group Code'].value_counts()
df['Organization Group Code'].unique()
df['Job Family Code'].value_counts()
df["Job Family Code"].replace("SCRT","0000",inplace=True)
df["Job Family Code"].replace("H000","0001",inplace = True)
df["Job Family Code"].replace("Q000","0002", inplace=True)
df["Job Family Code"].replace("SFRA","0003", inplace=True)
df["Job Family Code"].replace("__UNASSIGNED__","0004", inplace=True)
df['Job Family Code'].unique()
df['Job Code'].unique()
df['Job Code'].unique()
df["Job Code"].replace("351C","3510",inplace=True)
df['Year Type'].unique()
df['Year Type'].value_counts()
df['Year'].unique()
df['Organization Group'].unique()
df['Department Code'].unique()
df['Department Code'].unique()
df['Department'].unique()
df['Union'].unique()
df['Union'].unique()
df['Union Code'].unique()
df['Union Code'].value_counts()
df['Job Family'].unique()
df['Job'].unique()
df['Employee Identifier'].unique()
df_num = df.select_dtypes(include = ['object',])
df_num.head()
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
df['Job Family Code1']=le.fit_transform(df['Job Family Code'])
df['Job Code1']=le.fit_transform(df['Job Code'])
df['Year Type1']=le.fit_transform(df['Year Type'])
df['Organization Group1']=le.fit_transform(df['Organization Group'])
df['Department Code1']=le.fit_transform(df['Department Code'])
df['Department1']=le.fit_transform(df['Department'])
df['Union1']=le.fit_transform(df['Union'])
df['Job Family1']=le.fit_transform(df['Job Family'])
df['Job1']=le.fit_transform(df['Job'])
df.drop(['Job Family Code'],axis=1,inplace=True)
df.drop(['Job Code'],axis=1,inplace=True)
df.drop(['Year Type'],axis=1,inplace=True)
df.drop(['Organization Group'],axis=1,inplace=True)
df.drop(['Department Code'],axis=1,inplace=True)
df.drop(['Department'],axis=1,inplace=True)
df.drop(['Union'],axis=1,inplace=True)
df.drop(['Job Family'],axis=1,inplace=True)
df.drop(['Job'],axis=1,inplace=True)
df.head()
sns.distplot(df['Total Benefits'])
plt.show()
sns.distplot(df['Total Compensation'])
plt.show()
print("Skewness: %f" % df['Total Compensation'].skew())
print("Kurtosis: %f" % df['Total Compensation'].kurt())
from sklearn.cluster import KMeans
kmeans=KMeans(n_clusters=3)
df.columns
X = df.drop(['Organization Group Code', 'Union Code', 'Employee Identifier',
'Salaries', 'Overtime', 'Other Salaries', 'Retirement',
'Health and Dental', 'Other Benefits', 'Total Benefits',
'Job Family Code1', 'Job Code1', 'Year Type1',
'Organization Group1', 'Department Code1', 'Department1', 'Union1',
'Job Family1', 'Job1'], axis=1)
display(X)
kmeans.fit(X)
print(kmeans.cluster_centers_)
y=kmeans.labels_
print(y)
sns.countplot(x=kmeans.labels_, palette='Oranges')
plt.show()
plt.scatter(df.iloc[:,0].values,df.iloc[:,3].values, c=kmeans.labels_, cmap="rainbow")
plt.show()
centers = np.array(kmeans.cluster_centers_)
plt.scatter(centers[:,0], centers[:,1], marker="x", color='k')
plt.scatter(df.iloc[:,0].values,df.iloc[:,3].values, c=kmeans.labels_, cmap="rainbow")
centers = np.array(kmeans.cluster_centers_)
plt.scatter(centers[:,0], centers[:,1], marker="x", color='k')
plt.show()
from scipy.spatial.distance import cdist, pdist
from sklearn.cluster import KMeans
#K = range(1,10)
#X = df.drop(['Organization Group Code', 'Union Code', 'Employee Identifier',
# 'Salaries', 'Overtime', 'Other Salaries', 'Retirement',
# 'Health and Dental', 'Other Benefits', 'Total Benefits',
# 'Job Family Code1', 'Job Code1', 'Year Type1',
#'Organization Group1', 'Department Code1', 'Department1', 'Union1',
#'Job Family1', 'Job1'], axis=1)
#KM = [KMeans(n_clusters=k,verbose=1).fit(X) for k in K]
#centroids = [k.cluster_centers_ for k in KM]
#D_k = [cdist(X, cent, 'euclidean') for cent in centroids]
#cIdx = [np.argmin(D,axis=1) for D in D_k]
#dist = [np.min(D,axis=1) for D in D_k]
#avgWithinSS = [sum(d)/X.shape[0] for d in dist]
#Total with-in sum of square
#wcss = [sum(d**2) for d in dist]
#tss = sum(pdist(X)**2)/X.shape[0]
#3bss = tss-wcss
# varExplained = bss/tss*100
sse = {}
for k in range(1, 10):
kmeans = KMeans(n_clusters=k, max_iter=1000).fit(X)
sse[k] = kmeans.inertia_ # Inertia: Sum of distances of samples to their closest cluster center
plt.figure()
plt.plot(list(sse.keys()), list(sse.values()))
plt.xlabel("Number of cluster")
plt.ylabel("SSE")
plt.show()
import scipy
from scipy.cluster.hierarchy import dendrogram,linkage
from scipy.cluster.hierarchy import fcluster
from scipy.cluster.hierarchy import cophenet
from scipy.spatial.distance import pdist
from pylab import rcParams
from sklearn.cluster import AgglomerativeClustering
import sklearn.metrics as sm
plt.style.available
plt.style.use('seaborn-whitegrid')
df.head()
plt.figure(figsize=(15,10))
Z=linkage(df.drop(['Total Compensation'],axis=1),method='complete')
print("Z-Shape:",Z.shape)
plt.title("COMPLETE",size=30)
dendrogram(Z,orientation='top',truncate_mode='lastp',p=12,get_leaves=False,leaf_rotation=45,leaf_font_size=15,show_contracted=True,)
plt.xlabel("Cluster Size",fontsize=30)
plt.ylabel("Distances",fontsize=30)
plt.show()
plt.figure(figsize=(15,10))
Z=linkage(df.drop(['Total Compensation'],axis=1),method='weighted')
print("Z-Shape:",Z.shape)
plt.title("Weighted",loc='center',size=30)
dendrogram(Z,orientation='top',truncate_mode='lastp',p=12,get_leaves=False,leaf_rotation=45,leaf_font_size=15,show_contracted=True,)
plt.xlabel("Cluster Size",fontsize=30)
plt.ylabel("Distances",fontsize=30)
plt.show()
```
```
import networkx as nx
import osmnx as ox
import geopandas as gpd
import contextily as ctx
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
```
# Traveling Salesperson Problem
The canonical [Traveling Salesperson Problem](https://en.wikipedia.org/wiki/Travelling_salesman_problem) is stated as:
> "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the origin city?"
This is generalizable to finding the shortest [Hamiltonian cycle](http://mathworld.wolfram.com/HamiltonianCycle.html) on a fully connected graph (i.e. every node is directly connected to every other node).
This problem is [NP-hard](https://en.wikipedia.org/wiki/P_versus_NP_problem), meaning it is not possible for an algorithm to solve all instances of the problem quickly (i.e. in polynomial time). However, there are many approximate and heuristic approaches which can give reasonable solutions in shorter time.
```
place_name = 'New York City, NY, United States'
place_roads = ox.graph_from_place(place_name)
# save graph to file for reuse
ox.io.save_graphml(place_roads, 'nyc_osmnx.graphml')
# loading graph from a file
place_roads = ox.io.load_graphml('nyc_osmnx.graphml')
place_roads_nodes, place_roads_edges = ox.graph_to_gdfs(place_roads)
fig = plt.figure(figsize=[10,10])
ax = fig.add_subplot(1,1,1)
place_roads_edges.plot(ax=ax, color=[0, 0, 0], linewidth=0.5)
```
Let's say you wanted to do an ice cream crawl: you want to visit every ice cream shop in a city. What is the shortest route that visits every shop and brings you back to your starting point?
```
place_ice_cream = ox.geometries.geometries_from_place(place_name, tags={"amenity":"ice_cream"})
#some of the ice cream shops return polygons instead of points, so we need to take their centroids
place_ice_cream = place_ice_cream.to_crs("epsg:3857") #projecting to Web-Mercator for more accurate centroids
place_ice_cream["geometry"] = place_ice_cream["geometry"].centroid
place_ice_cream = place_ice_cream.to_crs("epsg:4326") #projecting back to lat/long
place_ice_cream
place_ice_cream
ice_cream_nodes = ox.distance.nearest_nodes(place_roads, place_ice_cream.geometry.x, place_ice_cream.geometry.y)
ice_cream_nodes
```
## Exercise
Plot the locations of the ice cream shops on the map of the roads
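One possible way to do this (a minimal sketch, not the official solution), reusing `place_roads_edges` and `place_ice_cream` from the cells above:
```
# Overlay the shop centroids (lat/long) on the road network plot
fig = plt.figure(figsize=[10,10])
ax = fig.add_subplot(1,1,1)
place_roads_edges.plot(ax=ax, color=[0, 0, 0], linewidth=0.5)   # road network
place_ice_cream.plot(ax=ax, color="red", markersize=10)         # ice cream shop centroids
plt.show()
```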
## Compute shortest path matrix
```
shortest_path_matrix = np.zeros([len(ice_cream_nodes),len(ice_cream_nodes)])
for idx_i, orig in enumerate(ice_cream_nodes):
shortest_paths = nx.single_source_dijkstra_path_length(place_roads, orig, weight='length')
for idx_j, dest in enumerate(ice_cream_nodes):
shortest_path_matrix[idx_i, idx_j] = shortest_paths[dest]
shortest_path_matrix
ice_cream_graph = nx.from_numpy_matrix(shortest_path_matrix, create_using=nx.MultiDiGraph)
# new graph indexes from 0
ice_cream_graph.nodes
# rename node labels using original labels
ice_cream_graph = nx.relabel_nodes(ice_cream_graph,{k:v for k, v in zip(ice_cream_graph.nodes, ice_cream_nodes)})
ice_cream_graph.nodes
```
## Exercise
Implement each of the following methods to see how good of a TSP path you can obtain.
## Method 1: Random
Let's start by setting a baseline; how well can we do by starting at a random node and choosing a random node out of the ones remaining each time?
After you find the path, draw it on the map and print its length. (You don't need to draw the actual roads taken, just draw lines between the nodes.)
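One rough sketch of this baseline (assuming `ice_cream_nodes` and `shortest_path_matrix` from the cells above; the `tour_length` helper is ours and is reused in the later methods, and the map drawing is left out — it can reuse the road-plotting code above):
```
import random

def tour_length(tour, dist_matrix):
    """Total length of the closed tour, given as a list of indices into dist_matrix."""
    return sum(dist_matrix[tour[i], tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

# Random baseline: visit the shops in a shuffled order and return to the start
random_tour = list(range(len(ice_cream_nodes)))
random.shuffle(random_tour)
print("Random tour length (m):", tour_length(random_tour, shortest_path_matrix))
```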
## Method 2: Greedy
Now, let's try to choose nodes more intelligently: start at a random node again, but instead of choosing a random node each time, always choose the node closest to the current node each time.
Again, draw the path on the map and print its length.
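A possible greedy (nearest-neighbour) sketch, reusing `tour_length`, `random`, and the variables from the previous cells:
```
def greedy_tour(dist_matrix, start=0):
    """Nearest-neighbour heuristic: always hop to the closest unvisited node."""
    unvisited = set(range(dist_matrix.shape[0])) - {start}
    tour = [start]
    while unvisited:
        nearest = min(unvisited, key=lambda j: dist_matrix[tour[-1], j])
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour

greedy = greedy_tour(shortest_path_matrix, start=random.randrange(len(ice_cream_nodes)))
print("Greedy tour length (m):", tour_length(greedy, shortest_path_matrix))
```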
## Method 3: Random with 2-opt swapping
You may have noticed that both paths contain a lot of edges that cross each other, which is nonideal. However, there exists an algorithm to remove all the paths that cross each other from a Hamiltonian cycle: the [2-opt](https://en.wikipedia.org/wiki/2-opt) algorithm. We can use that to our advantage here.
Start by generating a random Hamiltonian cycle like in method 1, but this time, use the 2-opt algorithm to optimize it further. Again, draw it on the map and print its length.
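A simple (unoptimized) 2-opt sketch, reusing `tour_length` and `random_tour` from the Method 1 sketch above:
```
def two_opt(tour, dist_matrix):
    """Keep reversing segments while doing so shortens the closed tour."""
    best, best_len = tour[:], tour_length(tour, dist_matrix)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 1):
            for j in range(i + 1, len(best)):
                candidate = best[:i] + best[i:j][::-1] + best[j:]   # reverse segment [i, j)
                cand_len = tour_length(candidate, dist_matrix)
                if cand_len < best_len:
                    best, best_len = candidate, cand_len
                    improved = True
    return best

two_opt_tour = two_opt(random_tour, shortest_path_matrix)
print("2-opt tour length (m):", tour_length(two_opt_tour, shortest_path_matrix))
```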
## Method 4: Open-ended
Although the 2-opt swaps reduce the length of the Hamiltonian cycle by quite a lot, they almost never provide the optimal solution. See if you can use another method to produce a Hamiltonian cycle shorter than the one you got with method 3. Some options to explore include:
- [3-opt](https://en.wikipedia.org/wiki/3-opt)
- [Multi-fragment algorithm](https://en.wikipedia.org/wiki/Multi-fragment_algorithm) with 2- or 3-opt swapping
- [Simulated annealing](https://en.wikipedia.org/wiki/Simulated_annealing)
The [TSP Wikipedia page](https://en.wikipedia.org/wiki/Travelling_salesman_problem) has many other algorithms that could be of use to you as well.
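For example, a rough simulated annealing sketch (one of the options listed above, reusing `tour_length`; the temperature schedule and iteration count are arbitrary choices):
```
import math, random

def simulated_annealing(tour, dist_matrix, t0=10000.0, cooling=0.999, n_iter=100000):
    """Accept random segment reversals, occasionally even worse ones, while cooling."""
    current, current_len = tour[:], tour_length(tour, dist_matrix)
    best, best_len = current[:], current_len
    t = t0
    for _ in range(n_iter):
        i, j = sorted(random.sample(range(1, len(current)), 2))
        candidate = current[:i] + current[i:j][::-1] + current[j:]
        cand_len = tour_length(candidate, dist_matrix)
        # Always accept improvements; accept worse tours with probability exp(-delta / t)
        if cand_len < current_len or random.random() < math.exp((current_len - cand_len) / t):
            current, current_len = candidate, cand_len
            if current_len < best_len:
                best, best_len = current[:], current_len
        t *= cooling
    return best
```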
# "Numpy 기본"
> "numpy 기본 코드 실습(한글)"
- toc:true
- branch: master
- badges: true
- comments: true
- author: Jiho Yeo
- categories: [jupyter, python]
**Tools - NumPy**
*NumPy is the fundamental library for scientific computing with Python. Its core is the powerful N-dimensional array object. It also provides useful functions for linear algebra, Fourier transforms, and pseudo-random number generation.*
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/rickiepark/handson-ml2/blob/master/tools_numpy.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />구글 코랩에서 실행하기</a>
</td>
</table>
# Creating arrays
Let's import `numpy`. Most people import it under the alias `np`:
```
import numpy as np
```
## `np.zeros`
The `zeros` function creates an array filled with zeros:
```
np.zeros(5)
```
To create a 2D array (i.e., a matrix), pass the desired number of rows and columns as a tuple. For example, here is a $3 \times 4$ matrix:
```
np.zeros((3,4))
```
## Terminology
* In NumPy, each dimension is called an **axis**.
* The number of axes is called the **rank**.
* For example, the $3 \times 4$ matrix above is an array of rank 2 (i.e., it is 2-dimensional).
* Its first axis has length 3 and its second axis has length 4.
* An array's list of axis lengths is called its **shape**.
* For example, the shape of the matrix above is `(3, 4)`.
* The rank is equal to the length of the shape.
* The **size** of an array is the total number of elements, which is the product of all axis lengths (e.g., $3 \times 4=12$).
```
a = np.zeros((3,4))
a
a.shape
a.ndim # len(a.shape)와 같습니다
a.size
```
## N-dimensional arrays
You can create an N-dimensional array of arbitrary rank. For example, here is a 3D array (rank=3) with shape `(2,2,5)`:
```
np.zeros((2,2,5))
```
## Array type
The type of a NumPy array is `ndarray`:
```
type(np.zeros((3,4)))
```
## `np.ones`
Many other NumPy functions create `ndarray`s.
Here is a $3 \times 4$ matrix filled with ones:
```
np.ones((3,4))
```
## `np.full`
Creates an array of the given shape, initialized with the specified value. Here is a $3 \times 4$ matrix filled with `π`.
```
np.full((3,4), np.pi)
```
## `np.empty`
Creates an uninitialized $2 \times 3$ array (its contents are unpredictable and depend on the current state of memory):
```
np.empty((2,3))
```
## np.array
The `array` function initializes an `ndarray` from a Python list:
```
np.array([[1,2,3,4], [10, 20, 30, 40]])
```
## `np.arange`
You can create an `ndarray` with NumPy's `arange` function, which is similar to Python's built-in `range` function:
```
np.arange(1, 5)
```
It also works with floats:
```
np.arange(1.0, 5.0)
```
As with Python's built-in `range` function, you can specify a step size:
```
np.arange(1, 5, 0.5)
```
However, when floats are involved, the number of elements may not be what you expect. For example:
```
print(np.arange(0, 5/3, 1/3)) # 부동 소수 오차 때문에, 최댓값은 4/3 또는 5/3이 됩니다.
print(np.arange(0, 5/3, 0.333333333))
print(np.arange(0, 5/3, 0.333333334))
```
## `np.linspace`
For this reason, it is generally preferable to use the `linspace` function instead of `arange` when working with floats. `linspace` returns an array containing a specific number of points evenly distributed between two values (unlike `arange`, the maximum value is **included**):
```
print(np.linspace(0, 5/3, 6))
```
## `np.rand` and `np.randn`
NumPy's `random` module provides many functions for initializing `ndarray`s with random values.
For example, here is a $3 \times 4$ matrix initialized with random floats between 0 and 1 (uniform distribution):
```
np.random.rand(3,4)
```
And here is a $3 \times 4$ matrix containing random floats sampled from a univariate [normal distribution](https://ko.wikipedia.org/wiki/%EC%A0%95%EA%B7%9C_%EB%B6%84%ED%8F%AC) (Gaussian distribution) with mean 0 and variance 1:
```
np.random.randn(3,4)
```
To get a feel for the shapes of these distributions, it helps to plot them with Matplotlib (see the [matplotlib tutorial](tools_matplotlib.ipynb) for more details):
```
%matplotlib inline
import matplotlib.pyplot as plt
plt.hist(np.random.rand(100000), density=True, bins=100, histtype="step", color="blue", label="rand")
plt.hist(np.random.randn(100000), density=True, bins=100, histtype="step", color="red", label="randn")
plt.axis([-2.5, 2.5, 0, 1.1])
plt.legend(loc = "upper left")
plt.title("Random distributions")
plt.xlabel("Value")
plt.ylabel("Density")
plt.show()
```
## np.fromfunction
You can also initialize an `ndarray` using a function:
```
def my_function(z, y, x):
return x + 10 * y + 100 * z
np.fromfunction(my_function, (3, 2, 10))
```
NumPy first creates three `ndarray`s of shape `(3, 2, 10)` (one per dimension). Each array has values equal to the coordinate along the corresponding axis. For example, every element of the array for the `z` axis is equal to its z-coordinate:
[[[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
[[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]]
[[ 2. 2. 2. 2. 2. 2. 2. 2. 2. 2.]
[ 2. 2. 2. 2. 2. 2. 2. 2. 2. 2.]]]
So in the expression `x + 10 * y + 100 * z` above, `x`, `y`, and `z` are in fact `ndarray`s (array arithmetic is covered below). The important point is that `my_function` is not called once per element, but just **once**, which is why initialization is so efficient.
# Array data
## `dtype`
NumPy `ndarray`s are efficient in part because all of their elements have the same type (usually numbers). You can check the data type with the `dtype` attribute:
```
c = np.arange(1, 5)
print(c.dtype, c)
c = np.arange(1.0, 5.0)
print(c.dtype, c)
```
Instead of letting NumPy infer the data type, you can set it explicitly when creating an array using the `dtype` parameter:
```
d = np.arange(1, 5, dtype=np.complex64)
print(d.dtype, d)
```
Available data types include `int8`, `int16`, `int32`, `int64`, `uint8`|`16`|`32`|`64`, `float16`|`32`|`64`, and `complex64`|`128`. See the [online documentation](http://docs.scipy.org/doc/numpy/user/basics.types.html) for the full list.
## `itemsize`
The `itemsize` attribute returns the size (in bytes) of each item:
```
e = np.arange(1, 5, dtype=np.complex64)
e.itemsize
```
## The `data` buffer
An array's data is stored in memory as a flat (one-dimensional) byte buffer. It is accessible via the `data` attribute (you will rarely need it, though).
```
f = np.array([[1,2],[1000, 2000]], dtype=np.int32)
f.data
```
In Python 2, `f.data` is a buffer; in Python 3, it is a memoryview.
```
if (hasattr(f.data, "tobytes")):
data_bytes = f.data.tobytes() # python 3
else:
data_bytes = memoryview(f.data).tobytes() # python 2
data_bytes
```
Several `ndarray`s can share the same data buffer, so modifying one also modifies the others. We will see an example in a minute.
# Reshaping arrays
## In place
Changing the shape of an `ndarray` is as simple as setting its `shape` attribute. The array's total number of elements must remain the same, though.
```
g = np.arange(24)
print(g)
print("랭크:", g.ndim)
g.shape = (6, 4)
print(g)
print("랭크:", g.ndim)
g.shape = (2, 3, 4)
print(g)
print("랭크:", g.ndim)
```
## `reshape`
The `reshape` function returns a new `ndarray` object pointing to the *same* data, which means that modifying one array also modifies the other.
```
g2 = g.reshape(4,6)
print(g2)
print("랭크:", g2.ndim)
```
Set the element at row 1, column 2 to 999 (more on indexing below).
```
g2[1, 2] = 999
g2
```
The corresponding element of `g` has been modified as well.
```
g
```
## `ravel`
Finally, the `ravel` function returns a new one-dimensional `ndarray` that also points to the same data:
```
g.ravel()
```
# Arithmetic operations
All the usual arithmetic operators (`+`, `-`, `*`, `/`, `//`, `**`, etc.) can be used with `ndarray`s. They are applied elementwise:
```
a = np.array([14, 23, 32, 41])
b = np.array([5, 4, 3, 2])
print("a + b =", a + b)
print("a - b =", a - b)
print("a * b =", a * b)
print("a / b =", a / b)
print("a // b =", a // b)
print("a % b =", a % b)
print("a ** b =", a ** b)
```
Note that the multiplication here is *not* matrix multiplication; matrix operations are covered below.
The arrays must have the same shape; if they do not, NumPy applies its broadcasting rules.
# Broadcasting
In general, NumPy expects arrays of the same shape. When that is not the case, it applies the broadcasting rules:
## Rule 1
If the arrays do not have the same rank, a 1 is prepended to the shape of the lower-rank array until the ranks match.
```
h = np.arange(5).reshape(1, 1, 5)
h
```
Now let's add a 1D array of shape `(5,)` to this 3D array of shape `(1,1,5)`. Broadcasting rule 1 applies!
```
h + [10, 20, 30, 40, 50] # 다음과 동일합니다: h + [[[10, 20, 30, 40, 50]]]
```
## Rule 2
Arrays with a size of 1 along a particular dimension act as if they had the size of the largest array along that dimension; their elements are repeated along that dimension.
```
k = np.arange(6).reshape(2, 3)
k
```
Let's add a 2D array of shape `(2,1)` to the 2D `ndarray` of shape `(2,3)`. NumPy applies broadcasting rule 2:
```
k + [[100], [200]] # 다음과 같습니다: k + [[100, 100, 100], [200, 200, 200]]
```
Combining rules 1 and 2, we can do this:
```
k + [100, 200, 300] # 규칙 1 적용: [[100, 200, 300]], 규칙 2 적용: [[100, 200, 300], [100, 200, 300]]
```
Or even more simply:
```
k + 1000 # 다음과 같습니다: k + [[1000, 1000, 1000], [1000, 1000, 1000]]
```
## Rule 3
After applying rules 1 & 2, the shapes of all arrays must match.
```
try:
k + [33, 44]
except ValueError as e:
print(e)
```
The broadcasting rules are used in many NumPy operations, not just arithmetic, as we will see below. For more details about broadcasting, check out the [online documentation](https://docs.scipy.org/doc/numpy-dev/user/basics.broadcasting.html).
## Upcasting
When combining arrays of different `dtype`s, NumPy upcasts to a type capable of handling all possible values (regardless of the actual values).
```
k1 = np.arange(0, 5, dtype=np.uint8)
print(k1.dtype, k1)
k2 = k1 + np.array([5, 6, 7, 8, 9], dtype=np.int8)
print(k2.dtype, k2)
```
An `int16` is needed to represent all possible `int8` and `uint8` values (from -128 to 255). In this code a `uint8` would actually have sufficed, but the result was upcast anyway.
```
k3 = k1 + 1.5
print(k3.dtype, k3)
```
# Conditional operators
Conditional operators are also applied elementwise:
```
m = np.array([20, -5, 30, 40])
m < [15, 16, 35, 36]
```
Broadcasting works here too:
```
m < 25 # m < [25, 25, 25, 25] 와 동일
```
This is most useful in combination with boolean indexing (discussed below).
```
m[m < 25]
```
# Mathematical and statistical functions
Many mathematical and statistical functions are available for `ndarray`s.
## `ndarray` methods
Some functions are provided as `ndarray` methods. For example:
```
a = np.array([[-2.5, 3.1, 7], [10, 11, 12]])
print(a)
print("평균 =", a.mean())
```
This computes the mean of all elements in the `ndarray`, regardless of its shape.
Here are a few more useful `ndarray` methods:
```
for func in (a.min, a.max, a.sum, a.prod, a.std, a.var):
print(func.__name__, "=", func())
```
These functions accept an optional `axis` argument, which lets you apply the operation along the specified axis. For example:
```
c=np.arange(24).reshape(2,3,4)
c
c.sum(axis=0) # 첫 번째 축을 따라 더함, 결과는 3x4 배열
c.sum(axis=1) # 두 번째 축을 따라 더함, 결과는 2x4 배열
```
You can also sum over multiple axes:
```
c.sum(axis=(0,2)) # 첫 번째 축과 세 번째 축을 따라 더함, 결과는 (3,) 배열
0+1+2+3 + 12+13+14+15, 4+5+6+7 + 16+17+18+19, 8+9+10+11 + 20+21+22+23
```
## Universal functions
NumPy also provides elementwise functions called universal functions, or **ufuncs**. For example, the `square` function returns a new `ndarray` that is a copy of the original with every element squared:
```
a = np.array([[-2.5, 3.1, 7], [10, 11, 12]])
np.square(a)
```
Here are a few useful unary ufuncs:
```
print("원본 ndarray")
print(a)
for func in (np.abs, np.sqrt, np.exp, np.log, np.sign, np.ceil, np.modf, np.isnan, np.cos):
print("\n", func.__name__)
print(func(a))
```
## Binary ufuncs
There are also many binary ufuncs, which apply elementwise to two `ndarray`s. If the arrays do not have the same shape, the broadcasting rules are applied:
```
a = np.array([1, -2, 3, 4])
b = np.array([2, 8, -1, 7])
np.add(a, b) # a + b 와 동일
np.greater(a, b) # a > b 와 동일
np.maximum(a, b)
np.copysign(a, b)
```
# Array indexing
## One-dimensional arrays
One-dimensional NumPy arrays can be accessed much like regular Python arrays:
```
a = np.array([1, 5, 3, 19, 13, 7, 3])
a[3]
a[2:5]
a[2:-1]
a[:2]
a[2::2]
a[::-1]
```
And of course you can modify elements:
```
a[3]=999
a
```
You can also modify an `ndarray` using slicing:
```
a[2:5] = [997, 998, 999]
a
```
## Differences from regular Python arrays
In contrast to regular Python arrays, assigning a single value to an `ndarray` slice copies it across the whole slice, thanks to the broadcasting discussed above.
```
a[2:5] = -1
a
```
Also, you cannot grow or shrink an `ndarray` this way:
```
try:
a[2:5] = [1,2,3,4,5,6] # 너무 길어요
except ValueError as e:
print(e)
```
You cannot delete elements either:
```
try:
del a[2:5]
except ValueError as e:
print(e)
```
Last but not least, `ndarray` slices are actually views on the same data buffer. If you modify a slice, you modify the original `ndarray` as well!
```
a_slice = a[2:6]
a_slice[1] = 1000
a # 원본 배열이 수정됩니다!
a[3] = 2000
a_slice # 비슷하게 원본 배열을 수정하면 슬라이싱 객체에도 반영됩니다!
```
If you want a copy of the data, you need to use the `copy` method:
```
another_slice = a[2:6].copy()
another_slice[1] = 3000
a # 원본 배열이 수정되지 않습니다
a[3] = 4000
another_slice # 마찬가지로 원본 배열을 수정해도 복사된 배열은 바뀌지 않습니다
```
## Multidimensional arrays
Multidimensional arrays can be accessed in a similar way, by providing an index or slice for each axis, separated by commas:
```
b = np.arange(48).reshape(4, 12)
b
b[1, 2] # 행 1, 열 2
b[1, :] # 행 1, 모든 열
b[:, 1] # 모든 행, 열 1
```
**Caution**: note the subtle difference between the following two expressions:
```
b[1, :]
b[1:2, :]
```
The first expression returns row 1 as a 1D array of shape `(12,)`, while the second returns the same row as a 2D array of shape `(1, 12)`.
## Fancy indexing
You can also provide a list of the indices you are interested in. This is called fancy indexing.
```
b[(0,2), 2:5] # 행 0과 2, 열 2에서 4(5-1)까지
b[:, (-1, 2, -1)] # 모든 행, 열 -1 (마지막), 2와 -1 (다시 반대 방향으로)
```
If you provide multiple index lists, you get back a 1D `ndarray` containing the values at the specified indices.
```
b[(-1, 2, -1, 2), (5, 9, 1, 9)] # returns a 1D array with b[-1, 5], b[2, 9], b[-1, 1] and b[2, 9] (again)
```
## Higher dimensions
Everything works the same way in higher dimensions. Let's look at a few examples:
```
c = b.reshape(4,2,6)
c
c[2, 1, 4] # 행렬 2, 행 1, 열 4
c[2, :, 3] # 행렬 2, 모든 행, 열 3
```
If you omit the index for an axis, all elements along that axis are returned:
```
c[2, 1] # 행렬 2, 행 1, 모든 열이 반환됩니다. c[2, 1, :]와 동일합니다.
```
## Ellipsis (`...`)
You can also write an ellipsis (`...`) to include all elements of every unspecified axis.
```
c[2, ...] # 행렬 2, 모든 행, 모든 열. c[2, :, :]와 동일
c[2, 1, ...] # 행렬 2, 행 1, 모든 열. c[2, 1, :]와 동일
c[2, ..., 3] # 행렬 2, 모든 행, 열 3. c[2, :, 3]와 동일
c[..., 3] # 모든 행렬, 모든 행, 열 3. c[:, :, 3]와 동일
```
## Boolean indexing
You can also index an axis with an `ndarray` of boolean values.
```
b = np.arange(48).reshape(4, 12)
b
rows_on = np.array([True, False, True, False])
b[rows_on, :] # 행 0과 2, 모든 열. b[(0, 2), :]와 동일
cols_on = np.array([False, True, False] * 4)
b[:, cols_on] # 모든 행, 열 1, 4, 7, 10
```
## `np.ix_`
You cannot use boolean indexing across multiple axes this way; instead, use the `ix_` function:
```
b[np.ix_(rows_on, cols_on)]
np.ix_(rows_on, cols_on)
```
If you use a boolean array with the same shape as the `ndarray`, you get a 1D array containing all the elements whose value is `True`. This is generally used together with conditional operators:
```
b[b % 3 == 1]
```
# Iterating
Iterating over an `ndarray` is very similar to iterating over a regular Python array. Iterating over a multidimensional array iterates over the first axis.
```
c = np.arange(24).reshape(2, 3, 4) # 3D 배열 (두 개의 3x4 행렬로 구성됨)
c
for m in c:
print("아이템:")
print(m)
for i in range(len(c)): # len(c) == c.shape[0]
print("아이템:")
print(c[i])
```
To iterate over *all* the elements of an `ndarray`, use the `flat` attribute:
```
for i in c.flat:
print("아이템:", i)
```
# Stacking arrays
It is often useful to stack different arrays together. NumPy offers several functions for this. Let's start by creating a few arrays.
```
q1 = np.full((3,4), 1.0)
q1
q2 = np.full((4,4), 2.0)
q2
q3 = np.full((3,4), 3.0)
q3
```
## `vstack`
Let's stack them vertically using `vstack`:
```
q4 = np.vstack((q1, q2, q3))
q4
q4.shape
```
This works because q1, q2, and q3 all have the same number of columns (their number of rows can differ, since we are stacking vertically).
## `hstack`
We can also stack arrays horizontally using `hstack`:
```
q5 = np.hstack((q1, q3))
q5
q5.shape
```
This works because q1 and q3 both have 3 rows. Since q2 has 4 rows, it cannot be stacked horizontally with q1 and q3:
```
try:
q5 = np.hstack((q1, q2, q3))
except ValueError as e:
print(e)
```
## `concatenate`
The `concatenate` function stacks arrays along any given axis.
```
q7 = np.concatenate((q1, q2, q3), axis=0) # vstack과 동일
q7
q7.shape
```
As you might guess, `hstack` is equivalent to calling `concatenate` with `axis=1`.
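A quick way to verify this equivalence, using `q1` and `q3` from above:
```
np.array_equal(np.hstack((q1, q3)), np.concatenate((q1, q3), axis=1))
```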
## `stack`
The `stack` function stacks arrays along a new axis. All arrays must have the same shape.
```
q8 = np.stack((q1, q3))
q8
q8.shape
```
# Splitting arrays
Splitting is the opposite of stacking. For example, the `vsplit` function splits a matrix vertically.
First let's create a 6x4 matrix:
```
r = np.arange(24).reshape(6,4)
r
```
Now let's split it vertically into three equal parts:
```
r1, r2, r3 = np.vsplit(r, 3)
r1
r2
r3
```
The `split` function splits an array along any given axis. Calling `vsplit` is equivalent to calling `split` with `axis=0`, and the `hsplit` function is equivalent to calling `split` with `axis=1`:
```
r4, r5 = np.hsplit(r, 2)
r4
r5
```
# Transposing arrays
The `transpose` method creates a new view of the `ndarray`'s data with the axes permuted in the given order.
For example, let's create a 3D array:
```
t = np.arange(24).reshape(4,2,3)
t
```
Now let's create an `ndarray` view whose axes `0, 1, 2` (depth, height, width) are reordered to `1, 2, 0` (depth→width, height→depth, width→height):
```
t1 = t.transpose((1,2,0))
t1
t1.shape
```
By default, `transpose` reverses the order of the dimensions:
```
t2 = t.transpose() # t.transpose((2, 1, 0))와 동일
t2
t2.shape
```
NumPy also provides a `swapaxes` function that swaps two axes. For example, let's create a new view of `t` with depth and height swapped:
```
t3 = t.swapaxes(0,1) # t.transpose((1, 0, 2))와 동일
t3
t3.shape
```
# Linear algebra
NumPy 2D arrays make it possible to represent matrices efficiently in Python. We will just quickly go through some of the main matrix operations; for details about linear algebra, vectors, and matrices, see the [Linear Algebra tutorial](math_linear_algebra.ipynb).
## Matrix transpose
The `T` attribute is equivalent to calling `transpose()` when the rank is 2 or higher:
```
m1 = np.arange(10).reshape(2,5)
m1
m1.T
```
The `T` attribute has no effect on arrays of rank 0 or 1:
```
m2 = np.arange(5)
m2
m2.T
```
We can first reshape the 1D array into a single-row matrix (2D) and then transpose it:
```
m2r = m2.reshape(1,5)
m2r
m2r.T
```
## Matrix multiplication
Let's create two matrices and perform matrix [multiplication](https://ko.wikipedia.org/wiki/%ED%96%89%EB%A0%AC_%EA%B3%B1%EC%85%88) using the `dot` method.
```
n1 = np.arange(10).reshape(2, 5)
n1
n2 = np.arange(15).reshape(5,3)
n2
n1.dot(n2)
```
**Caution**: as mentioned earlier, `n1*n2` is *not* matrix multiplication but elementwise multiplication (also called the [Hadamard product](https://ko.wikipedia.org/wiki/%EC%95%84%EB%8B%A4%EB%A7%88%EB%A5%B4_%EA%B3%B1)).
## Matrix inverse and pseudo-inverse
Many linear algebra functions are available in the `numpy.linalg` module. In particular, the `inv` function computes the inverse of a square matrix:
```
import numpy.linalg as linalg
m3 = np.array([[1,2,3],[5,7,11],[21,29,31]])
m3
linalg.inv(m3)
```
You can also compute the [pseudo-inverse](https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_pseudoinverse) using the `pinv` function:
```
linalg.pinv(m3)
```
## Identity matrix
The product of a matrix and its inverse is the identity matrix (up to small floating-point errors):
```
m3.dot(linalg.inv(m3))
```
The `eye` function creates an NxN identity matrix:
```
np.eye(3)
```
## QR decomposition
The `qr` function computes the [QR decomposition](https://en.wikipedia.org/wiki/QR_decomposition) of a matrix:
```
q, r = linalg.qr(m3)
q
r
q.dot(r) # q.r는 m3와 같습니다
```
## Determinant
The `det` function computes the [determinant](https://en.wikipedia.org/wiki/Determinant):
```
linalg.det(m3) # 행렬식 계산
```
## Eigenvalues and eigenvectors
The `eig` function computes the [eigenvalues and eigenvectors](https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors) of a square matrix:
```
eigenvalues, eigenvectors = linalg.eig(m3)
eigenvalues # λ
eigenvectors # v
m3.dot(eigenvectors) - eigenvalues * eigenvectors # m3.v - λ*v = 0
```
## Singular value decomposition
The `svd` function takes a matrix and returns its [singular value decomposition](https://en.wikipedia.org/wiki/Singular_value_decomposition):
```
m4 = np.array([[1,0,0,0,2], [0,0,3,0,0], [0,0,0,0,0], [0,2,0,0,0]])
m4
U, S_diag, V = linalg.svd(m4)
U
S_diag
```
The `svd` function only returns the values on the diagonal of Σ. To build the full Σ matrix:
```
S = np.zeros((4, 5))
S[np.diag_indices(4)] = S_diag
S # Σ
V
U.dot(S).dot(V) # U.Σ.V == m4
```
## Diagonal and trace
```
np.diag(m3) # m3의 대각 원소입니다(왼쪽 위에서 오른쪽 아래)
np.trace(m3) # np.diag(m3).sum()와 같습니다
```
## Solving a system of linear equations
The `solve` function solves a system of linear equations such as:
* $2x + 6y = 6$
* $5x + 3y = -9$
```
coeffs = np.array([[2, 6], [5, 3]])
depvars = np.array([6, -9])
solution = linalg.solve(coeffs, depvars)
solution
```
Let's check the solution:
```
coeffs.dot(solution), depvars # 네 같네요
```
Looks good! We can also check the solution another way:
```
np.allclose(coeffs.dot(solution), depvars)
```
# Vectorization
Instead of executing operations on individual array elements one at a time, your code is much more efficient if you operate on entire arrays at once. This is called vectorization, and it lets you take advantage of NumPy's optimized performance.
For example, say we want to generate a 768x1024 array based on the formula $sin(xy/40.5)$. A **bad** option would be to use Python's math functions inside nested loops:
```
import math
data = np.empty((768, 1024))
for y in range(768):
for x in range(1024):
data[y, x] = math.sin(x*y/40.5) # 매우 비효율적입니다!
```
It works, but it is terribly inefficient because the loops run in pure Python. Let's vectorize this algorithm. First, we use NumPy's `meshgrid` function to build matrices from the coordinate vectors.
```
x_coords = np.arange(0, 1024) # [0, 1, 2, ..., 1023]
y_coords = np.arange(0, 768) # [0, 1, 2, ..., 767]
X, Y = np.meshgrid(x_coords, y_coords)
X
Y
```
As you can see, both `X` and `Y` are 768x1024 arrays: every value in `X` corresponds to the horizontal coordinate, and every value in `Y` corresponds to the vertical coordinate.
Now we can simply compute the result using array operations:
```
data = np.sin(X*Y/40.5)
```
Let's plot this data using Matplotlib's `imshow` function (see the [matplotlib tutorial](tools_matplotlib.ipynb)).
```
import matplotlib.pyplot as plt
import matplotlib.cm as cm
fig = plt.figure(1, figsize=(7, 6))
plt.imshow(data, cmap=cm.hot)
plt.show()
```
# Saving and loading
NumPy makes it easy to save and load `ndarray`s in binary or text format.
## Binary `.npy` format
Let's create a random array and save it.
```
a = np.random.rand(2,3)
a
np.save("my_array", a)
```
Done! Since we did not specify a file extension, NumPy automatically added `.npy`. Let's take a peek at the file contents:
```
with open("my_array.npy", "rb") as f:
content = f.read()
content
```
To load this file back into a NumPy array, use the `load` function:
```
a_loaded = np.load("my_array.npy")
a_loaded
```
## Text format
Now let's try saving the array in text format:
```
np.savetxt("my_array.csv", a)
```
Let's look at the file contents:
```
with open("my_array.csv", "rt") as f:
print(f.read())
```
This is a plain-text file with space-separated values (the default delimiter). You can specify a different delimiter if you prefer:
```
np.savetxt("my_array.csv", a, delimiter=",")
```
To load this file, use the `loadtxt` function:
```
a_loaded = np.loadtxt("my_array.csv", delimiter=",")
a_loaded
```
## Zipped `.npz` format
You can also save multiple arrays into a single zipped file:
```
b = np.arange(24, dtype=np.uint8).reshape(2, 3, 4)
b
np.savez("my_arrays", my_a=a, my_b=b)
```
Let's take a peek at the file contents. Note that the `.npz` file extension was added automatically.
```
with open("my_arrays.npz", "rb") as f:
content = f.read()
repr(content)[:180] + "[...]"
```
You can load this file as follows:
```
my_arrays = np.load("my_arrays.npz")
my_arrays
```
This is a dict-like object that loads the arrays lazily:
```
my_arrays.keys()
my_arrays["my_a"]
```
# What's next?
Now you know all the NumPy basics, but there are many more features available. The best way to learn them is to experiment with NumPy and check out the excellent [NumPy documentation](http://docs.scipy.org/doc/numpy/reference/index.html) to find the functions and features you need.
<div align="center">
<h1><img width="30" src="https://madewithml.com/static/images/rounded_logo.png"> <a href="https://madewithml.com/">Made With ML</a></h1>
Applied ML · MLOps · Production
<br>
Join 30K+ developers in learning how to responsibly <a href="https://madewithml.com/about/">deliver value</a> with ML.
<br>
</div>
<br>
<div align="center">
<a target="_blank" href="https://newsletter.madewithml.com"><img src="https://img.shields.io/badge/Subscribe-30K-brightgreen"></a>
<a target="_blank" href="https://github.com/GokuMohandas/MadeWithML"><img src="https://img.shields.io/github/stars/GokuMohandas/MadeWithML.svg?style=social&label=Star"></a>
<a target="_blank" href="https://www.linkedin.com/in/goku"><img src="https://img.shields.io/badge/style--5eba00.svg?label=LinkedIn&logo=linkedin&style=social"></a>
<a target="_blank" href="https://twitter.com/GokuMohandas"><img src="https://img.shields.io/twitter/follow/GokuMohandas.svg?label=Follow&style=social"></a>
<br>
🔥 Among the <a href="https://github.com/topics/deep-learning" target="_blank">top ML</a> repositories on GitHub
</div>
<br>
<hr>
# Transformers
In this lesson we will learn how to implement the Transformer architecture to extract contextual embeddings for our text classification task.
<div align="left">
<a target="_blank" href="https://madewithml.com/courses/foundations/transformers/"><img src="https://img.shields.io/badge/📖 Read-blog post-9cf"></a>
<a href="https://github.com/GokuMohandas/MadeWithML/blob/main/notebooks/15_Transformers.ipynb" role="button"><img src="https://img.shields.io/static/v1?label=&message=View%20On%20GitHub&color=586069&logo=github&labelColor=2f363d"></a>
<a href="https://colab.research.google.com/github/GokuMohandas/MadeWithML/blob/main/notebooks/15_Transformers.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
</div>
# Overview
Transformers are a very popular architecture that leverage and extend the concept of self-attention to create very useful representations of our input data for a downstream task.
- **advantages**:
- better representation for our input tokens via contextual embeddings where the token representation is based on the specific neighboring tokens using self-attention.
- sub-word tokens, as opposed to character tokens, since they can hold more meaningful representation for many of our keywords, prefixes, suffixes, etc.
- attend (in parallel) to all the tokens in our input, as opposed to being limited by filter spans (CNNs) or memory issues from sequential processing (RNNs).
- **disadvantages**:
- computationally intensive
- require large amounts of data (mitigated by using pretrained models)
<div align="left">
<img src="https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/images/foundations/transformers/architecture.png" width="800">
</div>
<div align="left">
<small><a href="https://arxiv.org/abs/1706.03762" target="_blank">Attention Is All You Need</a></small>
</div>
# Set up
```
!pip install transformers==3.0.2 -q
import numpy as np
import pandas as pd
import random
import torch
import torch.nn as nn
SEED = 1234
def set_seeds(seed=1234):
"""Set seeds for reproducibility."""
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
    torch.cuda.manual_seed_all(seed) # multi-GPU

# Set seeds for reproducibility
set_seeds(seed=SEED)
# Set device
cuda = True
device = torch.device("cuda" if (
torch.cuda.is_available() and cuda) else "cpu")
torch.set_default_tensor_type("torch.FloatTensor")
if device.type == "cuda":
torch.set_default_tensor_type("torch.cuda.FloatTensor")
print (device)
```
## Load data
We will download the [AG News dataset](http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html), which consists of 120K text samples from 4 unique classes (`Business`, `Sci/Tech`, `Sports`, `World`)
```
import numpy as np
import pandas as pd
import re
import urllib
# Load data
url = "https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/datasets/news.csv"
df = pd.read_csv(url, header=0) # load
df = df.sample(frac=1).reset_index(drop=True) # shuffle
df.head()
# Reduce data size (too large to fit in Colab's limited memory)
df = df[:10000]
print (len(df))
```
## Preprocessing
We're going to clean up our input data first with operations such as lowercasing the text, removing stop (filler) words, applying regular-expression filters, etc.
```
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
import re
nltk.download("stopwords")
STOPWORDS = stopwords.words("english")
print (STOPWORDS[:5])
porter = PorterStemmer()
def preprocess(text, stopwords=STOPWORDS):
"""Conditional preprocessing on our text unique to our task."""
# Lower
text = text.lower()
# Remove stopwords
pattern = re.compile(r'\b(' + r'|'.join(stopwords) + r')\b\s*')
text = pattern.sub('', text)
    # Remove words in parentheses
text = re.sub(r'\([^)]*\)', '', text)
# Spacing and filters
text = re.sub(r"([-;;.,!?<=>])", r" \1 ", text)
text = re.sub('[^A-Za-z0-9]+', ' ', text) # remove non alphanumeric chars
text = re.sub(' +', ' ', text) # remove multiple spaces
text = text.strip()
return text
# Sample
text = "Great week for the NYSE!"
preprocess(text=text)
# Apply to dataframe
preprocessed_df = df.copy()
preprocessed_df.title = preprocessed_df.title.apply(preprocess)
print (f"{df.title.values[0]}\n\n{preprocessed_df.title.values[0]}")
```
## Split data
```
import collections
from sklearn.model_selection import train_test_split
TRAIN_SIZE = 0.7
VAL_SIZE = 0.15
TEST_SIZE = 0.15
def train_val_test_split(X, y, train_size):
"""Split dataset into data splits."""
    X_train, X_, y_train, y_ = train_test_split(X, y, train_size=train_size, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(X_, y_, train_size=0.5, stratify=y_)
return X_train, X_val, X_test, y_train, y_val, y_test
# Data
X = preprocessed_df["title"].values
y = preprocessed_df["category"].values
# Create data splits
X_train, X_val, X_test, y_train, y_val, y_test = train_val_test_split(
X=X, y=y, train_size=TRAIN_SIZE)
print (f"X_train: {X_train.shape}, y_train: {y_train.shape}")
print (f"X_val: {X_val.shape}, y_val: {y_val.shape}")
print (f"X_test: {X_test.shape}, y_test: {y_test.shape}")
print (f"Sample point: {X_train[0]} → {y_train[0]}")
```
## Label encoder
```
class LabelEncoder(object):
"""Label encoder for tag labels."""
def __init__(self, class_to_index={}):
self.class_to_index = class_to_index
self.index_to_class = {v: k for k, v in self.class_to_index.items()}
self.classes = list(self.class_to_index.keys())
def __len__(self):
return len(self.class_to_index)
def __str__(self):
return f"<LabelEncoder(num_classes={len(self)})>"
def fit(self, y):
classes = np.unique(y)
for i, class_ in enumerate(classes):
self.class_to_index[class_] = i
self.index_to_class = {v: k for k, v in self.class_to_index.items()}
self.classes = list(self.class_to_index.keys())
return self
def encode(self, y):
y_one_hot = np.zeros((len(y), len(self.class_to_index)), dtype=int)
for i, item in enumerate(y):
y_one_hot[i][self.class_to_index[item]] = 1
return y_one_hot
def decode(self, y):
classes = []
for i, item in enumerate(y):
index = np.where(item == 1)[0][0]
classes.append(self.index_to_class[index])
return classes
def save(self, fp):
with open(fp, "w") as fp:
contents = {'class_to_index': self.class_to_index}
json.dump(contents, fp, indent=4, sort_keys=False)
@classmethod
def load(cls, fp):
with open(fp, "r") as fp:
kwargs = json.load(fp=fp)
return cls(**kwargs)
# Encode
label_encoder = LabelEncoder()
label_encoder.fit(y_train)
num_classes = len(label_encoder)
label_encoder.class_to_index
# Class weights
counts = np.bincount([label_encoder.class_to_index[class_] for class_ in y_train])
class_weights = {i: 1.0/count for i, count in enumerate(counts)}
print (f"counts: {counts}\nweights: {class_weights}")
# Convert labels to tokens
print (f"y_train[0]: {y_train[0]}")
y_train = label_encoder.encode(y_train)
y_val = label_encoder.encode(y_val)
y_test = label_encoder.encode(y_test)
print (f"y_train[0]: {y_train[0]}")
print (f"decode([y_train[0]]): {label_encoder.decode([y_train[0]])}")
```
## Tokenizer
We'll be using the [BertTokenizer](https://huggingface.co/transformers/model_doc/bert.html#berttokenizer) to tokenize our input text into sub-word tokens.
```
from transformers import DistilBertTokenizer
from transformers import BertTokenizer
# Load tokenizer and model
# tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
vocab_size = len(tokenizer)
print (vocab_size)
# Tokenize inputs
encoded_input = tokenizer(X_train.tolist(), return_tensors="pt", padding=True)
X_train_ids = encoded_input["input_ids"]
X_train_masks = encoded_input["attention_mask"]
print (X_train_ids.shape, X_train_masks.shape)
encoded_input = tokenizer(X_val.tolist(), return_tensors="pt", padding=True)
X_val_ids = encoded_input["input_ids"]
X_val_masks = encoded_input["attention_mask"]
print (X_val_ids.shape, X_val_masks.shape)
encoded_input = tokenizer(X_test.tolist(), return_tensors="pt", padding=True)
X_test_ids = encoded_input["input_ids"]
X_test_masks = encoded_input["attention_mask"]
print (X_test_ids.shape, X_test_masks.shape)
# Decode
print (f"{X_train_ids[0]}\n{tokenizer.decode(X_train_ids[0])}")
# Sub-word tokens
print (tokenizer.convert_ids_to_tokens(ids=X_train_ids[0]))
```
## Datasets
We're going to create Datasets and DataLoaders to be able to efficiently create batches with our data splits.
```
class TransformerTextDataset(torch.utils.data.Dataset):
def __init__(self, ids, masks, targets):
self.ids = ids
self.masks = masks
self.targets = targets
def __len__(self):
return len(self.targets)
def __str__(self):
return f"<Dataset(N={len(self)})>"
def __getitem__(self, index):
ids = torch.tensor(self.ids[index], dtype=torch.long)
masks = torch.tensor(self.masks[index], dtype=torch.long)
targets = torch.FloatTensor(self.targets[index])
return ids, masks, targets
def create_dataloader(self, batch_size, shuffle=False, drop_last=False):
return torch.utils.data.DataLoader(
dataset=self,
batch_size=batch_size,
shuffle=shuffle,
drop_last=drop_last,
pin_memory=False)
# Create datasets
train_dataset = TransformerTextDataset(ids=X_train_ids, masks=X_train_masks, targets=y_train)
val_dataset = TransformerTextDataset(ids=X_val_ids, masks=X_val_masks, targets=y_val)
test_dataset = TransformerTextDataset(ids=X_test_ids, masks=X_test_masks, targets=y_test)
print ("Data splits:\n"
f" Train dataset:{train_dataset.__str__()}\n"
f" Val dataset: {val_dataset.__str__()}\n"
f" Test dataset: {test_dataset.__str__()}\n"
"Sample point:\n"
f" ids: {train_dataset[0][0]}\n"
f" masks: {train_dataset[0][1]}\n"
f" targets: {train_dataset[0][2]}")
# Create dataloaders
batch_size = 128
train_dataloader = train_dataset.create_dataloader(
batch_size=batch_size)
val_dataloader = val_dataset.create_dataloader(
batch_size=batch_size)
test_dataloader = test_dataset.create_dataloader(
batch_size=batch_size)
batch = next(iter(train_dataloader))
print ("Sample batch:\n"
f" ids: {batch[0].size()}\n"
f" masks: {batch[1].size()}\n"
f" targets: {batch[2].size()}")
```
## Trainer
Let's create the `Trainer` class that we'll use to facilitate training for our experiments.
```
import torch.nn.functional as F
class Trainer(object):
def __init__(self, model, device, loss_fn=None, optimizer=None, scheduler=None):
# Set params
self.model = model
self.device = device
self.loss_fn = loss_fn
self.optimizer = optimizer
self.scheduler = scheduler
def train_step(self, dataloader):
"""Train step."""
# Set model to train mode
self.model.train()
loss = 0.0
# Iterate over train batches
for i, batch in enumerate(dataloader):
# Step
batch = [item.to(self.device) for item in batch] # Set device
inputs, targets = batch[:-1], batch[-1]
self.optimizer.zero_grad() # Reset gradients
z = self.model(inputs) # Forward pass
J = self.loss_fn(z, targets) # Define loss
J.backward() # Backward pass
self.optimizer.step() # Update weights
# Cumulative Metrics
loss += (J.detach().item() - loss) / (i + 1)
return loss
def eval_step(self, dataloader):
"""Validation or test step."""
# Set model to eval mode
self.model.eval()
loss = 0.0
y_trues, y_probs = [], []
# Iterate over val batches
with torch.inference_mode():
for i, batch in enumerate(dataloader):
# Step
batch = [item.to(self.device) for item in batch] # Set device
inputs, y_true = batch[:-1], batch[-1]
z = self.model(inputs) # Forward pass
J = self.loss_fn(z, y_true).item()
# Cumulative Metrics
loss += (J - loss) / (i + 1)
# Store outputs
y_prob = F.softmax(z).cpu().numpy()
y_probs.extend(y_prob)
y_trues.extend(y_true.cpu().numpy())
return loss, np.vstack(y_trues), np.vstack(y_probs)
def predict_step(self, dataloader):
"""Prediction step."""
# Set model to eval mode
self.model.eval()
y_probs = []
# Iterate over val batches
with torch.inference_mode():
for i, batch in enumerate(dataloader):
# Forward pass w/ inputs
inputs, targets = batch[:-1], batch[-1]
z = self.model(inputs)
# Store outputs
y_prob = F.softmax(z).cpu().numpy()
y_probs.extend(y_prob)
return np.vstack(y_probs)
def train(self, num_epochs, patience, train_dataloader, val_dataloader):
best_val_loss = np.inf
for epoch in range(num_epochs):
# Steps
train_loss = self.train_step(dataloader=train_dataloader)
val_loss, _, _ = self.eval_step(dataloader=val_dataloader)
self.scheduler.step(val_loss)
# Early stopping
if val_loss < best_val_loss:
best_val_loss = val_loss
best_model = self.model
_patience = patience # reset _patience
else:
_patience -= 1
if not _patience: # 0
print("Stopping early!")
break
# Logging
print(
f"Epoch: {epoch+1} | "
f"train_loss: {train_loss:.5f}, "
f"val_loss: {val_loss:.5f}, "
f"lr: {self.optimizer.param_groups[0]['lr']:.2E}, "
f"_patience: {_patience}"
)
return best_model
```
# Transformer
## Scaled dot-product attention
The most popular type of self-attention is scaled dot-product attention from the widely-cited [Attention is all you need](https://arxiv.org/abs/1706.03762) paper. This type of attention involves projecting our encoded input sequences onto three matrices, queries (Q), keys (K) and values (V), whose weights we learn.
$ inputs \in \mathbb{R}^{N \times M \times H} $ ($N$ = batch size, $M$ = sequence length, $H$ = hidden dim)
$ Q = XW_q $ where $ W_q \in \mathbb{R}^{H \times d_q} $
$ K = XW_k $ where $ W_k \in \mathbb{R}^{H \times d_k} $
$ V = XW_v $ where $ W_v \in \mathbb{R}^{H \times d_v} $
$ attention (Q, K, V) = softmax( \frac{Q K^{T}}{\sqrt{d_k}} )V \in \mathbb{R}^{M \times d_v} $
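As a concrete illustration (not part of the original lesson), here is a minimal PyTorch sketch of scaled dot-product attention; the tensor names mirror the equations above, and the shapes in the example are arbitrary:
```
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    """Q: (..., M, d_q), K: (..., M, d_k), V: (..., M, d_v) with d_q == d_k."""
    d_k = K.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)  # (..., M, M)
    weights = F.softmax(scores, dim=-1)                # attention weights over the keys
    return weights @ V                                 # (..., M, d_v)

# Example: batch of 2 sequences, 5 tokens, hidden dim 16
x = torch.randn(2, 5, 16)
W_q, W_k, W_v = [torch.nn.Linear(16, 16) for _ in range(3)]
context = scaled_dot_product_attention(W_q(x), W_k(x), W_v(x))
print(context.shape)  # torch.Size([2, 5, 16])
```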
## Multi-head attention
Instead of applying self-attention only once across the entire encoded input, we can also split the input and apply self-attention in parallel (heads) to each section, then concatenate the results. This allows the different heads to learn unique representations while keeping the overall complexity comparable, since we split the input into smaller subspaces.
$ MultiHead(Q, K, V) = concat({head}_1, ..., {head}_{h})W_O $
* ${head}_i = attention(Q_i, K_i, V_i) $
* $h$ = # of self-attention heads
* $W_O \in \mathbb{R}^{hd_v \times H} $
* $H$ = hidden dim. (or dimension of the model $d_{model}$)
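A rough sketch of the multi-head version, reusing `x` and the attention function from the sketch above (for illustration only; in practice PyTorch's built-in `nn.MultiheadAttention` implements this):
```
class MultiHeadAttention(nn.Module):
    def __init__(self, hidden_dim, num_heads):
        super().__init__()
        assert hidden_dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = hidden_dim // num_heads
        self.W_q = nn.Linear(hidden_dim, hidden_dim)
        self.W_k = nn.Linear(hidden_dim, hidden_dim)
        self.W_v = nn.Linear(hidden_dim, hidden_dim)
        self.W_o = nn.Linear(hidden_dim, hidden_dim)

    def split_heads(self, z):
        N, M, H = z.shape
        return z.view(N, M, self.num_heads, self.head_dim).transpose(1, 2)  # (N, h, M, d)

    def forward(self, z):
        Q = self.split_heads(self.W_q(z))
        K = self.split_heads(self.W_k(z))
        V = self.split_heads(self.W_v(z))
        heads = scaled_dot_product_attention(Q, K, V)          # (N, h, M, d)
        N, h, M, d = heads.shape
        concat = heads.transpose(1, 2).reshape(N, M, h * d)    # concatenate the heads
        return self.W_o(concat)

print(MultiHeadAttention(hidden_dim=16, num_heads=4)(x).shape)  # torch.Size([2, 5, 16])
```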
## Positional encoding
With self-attention, we aren't able to account for the sequential position of our input tokens. To address this, we can use positional encoding to create a representation of the location of each token with respect to the entire sequence. This can either be learned (with weights) or we can use a fixed function that can better extend to create positional encoding for lengths during inference that were not observed during training.
$ PE_{(pos,2i)} = sin({pos}/{10000^{2i/H}}) $
$ PE_{(pos,2i+1)} = cos({pos}/{10000^{2i/H}}) $
where:
* $pos$ = position of the token $(1...M)$
* $i$ = hidden dim $(1..H)$
This effectively allows us to represent each token's relative position using a fixed function, even for very large sequences. And because we've constrained the positional encodings to have the same dimensions as our encoded inputs, we can simply add them to the encoded inputs before feeding them into the multi-head attention layers.
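A small sketch of these fixed sinusoidal encodings (illustration only; the pretrained BERT model we load below uses learned position embeddings instead):
```
def positional_encoding(max_len, hidden_dim):
    """PE[pos, 2i] = sin(pos/10000^(2i/H)), PE[pos, 2i+1] = cos(pos/10000^(2i/H))."""
    pos = np.arange(max_len)[:, np.newaxis]          # (M, 1)
    two_i = np.arange(0, hidden_dim, 2)[np.newaxis]  # (1, H/2), holds the values 2i
    angles = pos / np.power(10000, two_i / hidden_dim)
    pe = np.zeros((max_len, hidden_dim))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

print(positional_encoding(max_len=8, hidden_dim=16).shape)  # (8, 16)
```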
## Architecture
And here's how it all fits together! It's an end-to-end architecture that creates these contextual representations and uses an encoder-decoder architecture to predict the outcomes (one-to-one, many-to-one, many-to-many, etc.) Due to the complexity of the architecture, they require massive amounts of data for training without overfitting, however, they can be leveraged as pretrained models to finetune with smaller datasets that are similar to the larger set it was initially trained on.
<div align="left">
<img src="https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/images/foundations/transformers/architecture.png" width="800">
</div>
<div align="left">
<small><a href="https://arxiv.org/abs/1706.03762" target="_blank">Attention Is All You Need</a></small>
</div>
> We're not going to implement the Transformer [from scratch](https://nlp.seas.harvard.edu/2018/04/03/attention.html) but we will use the [Hugging Face library](https://github.com/huggingface/transformers) to load a pretrained [BertModel](https://huggingface.co/transformers/model_doc/bert.html#bertmodel), which we'll use as a feature extractor and fine-tune on our own dataset.
## Model
We're going to use a pretrained [BertModel](https://huggingface.co/transformers/model_doc/bert.html#bertmodel) to act as a feature extractor. We'll only use the encoder to receive sequential and pooled outputs (`is_decoder=False` is default).
```
from transformers import BertModel
# transformer = BertModel.from_pretrained("distilbert-base-uncased")
# embedding_dim = transformer.config.dim
transformer = BertModel.from_pretrained("allenai/scibert_scivocab_uncased")
embedding_dim = transformer.config.hidden_size
class Transformer(nn.Module):
def __init__(self, transformer, dropout_p, embedding_dim, num_classes):
super(Transformer, self).__init__()
self.transformer = transformer
self.dropout = torch.nn.Dropout(dropout_p)
self.fc1 = torch.nn.Linear(embedding_dim, num_classes)
def forward(self, inputs):
ids, masks = inputs
seq, pool = self.transformer(input_ids=ids, attention_mask=masks)
z = self.dropout(pool)
z = self.fc1(z)
return z
```
> We decided to work with the pooled output, but we could have just as easily worked with the sequential output (encoder representation for each sub-token) and applied a CNN (or other decoder options) on top of it.
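For instance, that variant might look roughly like this (an illustrative sketch only, not code used in this lesson; the kernel size, number of filters, and pooling are arbitrary choices):
```
class TransformerWithCNN(nn.Module):
    """Same idea as `Transformer` above, but decodes the per-token (sequential) output with a 1D conv."""
    def __init__(self, transformer, dropout_p, embedding_dim, num_classes, num_filters=128):
        super().__init__()
        self.transformer = transformer
        self.conv = nn.Conv1d(in_channels=embedding_dim, out_channels=num_filters,
                              kernel_size=3, padding=1)
        self.dropout = nn.Dropout(dropout_p)
        self.fc1 = nn.Linear(num_filters, num_classes)

    def forward(self, inputs):
        ids, masks = inputs
        seq, pool = self.transformer(input_ids=ids, attention_mask=masks)  # seq: (N, M, H)
        z = self.conv(seq.permute(0, 2, 1))        # (N, num_filters, M)
        z = torch.relu(z).max(dim=-1).values       # global max pool over tokens
        return self.fc1(self.dropout(z))
```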
```
# Initialize model
dropout_p = 0.5
model = Transformer(
transformer=transformer, dropout_p=dropout_p,
embedding_dim=embedding_dim, num_classes=num_classes)
model = model.to(device)
print (model.named_parameters)
```
## Training
```
# Arguments
lr = 1e-4
num_epochs = 100
patience = 10
# Define loss
class_weights_tensor = torch.Tensor(np.array(list(class_weights.values())))
loss_fn = nn.BCEWithLogitsLoss(weight=class_weights_tensor)
# Define optimizer & scheduler
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
optimizer, mode="min", factor=0.1, patience=5)
# Trainer module
trainer = Trainer(
model=model, device=device, loss_fn=loss_fn,
optimizer=optimizer, scheduler=scheduler)
# Train
best_model = trainer.train(num_epochs, patience, train_dataloader, val_dataloader)
```
## Evaluation
```
import json
from sklearn.metrics import precision_recall_fscore_support
def get_performance(y_true, y_pred, classes):
"""Per-class performance metrics."""
# Performance
performance = {"overall": {}, "class": {}}
# Overall performance
metrics = precision_recall_fscore_support(y_true, y_pred, average="weighted")
performance["overall"]["precision"] = metrics[0]
performance["overall"]["recall"] = metrics[1]
performance["overall"]["f1"] = metrics[2]
performance["overall"]["num_samples"] = np.float64(len(y_true))
# Per-class performance
metrics = precision_recall_fscore_support(y_true, y_pred, average=None)
for i in range(len(classes)):
performance["class"][classes[i]] = {
"precision": metrics[0][i],
"recall": metrics[1][i],
"f1": metrics[2][i],
"num_samples": np.float64(metrics[3][i]),
}
return performance
# Get predictions
test_loss, y_true, y_prob = trainer.eval_step(dataloader=test_dataloader)
y_pred = np.argmax(y_prob, axis=1)
# Determine performance
performance = get_performance(
y_true=np.argmax(y_true, axis=1), y_pred=y_pred, classes=label_encoder.classes)
print (json.dumps(performance["overall"], indent=2))
# Save artifacts
from pathlib import Path
dir = Path("transformers")
dir.mkdir(parents=True, exist_ok=True)
label_encoder.save(fp=Path(dir, "label_encoder.json"))
torch.save(best_model.state_dict(), Path(dir, "model.pt"))
with open(Path(dir, "performance.json"), "w") as fp:
json.dump(performance, indent=2, sort_keys=False, fp=fp)
```
## Inference
```
def get_probability_distribution(y_prob, classes):
"""Create a dict of class probabilities from an array."""
results = {}
for i, class_ in enumerate(classes):
results[class_] = np.float64(y_prob[i])
sorted_results = {k: v for k, v in sorted(
results.items(), key=lambda item: item[1], reverse=True)}
return sorted_results
# Load artifacts
device = torch.device("cpu")
tokenizer = BertTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
label_encoder = LabelEncoder.load(fp=Path(dir, "label_encoder.json"))
transformer = BertModel.from_pretrained("allenai/scibert_scivocab_uncased")
embedding_dim = transformer.config.hidden_size
model = Transformer(
transformer=transformer, dropout_p=dropout_p,
embedding_dim=embedding_dim, num_classes=num_classes)
model.load_state_dict(torch.load(Path(dir, "model.pt"), map_location=device))
model.to(device);
# Initialize trainer
trainer = Trainer(model=model, device=device)
# Create datasets
train_dataset = TransformerTextDataset(ids=X_train_ids, masks=X_train_masks, targets=y_train)
val_dataset = TransformerTextDataset(ids=X_val_ids, masks=X_val_masks, targets=y_val)
test_dataset = TransformerTextDataset(ids=X_test_ids, masks=X_test_masks, targets=y_test)
print ("Data splits:\n"
f" Train dataset:{train_dataset.__str__()}\n"
f" Val dataset: {val_dataset.__str__()}\n"
f" Test dataset: {test_dataset.__str__()}\n"
"Sample point:\n"
f" ids: {train_dataset[0][0]}\n"
f" masks: {train_dataset[0][1]}\n"
f" targets: {train_dataset[0][2]}")
# Dataloader
text = "The final tennis tournament starts next week."
X = preprocess(text)
encoded_input = tokenizer(X, return_tensors="pt", padding=True).to(torch.device("cpu"))
ids = encoded_input["input_ids"]
masks = encoded_input["attention_mask"]
y_filler = label_encoder.encode([label_encoder.classes[0]]*len(ids))
dataset = TransformerTextDataset(ids=ids, masks=masks, targets=y_filler)
dataloader = dataset.create_dataloader(batch_size=int(batch_size))
# Inference
y_prob = trainer.predict_step(dataloader)
y_pred = np.argmax(y_prob, axis=1)
label_encoder.index_to_class[y_pred[0]]
# Class distributions
prob_dist = get_probability_distribution(y_prob=y_prob[0], classes=label_encoder.classes)
print (json.dumps(prob_dist, indent=2))
```
## Interpretability
Let's visualize the self-attention weights from each of the attention heads in the encoder.
```
import sys
!rm -r bertviz_repo
!test -d bertviz_repo || git clone https://github.com/jessevig/bertviz bertviz_repo
if not "bertviz_repo" in sys.path:
sys.path += ["bertviz_repo"]
from bertviz import head_view
# Print input ids
print (ids)
print (tokenizer.batch_decode(ids))
# Get encoder attentions
seq, pool, attn = model.transformer(input_ids=ids, attention_mask=masks, output_attentions=True)
print (len(attn)) # one attention tensor per encoder layer (12 layers)
print (attn[0].shape) # (batch_size, num_heads, seq_len, seq_len)
# HTML set up
def call_html():
import IPython
display(IPython.core.display.HTML('''
<script src="/static/components/requirejs/require.js"></script>
<script>
requirejs.config({
paths: {
base: '/static/base',
"d3": "https://cdnjs.cloudflare.com/ajax/libs/d3/3.5.8/d3.min",
jquery: '//ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min',
},
});
</script>
'''))
# Visualize self-attention weights
call_html()
tokens = tokenizer.convert_ids_to_tokens(ids[0])
head_view(attention=attn, tokens=tokens)
```
> Now you're ready to start the [MLOps lessons](https://madewithml.com/#mlops) to learn how to apply all this foundational modeling knowledge to responsibly deliver value.
# `Permutation` explainer
This notebook demonstrates how to use the Permutation explainer on some simple datasets. The Permutation explainer is model-agnostic, so it can compute Shapley values and Owen values for any model. It works by iterating over complete permutations of the features, forward and in reverse. By changing one feature at a time we can minimize the number of model evaluations that are required, and we always satisfy the efficiency property no matter how many executions of the original model we choose to use to approximate the feature attribution values. So the SHAP values computed, while approximate, sum up exactly to the difference between the base value of the model and the output of the model for each explained instance.
Because the Permutation explainer has important performance optimizations, and does not require regularization parameter tuning like the Kernel explainer, it is the default model-agnostic explainer used for tabular datasets that have more features than would be appropriate for the Exact explainer.
Below we demonstrate how to use the Permutation explainer on a simple adult income classification dataset and model.
```
import shap
import xgboost
# get a dataset on income prediction
X,y = shap.datasets.adult()
# train an XGBoost model (but any other model type would also work)
model = xgboost.XGBClassifier()
model.fit(X, y);
```
## Tabular data with independent (Shapley value) masking
```
# build a Permutation explainer and explain the model predictions on the given dataset
explainer = shap.explainers.Permutation(model.predict_proba, X)
shap_values = explainer(X[:100])
# get just the explanations for the positive class
shap_values = shap_values[...,1]
```
### Plot a global summary
```
shap.plots.bar(shap_values)
```
### Plot a single instance
```
shap.plots.waterfall(shap_values[0])
```
## Tabular data with partition (Owen value) masking
While Shapley values result from treating each feature independently of the other features, it is often useful to enforce a structure on the model inputs. Enforcing such a structure produces a structured game (i.e. a game with rules about valid input feature coalitions), and when that structure is a nested set of feature groupings we get the Owen values as a recursive application of Shapley values to the groups. In SHAP, we take the partitioning to the limit and build a binary hierarchical clustering tree to represent the structure of the data. This structure could be chosen in many ways, but for tabular data it is often helpful to build the structure from the redundancy of information between the input features about the output label. This is what we do below:
```
# build a clustering of the features based on shared information about y
clustering = shap.utils.hclust(X, y)
# above we implicitly used shap.maskers.Independent by passing a raw dataframe as the masker
# now we explicitly use a Partition masker that uses the clustering we just computed
masker = shap.maskers.Partition(X, clustering=clustering)
# build a Permutation explainer and explain the model predictions on the given dataset
explainer = shap.explainers.Permutation(model.predict_proba, masker)
shap_values2 = explainer(X[:100])
# get just the explanations for the positive class
shap_values2 = shap_values2[...,1]
```
### Plot a global summary
Note that only the Relationship and Marital status features share more than 50% of their explanation power (as measured by R2) with each other, so all the other parts of the clustering tree are removed by the default `clustering_cutoff=0.5` setting:
```
shap.plots.bar(shap_values2)
```
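If more of the clustering structure is of interest, the cutoff can presumably be raised via the `clustering_cutoff` argument referenced above (an illustrative call, assuming the keyword is accepted by `shap.plots.bar`):
```
# show more of the hierarchical clustering structure (0.5 is the default cutoff)
shap.plots.bar(shap_values2, clustering_cutoff=1.0)
```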
### Plot a single instance
Note that there is a strong similarity between the explanation from the Independent masker above and the Partition masker here. In general the distinctions between these methods for tabular data are not large, though the Partition masker allows for much faster runtime and potentially more realistic manipulations of the model inputs (since groups of clustered features are masked/unmasked together).
```
shap.plots.waterfall(shap_values2[0])
```
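To check the runtime claim on your own machine, a rough timing sketch might look like the following (exact numbers will vary with hardware and model):
```
import time

# rough comparison: independent masking vs. partition masking on 100 rows
for name, m in [("independent", X), ("partition", masker)]:
    start = time.time()
    shap.explainers.Permutation(model.predict_proba, m)(X[:100])
    print(f"{name} masking: {time.time() - start:.1f}s")
```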
<hr>
Have an idea for more helpful examples? Pull requests that add to this documentation notebook are encouraged!
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
import os
print(os.listdir("../input"))
# Any results you write to the current directory are saved as output.
train_set = pd.read_csv("../input/train.csv")
test_set = pd.read_csv("../input/test.csv")
train_set.head()
""" Exploratory Data Analysis """
print(train_set['Sex'].value_counts())
print(train_set['Embarked'].value_counts())
print(train_set.isnull().values.any())
print(train_set.isnull().sum().sum())
print(train_set.describe())
# Selecting required features from training dataset
train_set.drop(['PassengerId','Name','Cabin','Ticket'],axis=1 ,inplace=True)
test_set.drop(['PassengerId','Name','Cabin','Ticket'],axis=1, inplace=True)
print(train_set.head())
print(test_set.head())
#Encoding Categorical Data
train_set = pd.get_dummies(data= train_set , dummy_na = True,columns =['Sex' , 'Embarked'])
test_set = pd.get_dummies(data= test_set , dummy_na = True,columns =['Sex' , 'Embarked'])
train_set.drop('Sex_nan',axis=1,inplace=True)
test_set.drop('Sex_nan',axis=1,inplace=True)
print(train_set.head())
print(test_set.head())
# impute missing values by mean on train and test set
train_set.fillna(train_set.mean(),inplace=True)
train_set.isnull().values.any()
test_set.fillna(train_set.mean(),inplace=True)
#Checking for nan values
test_set.isnull().values.any()
# Selecting Features and target
X = train_set.iloc[:,1:13].values
y = train_set.iloc[:,0].values
X_test = test_set.iloc[:,:].values
"""Validating Model for Parameter tuning """
from sklearn.model_selection import train_test_split
X_train , X_validate , y_train , y_validate = train_test_split(X,y,test_size=0.18,random_state=42)
#Feature Scaling
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_validate = sc_X.transform(X_validate)
#Now Applying Various ML Models For Classification
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=1000,min_samples_split=30,min_samples_leaf=4,random_state=42,warm_start=True)
clf.fit(X_train,y_train)
y_pred = clf.predict(X_validate)
#metrics
from sklearn.metrics import confusion_matrix
cnf = confusion_matrix(y_validate,y_pred)
print(cnf)
#Out of the 161 validation samples, 130 (84+46) predictions are correct
acu = (cnf[0, 0] + cnf[1, 1]) / cnf.sum() * 100  # accuracy (%) computed from the confusion matrix
print(acu)
#Now applying the model to the full dataset and predicting on the test data
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
X = sc_X.fit_transform(X)
X_test = sc_X.transform(X_test)
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=1000,min_samples_split=30,min_samples_leaf=4,random_state=42,warm_start=True)
clf.fit(X,y)
#Predicting survival on the test set
y_predict = clf.predict(X_test)
```
# `ADKit`
# Documentation
*****
## Introduction
Derivatives are ubiquitous in many fields such as engineering design optimization, fluid dynamics and machine learning. There are, in general, three ways to calculate derivatives: automatic differentiation, numeric differentiation, and symbolic differentiation. Automatic Differentiation (AD) refers to a family of techniques that can calculate the partial derivatives of any function at any point efficiently and accurately. Unlike numeric differentiation, AD does not suffer from floating-point truncation errors, since it evaluates the exact derivatives of simple elementary functions and keeps track of them, with no step size involved. Compared to symbolic differentiation, AD is less memory-intensive and can be much faster to compute. AD is therefore an important way to calculate derivatives in practice.
There are two modes of Automatic Differentiation: the forward mode and the reverse mode. In forward mode, the chain rule is applied to each basic operation, and both the variable's value and its derivative are calculated along the way, leading to a complete derivative trace. In reverse mode, there is a forward pass, where the intermediate variables are computed and their values and partial derivatives with respect to their inputs are stored in memory, and a reverse pass (popularly known as backpropagation), where we propagate the derivatives back with the help of the chain rule.
The software that we designed calculates derivatives of user-supplied functions using either the forward mode or the reverse mode of automatic differentiation, depending on the user's choice, and provides the user with an easy way to use those derivatives, for example in optimization problems.
## Background
At the core of Automatic Differentiation is the principle that functions implemented as computer code can be broken down into elementary functions, ranging from arithmetic operations (e.g. addition, subtraction) to other elementary functions (e.g. power, exponential, sin). Hence, any differentiable function can be interpreted as a composition of these simpler functions.
For example, given a function, $f = sin^2(2x)$, it can be rewritten as:
$$ f = \phi_1(\phi_2(\phi_3(x))) $$
where $$ \phi_1(z) = z^2, \phi_2(y) = sin(y) \text{ and } \phi_3(x) = 2x$$
In the forward mode, the chain rule can then be applied successively to each elementary component function to obtain the derivative of the function. Using the same example above, let $c$ be a real number:
$$ f'(c) = \phi_1'(\phi_2(\phi_3(c))) \cdot \phi_2'(\phi_3(c)) \cdot \phi_3'(c)$$
Based on the example above, the derivative, $f'(c)$, can be evaluated from the following function-derivative pairs at each stage of computing the function:
$$(\phi_3(c), \phi_3'(c))$$
$$(\phi_2(\phi_3(c)), \phi_2'(\phi_3(c)) \cdot \phi_3'(c))$$
$$(\phi_1(\phi_2(\phi_3(c))), \phi_1'(\phi_2(\phi_3(c))) \cdot \phi_2'(\phi_3(c)) \cdot \phi_3'(c))$$
Effectively, the forward mode computes the Jacobian-vector product, $Jp$. This decomposition can be represented via a computational graph structure of calculations, requiring initial values to be set for $x_1$, and $x'_1$:
$$x_1 \rightarrow^{\phi_3(x)} x_2 \rightarrow^{\phi_2(x)} x_3 \rightarrow^{\phi_1(x)} y $$
where $$ \phi_1(x) = x^2, \phi_2(x) = sin(x) \text{ and } \phi_3(x) = 2x$$
At each stage of the function, the derivative of the function with respect to its argument is calculated. The exact values of the function and its derivative are used for the following function-derivative pair of values. An example of the computational trace for the equation $f = sin^2(2x)$ would look like this, for $x = \dfrac{\pi}{6}$.
| Trace | Elementary Operation | Derivative | $\left(f\left(a\right), f^{\prime}\left(a\right)\right)$ |
| :------: | :----------------------: | :------------------------------: | :------------------------------: |
| $x_{1}$ | $\dfrac{\pi}{6}$ | $1$ | $\left(\dfrac{\pi}{6}, 1\right)$ |
| $x_{2}$ | $2x_{1}$ | $2\dot{x}_{1}$ | $\left(\dfrac{\pi}{3}, 2\right)$ |
| $x_{3}$ | $\sin(x_{2})$ | $\cos\left(x_{2}\right)\dot{x}_{2}$ | $\left(\dfrac{\sqrt{3}}{2}, 1\right)$ |
| $x_{4}$ | $x_{3}^{2}$ | $2x_{3}\dot{x}_{3}$ | $\left(\dfrac{3}{4}, \sqrt{3}\right)$ |
By evaluating the derivative at each step of the chain rule, we eventually obtain the value of the derivative $f'(x) = \sqrt{3}$ at $x = \dfrac{\pi}{6}$, as second entry of the final tuple in the table.
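As a quick sanity check of the trace (an illustrative snippet, not part of `ADKit`), the analytic derivative $\frac{d}{dx}\sin^2(2x) = 2\sin(4x)$ does equal $\sqrt{3}$ at $x = \dfrac{\pi}{6}$:
```
import numpy as np

# d/dx sin^2(2x) = 4 sin(2x) cos(2x) = 2 sin(4x)
x = np.pi / 6
print(np.isclose(2 * np.sin(4 * x), np.sqrt(3)))  # True
```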
While the above illustrates the forward mode of AD (the focus of our package), AD also has a reverse mode. The reverse mode first does a forward pass, storing the intermediate values and the local partial derivatives of each elementary operation (without yet applying the chain rule), before undertaking a reverse pass, which starts from the final function to be differentiated, $y$. After fixing the derivative of the final function, it then propagates derivatives back through each component function recursively (using the chain rule) until the derivative of the function with respect to the basic-level arguments (e.g. $x_1$) can be calculated.
In terms of efficiency, the forward mode is more efficient when the number of functions to evaluate is much greater than the number of inputs, whereas the reverse mode, which computes the Jacobian-transpose-vector product $J^Tp$, is more efficient when the number of inputs is much greater than the number of functions.
More details on the reverse mode is covered in the **Extension (Reverse Mode)** section further below in the documentation.
## How to Use
### How to install `ADKit`
ADKit can be installed through the Python Package Index using the following command in the command terminal:
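Assuming the package is published on PyPI under the name `ADKit`, the command is along the lines of:
```
pip install ADKit
```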
Alternatively, the user may install ADKit by cloning the github repository (https://github.com/the-differentiators/cs207-FinalProject.git) or downloading as a zipped archive.
ADKit has only `numpy` (v. 1.14.3 or higher) as a pre-installation requirement. If `numpy` is not installed, it can be installed from the `requirements.txt` file included in the repository (after the repository is downloaded) using the following command:
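Run from the root of the downloaded repository:
```
pip install -r requirements.txt
```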
### How to use `ADKit` (Forward Mode)
The following steps walk the user through a demo of how to import and use the `ADKit` package.
#### Importing `ADKit.AutoDiff` and requirements
The following code imports the forward mode variable class from ADKit.
For the purposes of this demo, we will import `numpy`, which is a requirement for the `ADKit` package, as well as the forward mode variable class from ADKit's AutoDiff module.
```
import numpy as np
from ADKit.AutoDiff import Ad_Var
```
#### Using `ADKit` to compute derivative of a scalar function of one variable (forward mode)
Below, we have included a basic demo for a scalar function, given a single input. The function used in the demo is $f = sin^2(2x)$, which was used for illustration in the *Background* section earlier. Our objective is to use the `Ad_Var` class to compute the value of the derivative for this function automatically, unlike the manual computational trace drawn out earlier.
First, we create an instance of the `Ad_Var` object, with the value of $x = \dfrac{\pi}{6}$ assigned to the input variable, `val`.
```
a = np.pi / 6
x = Ad_Var(a)
```
The user should note that the `ADKit` package assumes that for a single input, the object being initialized will have a derivative value of 1 (stored as the instance attribute `self._ders`).
Next, we create `f`, which represents the full function. The `Ad_Var` object from the previous code can be used with dunder functions and additional functions within `Ad_Var` class to construct the full function being evaluated.
```
f = (Ad_Var.sin(2*x))**2
```
As the functions are applied to the original `Ad_Var` object `x`, the `_val` and `_ders` attributes of the object are being updated with new values. The object `f`, representing the full function, will have its `_val` and `_ders` attributes containing the actual function and derivative values respectively.
To note: the user also has the ability to manually set function and derivative values outside of instance initialization using the setter methods provided (`set_val` and `set_ders`). In this way, the user has the option to reuse the same objects after resetting the value and derivative(s).
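A small hedged sketch of that reuse pattern (the setter signatures are assumed here; `x2` and `g` are hypothetical names):
```
# hypothetical reuse of an Ad_Var instance at a new evaluation point
x2 = Ad_Var(1)
x2.set_val(np.pi / 4)
x2.set_ders(1)
g = (Ad_Var.sin(2*x2))**2  # recompute the function at the new point
```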
The associated function value and derivative(s) of any `Ad_Var` instance may be retrieved through the `get_val` and `get_ders` functions as shown below:
```
print(f.get_val(), f.get_ders())
```
Also, the function value and derivative can be printed by directly printing the `Ad_Var` object associated with the function `f`.
```
print(f)
```
#### Using `ADKit` to compute the gradient of a scalar multivariate function (forward mode)
If the user wants to calculate the value and the gradient vector of a scalar multivariate function, then each variable must be first instantiated as an `Ad_Var` object, with inputs `val`, the scalar value of that variable, and `ders`, a `numpy` array representing the seed vector which indicates a direction along which the directional derivative of a function will be calculated. An example is shown below:
```
x = Ad_Var(1, np.array([1, 0, 0]))
y = Ad_Var(2, np.array([0, 1, 0]))
z = Ad_Var(3, np.array([0, 0, 1]))
```
Then, the user can define the function which consists of the instantiated `Ad_Var` variables. For example, below, we are calculating the value and the gradient of the function $f = sin^2(2x) + z^y$:
```
f = (Ad_Var.sin(2*x))**2 + z**y
print(f)
```
As we can see above, the gradient of the function `f` is a 3-dimensional vector, since `f` is a function of 3 variables. The first dimension of the gradient vector is the directional derivative of `f` along the seed vector $[1, 0, 0]$. Since `x` was instantiated with this seed vector, the first dimension of the gradient vector corresponds to the partial derivative $\frac{\partial f}{\partial x}$ evaluated at $x=1, y=2, z=3$. Similarly, $y$ was instantiated with the seed vector $[0, 1, 0]$. In this way, the user has indicated that the second element of the gradient of `f` corresponds to $\frac{\partial f}{\partial y}$ evaluated at $x=1, y=2, z=3$. Similarly, $z$ was instantiated with the seed vector $[0, 0, 1]$, hence the third dimension of the gradient vector corresponds to $\frac{\partial f}{\partial z}$ evaluated at $x=1, y=2, z=3$.
In summary, each variable should be instantiated with a seed vector whose dimension equals the dimension of the gradient vector of the target function. For each variable, every entry of the seed vector should be 0 except for one entry, which holds the derivative of that variable (typically 1). The index of the nonzero element in the seed vector indicates the position in the gradient vector that stores the partial derivative of the target function with respect to that specific variable. For example, if a variable is initialized with a seed vector of $[1,0,0]$, then it should be interpreted as the first of the three variables, and its derivative is set to 1.
#### Using `ADKit` to compute derivative of a vector-valued multivariate function (forward mode)
The user can also use `ADKit` to calculate the value and the jacobian matrix of a vector-valued function. Again the variables must be instantiated in the same way as discussed above. Then, a vector-valued function can be defined as a numpy array of functions composed of instantiated `Ad_Var` variables. An example is shown below for the vector valued function $f = \begin{bmatrix}
sin^2(2x) + z^y \\
e^x + z
\end{bmatrix}$ for $x = 1, y = 2, z = 3$:
```
x = Ad_Var(1, np.array([1, 0, 0]))
y = Ad_Var(2, np.array([0, 1, 0]))
z = Ad_Var(3, np.array([0, 0, 1]))
f = np.array([(Ad_Var.sin(2*x))**2 + z**y, Ad_Var.exp(x) + z])
```
Then, the user can call `get_jacobian` to get the jacobian matrix of `f` evaluated at $x = 1, y = 2, z = 3$. The first argument of this method is the vector-valued function $f$ defined as a numpy array. The second argument is the dimension of the vector of functions (in this example the vector-valued function has 2 dimensions). The third argument is the number of variables composing the vector-valued function (in this example the vector-valued function is composed of 3 variables, $x, y$ and $z$).
```
Ad_Var.get_jacobian(f, 2, 3)
```
Also, the user can call `get_values` by passing `f`, to calculate the value of the vector-valued function for the given values of the variables.
```
Ad_Var.get_values(f)
```
Alternatively, the vector valued function can also be defined as a numpy array of other already instantiated functions, as shown below:
```
g = (Ad_Var.sin(2*x))**2 + z**y
h = Ad_Var.exp(x) + z
f = np.array([g, h])
Ad_Var.get_jacobian(f, 2, 3)
Ad_Var.get_values(f)
```
#### Using `ADKit` to compute the derivatives of any type of function on a grid of points (forward mode)
In the above examples, the derivative/gradient/jacobian of a function is evaluated at a single point which is defined by the value with which each variable is instantiated. `ADKit`, however, can be used to evaluate the derivative/gradient/jacobian of a function on a grid of points defined by the user. The first step to do this is again to instantiate the variables with any value (please note that the default value of an `Ad_Var` variable is 1 so the value argument can be skipped).
```
x = Ad_Var(ders = np.array([1, 0, 0]))
y = Ad_Var(ders = np.array([0, 1, 0]))
z = Ad_Var(ders = np.array([0, 0, 1]))
```
Then, the user needs to define the function as a string using the same standard syntax used in any of the examples above. For example, if the function is $f = sin^2(2x) + z^y$:
```
f_string = "(Ad_Var.sin(2*x))**2 + z**y"
```
Then, the user can call `grid_eval` to calculate the gradient and the value of the given function on a grid of points. The first argument passed is the function string. The second argument is a list of strings where each string represents one of the variables used in the function string. The third argument is the list of the already instantiated `Ad_Var` objects which are referenced in the function string. The last argument is a list of lists defining the grid of all possible points that the user wants to calculate the gradient and the value of the function for. For example, below the function and its gradient are evaluated for all possible combinations of $(x, y, z)$ where $x \in \{1, 2\}, y \in \{2, 3\}, z=4$. The function returns a dictionary where each key is one of the points of the grid and the value is a tuple. The first element of the tuple is the value of the function at this point and the second element of the tuple is the gradient of the function evaluated at this point.
```
Ad_Var.grid_eval(f_string, ['x', 'y', 'z'], [x, y, z], [[1, 2], [2,3], [4]])
```
The function `grid_eval` can also be used to evaluate the jacobian for vector valued functions at different points. In this case, the string representation of the vector-valued function must be written as a list of functions referencing the already instantiated `Ad_Var` variables. Please note that in this case the string representation corresponds to a list of functions and not a numpy array of functions. For example, if the user wants to evaluate the jacobian of the vector-valued function $f = \begin{bmatrix}
sin^2(2x) + z^y \\
e^x + z
\end{bmatrix}$ at different points, the function string should be defined as follows:
```
f_string = "[(Ad_Var.sin(2*x))**2 + z**y, Ad_Var.exp(x) + z]"
```
Then, by calling `grid_eval` on this function string, a dictionary is returned where each key is one of the points of the grid and the value is a tuple. The first element of the tuple is the value of the function at this point and the second element of the tuple is the jacobian of the vector-valued function evaluated at this point.
```
Ad_Var.grid_eval(f_string, ['x', 'y', 'z'], [x, y, z], [[1, 2], [2,3], [4]])
```
### How to use `ADKit` (Reverse Mode)
As part of the extension to the minimum requirements, we have implemented the reverse mode of Automatic Differentiation. Using a separate class `rAd_Var`, the user is able to use ADKit to implement the reverse mode. The value of using the reverse mode over the forward mode is the increase in efficiency when the number of inputs is much greater than the number of functions.
The user should note that usage of the `rAd_Var` class differs from that of the `Ad_Var` class in the following ways:
* The initialization of a `rAd_Var` instance does not allow for the input of a derivative value. The implementation necessitates that the derivative of the instance is initialized as None. There is the option for the user to manually set the derivative using the `set_ders` function, if they so wish to.
* The derivatives of the `rAd_Var` object obtained using the `get_ders` method will be returned as a numpy array of partial derivative(s) of the input variables, unlike the forward mode, where the final derivative at the given value of the input variables is calculated using the chain rule.
* To obtain a Jacobian matrix for vector functions with multiple real scalar inputs, e.g. $f = \begin{bmatrix}
xy \\
y \\
ln(x^y)\\
\end{bmatrix}$, the user will need to define the functions first before passing them (as Python functions) as arguments for the `get_jacobian` method, together with the variable names and the given values for the inputs, as demonstrated below.
*Note: In defining the functions, the user should only include variables used in the function as arguments. Adding additional variables would lead to errors in the calculation.*
This difference in the implementation of the Jacobian matrix is because `rAd_Var` objects can only be defined in the context of one function. Hence, feeding all the functions into the `get_jacobian` method allows for the Jacobian to obtain the respective partial derivatives for each function separately, before combining them and returning them in a single Jacobian matrix.
#### Using `ADKit` to compute derivative of a scalar function of one variable (reverse mode)
As a demo, we will again find the value and the derivative of $f = sin^2(2x)$ at $x = \dfrac{\pi}{6}$, similar to the demo for `Ad_Var` above.
```
from ADKit.AutoDiff import rAd_Var
a = np.pi / 6
x = rAd_Var(a)
f = (rAd_Var.sin(2*x))**2
print(f)
```
#### Using `rAd_Var` to compute derivative, values of a vector-valued multivariate function (reverse mode)
We will use a similar function as shown above to obtain the Jacobian matrix of a vector-valued function. The functions in the vector must be defined as Python functions as discussed above. A basic example is shown below for the vector valued function $f = \begin{bmatrix}
xy \\
y \\
ln(x^y)\\
\end{bmatrix}$ at the points where $x = 1, y = 2$.
```
def f1(x, y):
return x * y
def f2(y):
return y
def f3(x, y):
return rAd_Var.log(x ** y)
rAd_Var.get_jacobian([f1, f2, f3], ["x","y"], [1, 2])
```
In the event that more variables are passed to `get_jacobian` than are used in the functions, their partial derivatives appear in the Jacobian matrix as 0, as shown below with the inclusion of the variable `z`. This gives the user flexibility in adjusting the vector functions to include/exclude a variable where needed. Note: In the arguments passed into the Python functions, the user should only include variables used in the function as arguments (e.g. for `f2`, including only `y` and not both `x` and `y`).
```
def f1(x, y):
return x * y
def f2(y):
return y
def f3(x, y):
return rAd_Var.log(x ** y)
rAd_Var.get_jacobian([f1, f2, f3], ["x","y","z"], [1, 2, 3])
```
The `get_values` method is then used to return an array of computed values for the functions passed into the method.
```
rAd_Var.get_values([f1, f2, f3], ["x","y","z"], [1, 2, 3])
```
### Comparison of `rAd_Var` vs. `Ad_var` in obtaining the Jacobian matrix
While the implementation of the Jacobian matrix for the reverse mode is different, it returns the same result as the forward mode (implemented in the `Ad_Var` class). We recommend using the `rAd_Var` class when the number of inputs is significantly greater than the number of functions, where it will perform more efficiently.
Below we compare the output from the `get_jacobian` method from both methods, based on the more complicated equation $f = \begin{bmatrix}
sin^2(2x) + z^y \\
e^x + z
\end{bmatrix}$ for $x = 1, y = 2, z = 3$, used in the earlier demo.
```
x = Ad_Var(1, np.array([1, 0, 0]))
y = Ad_Var(2, np.array([0, 1, 0]))
z = Ad_Var(3, np.array([0, 0, 1]))
f = np.array([(Ad_Var.sin(2*x))**2 + z**y, Ad_Var.exp(x) + z])
Ad_Var.get_jacobian(f, 2, 3)
def f1(x, z, y):
return rAd_Var.sin(2*x)**2 + z**y
def f2(x, z):
return rAd_Var.exp(x) + z
rAd_Var.get_jacobian([f1, f2], ["x","y","z"], [1, 2, 3])
```
## Software Organization
### Directory structure
Our intended directory structure is as follows:
```
cs207-FinalProject/
ADKit/
test/
test_autodiff.py
test_autodiff_reverse.py
AutoDiff.py
demo/
Forward_Mode_demo1.ipynb
Forward_Mode_demo2.ipynb
Reverse_Demo.ipynb
docs/
documentation.ipynb
Milestone 1.pdf
Milestone 2.ipynb
LICENSE
README.md
requirements.txt
setup.cfg
setup.py
```
### Modules
The primary module is a single `AutoDiff.py` file. Contained within it are two classes - the `Ad_Var` class and `rAd_Var` class.
Instances of these two classes, through interaction with other objects of the same class, are able to compute the value of a function as well as the value of that function's derivative with respect to any input variable. The `AutoDiff` module is powerful enough to handle both forward and reverse mode of Automatic Differentiation of any function comprised of the following elementary functions:
* Fundamental arithmetic operators (addition, subtraction, multiplication, and division)
* Logarithm (of any base)
* Negation
* Exponentiation ($e^x$ for an `Ad_Var` instance $x$)
* Power and root functions ($x^n$ for some real $n$)
* Trigonometric functions ($\sin(x)$, $\cos(x)$, $\tan(x)$)
* Inverse trigonometric functions ($\arcsin(x)$, $\arccos(x)$, $\arctan(x)$)
Each instance of the `Ad_Var` and `rAd_Var` class in the `AutoDiff` module represents the definition of a set of variables at a particular evaluation point. Through manipulations of these instances (either through fundamental arithmetic operations or built-in methods representing additional elementary functions described earlier), a user has the capability of representing any continuous differentiable function, be it scalar or vector. This was shown earlier via the code demo.
The other modules in the package are stored in the `test` folder and make up the test-suite for `AutoDiff.py`, with more details in the *Testing and Coverage* section below.
### Testing and Coverage
In the `test` folder, there are two separate Python modules, `test_autodiff.py` and `test_autodiff_reverse.py`, which together make up the test-suite for `AutoDiff.py`.
`test_autodiff.py` contains tests for the methods in the `Ad_Var` class and `test_autodiff_reverse.py` for the `rAd_Var` class, to ensure that the elementary functions return the desired output. Tests are run using pytest. The tests are linked to Travis CI and CodeCov, which manage continuous integration and code coverage respectively.
### Installation and Distribution of package
The user must ensure that the package's requirement (`numpy`) is installed, installing it manually if necessary.
Following this, the user may install the package through PyPI (see above "How to Install ADKit"), or by cloning the GitHub repository directly.
## Implementation Details
### Class Implementation and Core Attributes
* There are two classes used for forward mode, as well as the extension (reverse mode): `Ad_Var` class and `rAd_Var` class respectively.
* The choice of keeping them as two separate classes reflects the limited reusability of code between the two implementations: the forward mode propagates derivatives with the chain rule as each variable is evaluated, whereas the reverse mode traverses the computational graph in a forward pass, storing the parent-child relationships and the partial derivatives of the variables without applying the chain rule until the reverse pass.
#### `Ad_Var` Class (Forward Mode)
* The `Ad_Var` class represents the variables used in the forward mode of the Automatic Differentiation process. In the case of a single input, the instance should be initialized with `val`, the scalar value of that variable at which both the function and derivative values are calculated (as shown in the demo above)
* In the case of multiple inputs, each input is initialized as an `Ad_Var` object with `val`, the scalar value of that variable, and `ders`, a `numpy` array representing the derivative of that input with respect to each of the variables. An example is shown below:
```
x1 = Ad_Var(1, np.array([1, 0, 0]))
x2 = Ad_Var(2, np.array([0, 1, 0]))
x3 = Ad_Var(3, np.array([0, 0, 1]))
```
* Dunder methods such as `__add__` and `__mul__`, and other elementary functions are implemented under this class. More information on this is covered below in the *Class Methods* section.
* As part of the class methods, we have included two static methods, `get_jacobian` and `get_values`, which respectively compute the Jacobian matrix and an array of function values for an array of `Ad_Var` objects. Also, a static method `grid_eval` is included which evaluates the function and its derivative/gradient/jacobian on a grid of points.
* In our implementation, we also use try-except blocks to catch unexpected input types: for example, if the user initializes the value of an `Ad_Var` instance with a string, which is not a valid input type.
#### `Ad_Var`: Core Attributes
* `_val`: float value, indicating the function value of the Ad_Var object evaluated at the given point
* `_ders` (for single input): float value, indicating the derivative value of Ad_Var object evaluated at the given point
* `_ders` (for multiple inputs): 1-D array of floats, representing the value of the derivatives of the multiple inputs evaluated at the given point
`_val` and `_ders` are made pseudoprivate to prevent users from manually setting function and derivative values outside of instance initialization.
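For example, building on the single-input demo shown earlier, the value and derivative are read back through the accessor methods rather than the pseudoprivate attributes (a minimal sketch, assuming the package is importable as `ADKit`):
```
import numpy as np
from ADKit.AutoDiff import Ad_Var

a = np.pi / 6
x = Ad_Var(a)                # single input: _ders defaults to 1
f = (Ad_Var.sin(2 * x))**2   # f(x) = sin^2(2x)

print(f.get_val())           # function value at x = pi/6
print(f.get_ders())          # derivative value at x = pi/6
```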
#### `rAd_Var` Class (Reverse Mode)
* The `rAd_Var` class represents the variables used in the reverse mode of the Automatic Differentiation process. In the case of a single input, the instance should be initialized with `val`, the scalar value of that variable at which both the function and derivative values are calculated (as shown in the demo above)
* The initialization of an `rAd_Var` instance does not accept a derivative value; the implementation requires that the derivative of the instance is initialized as `None`. The user may still set the derivative manually using the `set_ders` method if they wish to.
* The derivatives of an `rAd_Var` object obtained using the `get_ders` method are returned as a `numpy` array of the partial derivative(s) of the final function with respect to the input variables
* As part of the class methods, we have included two static methods, `get_jacobian` and `get_values`, which respectively compute the Jacobian matrix and an array of function values for an array of `rAd_Var` objects.
* To obtain a Jacobian matrix for vector functions, the user will need to define the functions first before passing them (as Python functions) as arguments for the `get_jacobian` method, together with the variable names and the given values for the inputs, as shown in the demo above.
* This difference in the implementation of the Jacobian matrix is because `rAd_Var` objects can only be defined in the context of one function. Hence, feeding all the functions into the `get_jacobian` method allows for the Jacobian to obtain the respective partial derivatives for each function separately, before combining them and returning them in a single Jacobian matrix.
* In the case of multiple inputs, each input will be initialized as an `rAd_Var` object, with inputs `val`.
#### `rAd_Var`: Core Attributes
* `_val`: float value, indicating the function value of the rAd_Var object evaluated at the given point
* `_ders`: instantiated as `None` and updated via the `get_ders` method, which sets the derivative of the final function (with respect to itself) to 1 and then uses the `get_gradient` helper method to recursively traverse all children of the input variables, updating `_ders` with their partial derivative values
* `parents`: list containing the parent node(s) of the rAd_Var object; initialized as an empty list and populated at every computation (via class methods)
* `children`: list of tuples containing the children node(s) of the rAd_Var object; initialized as an empty list and populated at every computation (via class methods)
* `visited`: boolean value used to track whether a node has been traversed in the reverse pass
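As a small illustration of how these attributes work together in practice (a sketch based on the demo shown earlier; the exact output format may differ):
```
import numpy as np
from ADKit.AutoDiff import rAd_Var

a = np.pi / 6
x = rAd_Var(a)                 # _ders starts as None; parents/children start empty

# the forward pass populates parents/children at every elementary operation
f = (rAd_Var.sin(2 * x))**2

# the reverse pass: get_ders seeds df/df = 1 and recursively accumulates
# the partial derivative(s) of f with respect to the input variable(s)
print(f.get_val())
print(f.get_ders())
```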
### Core Data Structures
In both classes, the following core data structures were used:
* **`numpy` arrays**: 1-D `numpy` arrays will be used to keep the gradient vectors as the entire trace is evaluated. `numpy`
provides vectorized operations which will make the overloading of elementary functions much more efficient for
multivariate functions. If a vector function is provided, 2-D `numpy` arrays will be used to hold the Jacobian matrix.
* **Dictionaries**: In the `Ad_Var` class, dictionaries hold the results of a `grid_eval` call: the keys are the points on the user-defined grid and the values are the function value and its derivative/gradient/Jacobian at each point. In the `rAd_Var` class, dictionaries are used extensively in the `get_jacobian` method to track the variable inputs and the partial derivatives for each variable separately.
* **Lists**: In the `rAd_Var` class, where a node is defined as the point at which the computation of a new `rAd_Var` instance is performed, lists store the parent-child relationships of each node. At every computation of a new `rAd_Var` instance, the new object and the partial derivative of the new function with respect to the input variable are stored as a tuple; this simulates the forward pass of the reverse mode. The tuples in the list are then accessed in the reverse pass via the `get_ders` method.
### External Dependencies
* `numpy` for implementation of the elementary functions (e.g. sin, sqrt, log and exp), by overloading `numpy` implementations for these functions
* `pytest` and `doctest` for testing
* TravisCI and CodeCov used to manage continuous integration and code coverage
### Elementary Functions and Class Methods
While the forward mode and the reverse mode support the same set of basic operations, comparison operators and elementary functions, the way the attributes of the returned object are computed differs substantially.
At each computation, the reverse mode stores the parent-child relationships and the partial derivatives of the variables with respect to one another (without applying the chain rule yet) via the `parents` and `children` attributes covered above. Below are the functions supported by both classes; for details on how the partial derivatives are calculated for `rAd_Var`, refer to the code.
#### Elementary Functions / Operators supported by both classes
* `__add__(self, other)` and `__radd__(self, other)`:
* Other can be a float, int or an `AutoDiff` object
* Returns an `Ad_Var` or `rAd_Var` object when calculating self + other or other + self
* `__sub__(self, other)` and `__rsub__(self, other)`:
* Other can be a float, int or an `AutoDiff` object
* Returns an `Ad_Var` or `rAd_Var` object when calculating self - other or other - self
* `__mul__(self, other)` and `__rmul__(self, other)`:
* Other can be a float, int or an `AutoDiff` object
* Returns an `Ad_Var` or `rAd_Var` object when calculating self * other or other * self
* `__truediv__(self, other)` and `__rtruediv__(self, other)`:
* Other can be a float, int or an `AutoDiff` object
* Returns an `Ad_Var` or `rAd_Var` object when calculating self / other or other / self
* `__pow__(self, other)` and `__rpow__(self, other)`:
* `other` can be a float, int or `Ad_Var` object
* `__rpow__` will require `other` to be a numeric type, otherwise, it will raise a TypeError
* Returns an `Ad_Var` or `rAd_Var` object when calculating self ** other
* `__neg__(self)`:
* Returns an `Ad_Var` or `rAd_Var` object when calculating - self
* `__eq__(self, other)`:
* Returns True if `self._val` == `other._val` and `self._ders` == `other._ders`, returns False otherwise
* `__ne__(self, other)`:
* Returns True if `self._val` != `other._val` or `self._ders` != `other._ders`, returns False otherwise
* `__repr__(self)`:
* Returns a string representing the value of `self._val` (Value) and the value of `self._ders` (Gradient)
* `sqrt(self)`:
* Returns an `Ad_Var` or `rAd_Var` object by calling the __pow__ method using self**0.5
* `exp(self)`:
* `Ad_Var`: Returns an `Ad_Var` object with `self._val = np.exp(self._val)` and `self._ders = np.exp(self._val) * self._ders`
* `rAd_Var`: Returns an `rAd_Var` object with `self._val = np.exp(self._val)` and the `parent` and `children` attributes for both `self` and the new `rAd_Var` object updated accordingly
* `log(self, logbase=np.e)`:
    * Optional argument for `logbase` (can be a float or int). By default, `logbase` is set to `np.e`, i.e. the natural logarithm.
* `Ad_Var`: Returns an `Ad_Var` object with `self._val = np.log(self._val) / np.log(logbase)` and `self._ders = self._ders / (self._val * np.log(logbase))`.
* `rAd_Var`: Returns an `rAd_Var` object with `self._val = np.log(self._val) / np.log(logbase)` and the `parent` and `children` attributes for both `self` and the new `rAd_Var` object updated accordingly
* `sin(self)` and `cos(self)` and `tan(self)`:
* Returns an `Ad_Var` or `rAd_Var` object with the respective class attributes updated accordingly based on the given trigonometric function
* `arcsin(self)` and `arccos(self)` and `arctan(self)`:
* Returns an `Ad_Var` or `rAd_Var` object with respective class attributes updated accordingly based on the given inverse trigonometric function
* `sinh(self)` and `cosh(self)` and `tanh(self)`:
* Returns an `Ad_Var` or `rAd_Var` object with respective class attributes updated accordingly based on the given hyperbolic function
* `logistic(self)`:
* Returns an `Ad_Var` or `rAd_Var` object with respective class attributes updated accordingly based on the logistic (sigmoid) function
* `set_val(self, value)`:
* Set the value of the private attribute `self._val` with `value`
* `set_ders(self, derivatives)`:
* Set the value of the private attribute `self._ders` with `derivatives`
* `get_val(self)`:
* Returns the value of the attribute `self._val`
#### Specific `Ad_Var` class methods
* `__init__(self, val, ders=1)`:
* Sets `self._val` to the argument `val`
* Sets `self._ders` to the argument `ders`
* `get_ders(self)`:
* Returns the value of the attribute `self._ders`
* `get_jacobian(functions_array, functions_dim, vars_dim)`:
* Static method that returns the Jacobian matrix for a given array of `Ad_Var` objects
* `get_values(functions_array)`:
* Static method that returns an array of function values for a given array of `Ad_Var` objects
* `grid_eval(func_string, vars_strings, Ad_Vars_list, grid)`:
* Static method that evaluates a function and its derivative/gradient/jacobian on a grid of points. A dictionary is returned where each key is a point of the grid and the value is a tuple with the first element being the value of the function at this point and second element is the derivative/gradient/jacobian evaluated at this point.
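As a reminder of how `grid_eval` is called (a sketch mirroring the forward-mode demo), the function is passed as a string together with the variable names, the `Ad_Var` objects and the grid of points:
```
import numpy as np
from ADKit.AutoDiff import Ad_Var

x = Ad_Var(ders=np.array([1, 0, 0]))
y = Ad_Var(ders=np.array([0, 1, 0]))
z = Ad_Var(ders=np.array([0, 0, 1]))

# evaluate the function and its gradient on the grid {1, 2} x {2, 3} x {4};
# the result is a dictionary mapping each grid point to (value, gradient)
f_string = "(Ad_Var.sin(2*x))**2 + z**y"
Ad_Var.grid_eval(f_string, ['x', 'y', 'z'], [x, y, z], [[1, 2], [2, 3], [4]])
```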
#### Specific `rAd_Var` class methods
* `__init__(self, val, ders=1)`:
* Sets `self._val` to the argument `val`
* Initializes `self._ders` as `None`
* Initializes `self.parents` as an empty list
* Initializes `self.children` as an empty list
* Initializes `self.visited` as `False`
* `get_ders(self)`:
    * Sets the derivative of the final function (with respect to itself) to 1, then uses the `get_gradient` helper method to recursively traverse all children of the input variables, updating `_ders` with their partial derivative values
* Returns `gradient_matrix`, a `numpy` array consisting of the partial derivatives of input variables used to compute the final function
* `get_jacobian(functions_array, var_list, var_values)`:
* Static method that returns the Jacobian matrix for a vector of Python functions, with given variable names and values for the variables used as arguments in these functions
* Instantiation of `rAd_Var` objects is done within this method based on the functions and variables passed in
* `get_values(functions_array, var_list, var_values)`:
* Static method that returns an array of function values for a vector of Python functions, with given variable names and values for the variables used as arguments in these functions
* Instantiation of `rAd_Var` objects is done within this method based on the functions and variables passed in
## Extension (Reverse Mode)
### Description
As part of an extension from the earlier milestone, we have implemented the reverse mode of Automatic Differentiation using the `rAd_Var` class, with details on the class (data structures, attributes and methods) elaborated and demonstrated above.
In terms of efficiency, the forward mode is preferable when the number of functions to evaluate is much greater than the number of inputs, whereas the reverse mode, which computes a Jacobian-transpose-vector product, is more efficient when the number of inputs is much greater than the number of functions.
Having the `rAd_Var` class allows the user of this package the flexibility to choose between the two modes of automatic differentiation, depending on the vector functions and variables that they will carry out Automatic Differentiation on.
### Implementation Details
Here, instead of propagating the chain rule forwards from the innermost expressions outwards, we begin by constructing an evaluation tree for the entire expression, seed the outermost one with a derivative of one, and work from the outside inwards until we have computed the derivatives for everything down to the input variables. This method of automatic differentiation can be faster in many circumstances, but at the cost of storage.
This space overhead is due to the need to keep track of the evaluation tree; reverse mode would not be possible without being able to traverse the structure of an expression. Each stage in computation must store its children, the intermediate variables that constitute itself, and its parents, any intermediates that make use of it. To this end, we've implemented the `rAd_Var` class, which is largely identical to the forward-mode `Ad_Var` class with several key differences.
Most significantly, `rAd_Var` keeps track of both parents and children, enabling backwards traversal of the evaluation. This means that an individual `rAd_Var` instance is larger than an `Ad_Var` instance, and can be many times larger depending on the structure of the expression being differentiated. In addition, computing the derivatives of an expression necessitates changing each `rAd_Var` object's `_ders` attribute. As a result, `rAd_Var` instances may only be used once: an input variable cannot be reused in two different expressions, even if they have the same value.
### Additional information / background
Unlike the forward mode, the reverse mode of Automatic Differentiation consists of two stages: (1) a forward pass, followed by (2) a reverse pass (also known as backward propagation).
With a function $f$ which takes in $n$ inputs, $x_1, x_2, ..., x_n$ and produces a single output, the reverse mode will return derivatives of the function $\dfrac{\partial f}{\partial x_{i}}$ for all $i$.
The following section references and builds on the [CS207 2019 Lecture 12 materials](https://harvard-iacs.github.io/2019-CS207/lectures/lecture12/notebook/) (Sondak):
**Forward pass**
In the forward pass, the function is evaluated at each elementary function $f_i$ of the entire function $f$. That is, for $i = n + 1, n + 2, ..., N$,
$$x_i = f_i(x_{\pi(i)})$$
where $\pi(i)$ denotes the set of "parents" of $x_i$.
For example, for $x_{3} = x_{1}x_{2}$, the elementary function $f_3$ is a multiplication of two input variables, forming a node in the computational graph with $\pi(3) = (x_1, x_2)$. Importantly, the forward pass stores the partial derivatives of every node with respect to its "parents" $\pi(i)$: in this example, we store $\dfrac{\partial x_{3}}{\partial x_{1}}$ and $\dfrac{\partial x_{3}}{\partial x_{2}}$.
This can be best illustrated using an example where $$ f (x, y) = 2xy - \exp(xy)$$ at the point $x = 1, y = 2$
Here, to generate the forward trace, we calculate the partial derivatives of each node with respect to its parents:
| Node | Elementary Function | Numerical Value | $\partial_{1}$ | $\partial_{1}$ Value | $\partial_{2}$ | $\partial_{2}$ Value |
| :---: | :----------------------------: | :-----------: | :----------------------------: | :-----------------: | :-----------------: | :-----------------: |
| $x_{1}$ | $x$ | $1$ | $1$ | $1$ | $0$ | $0$ |
| $x_{2}$ | $y$ | $2$ | $0$ | $0$ | $1$ | $1$ |
| $x_{3}$ | $x_{1}x_{2}$ | $2$ | $x_{2}$ | $2$ | $x_{1}$ | $1$ |
| $x_{4}$ | $2x_{3}$ | $4$ | $2$ | $2$ | $-$ | $-$ |
| $x_{5}$ | $\exp\left(x_{3}\right)$ | $e^{2}$ | $\exp\left(x_{3}\right)$ | $e^{2}$ | $-$ | $-$ |
| $x_{6}$ | $-x_5$ | $-e^{2}$ | $-1$ | $-1$ | $-$ | $-$ |
| $x_{7}$ | $x_{4} + x_{6}$ | $4 - e^{2}$ | $1$ | $1$ | $1$ | $1$ |
**Reverse pass**
Following the forward pass, the reverse pass starts from the final function to be differentiated, $f_N$, setting $\overline{x}_{N} = \dfrac{\partial f}{\partial x_{N}} = 1$ (since $f = x_{N}$).
Then, using the chain rule, it traverses the computational graph to obtain values for the partial derivative of every variable in the computational graph, $x_i$:
\begin{align}
\overline{x}_{i} = \dfrac{\partial f}{\partial x_{i}} = \sum_{\text{j a child of i}}{\dfrac{\partial f}{\partial x_{j}}\dfrac{\partial x_{j}}{\partial x_{i}}}.
\end{align}
This is done recursively until the partial derivatives of the function with respect to the $n$ inputs, $x_1, x_2, ..., x_n$ are computed.
Using the same example, we recursively go through every variable $x_i$ in the computational trace shown above. The computation of the gradient of each variable accesses the partial derivatives of its children with respect to it, which were computed and stored during the forward pass.
$$\overline{x}_{7} = \dfrac{\partial f}{\partial x_{7}} = 1$$
$$\overline{x}_{6} = \dfrac{\partial f}{\partial x_{7}}\dfrac{\partial x_{7}}{\partial x_{6}} = 1 \cdot 1 = 1$$
$$\overline{x}_{5} = \dfrac{\partial f}{\partial x_{6}}\dfrac{\partial x_{6}}{\partial x_{5}} = 1 \cdot (-1) = -1$$
$$\overline{x}_{4} = \dfrac{\partial f}{\partial x_{7}}\dfrac{\partial x_{7}}{\partial x_{4}} = 1 \cdot 1 = 1$$
$$\overline{x}_{3} = \dfrac{\partial f}{\partial x_{5}}\dfrac{\partial x_{5}}{\partial x_{3}} + \dfrac{\partial f}{\partial x_{4}}\dfrac{\partial x_{4}}{\partial x_{3}}= (-1 \cdot e^{2}) + (1\cdot 2) = 2 - e^{2}$$
$$\overline{x}_{2} = \dfrac{\partial f}{\partial x_{3}}\dfrac{\partial x_{3}}{\partial x_{2}} = (2 - e^{2}) \cdot 1 = 2 - e^{2}$$
$$\overline{x}_{1} = \dfrac{\partial f}{\partial x_{3}}\dfrac{\partial x_{3}}{\partial x_{1}} = (2 - e^{2}) \cdot 2 = 4 - 2e^{2}$$
This gives us the gradient of:
\begin{align}
\nabla f &=
\begin{bmatrix}
4 - 2e^{2} \\
2 - e^{2}
\end{bmatrix}
\end{align}
which is identical to what we would have calculated using symbolic differentiation for the function $f (x, y) = 2xy - \exp(xy)$:
$$\nabla f = \begin{bmatrix} 2y - \exp\left(xy\right)y \\ 2x - \exp\left(xy\right)x \end{bmatrix}$$
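As a quick sanity check, the same gradient can be reproduced with the `rAd_Var.get_jacobian` interface demonstrated earlier (a sketch; the output formatting may differ slightly):
```
from ADKit.AutoDiff import rAd_Var

def f(x, y):
    return 2 * x * y - rAd_Var.exp(x * y)

# a 1 x 2 Jacobian holding [df/dx, df/dy] evaluated at x = 1, y = 2;
# the expected values are 4 - 2e^2 and 2 - e^2, matching the reverse pass above
rAd_Var.get_jacobian([f], ["x", "y"], [1, 2])
```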
The reverse mode of Automatic Differentiation is supported by the `rAd_Var` class in the `ADKit` package. Further details on using the class are covered in the **How to use ADKit (Reverse Mode)** section of the Documentation.
#### References
* [A Hitchhiker’s Guide to Automatic Differentiation](https://link.springer.com/article/10.1007/s11075-015-0067-6)
* [A simple explanation of reverse-mode automatic differentiation](https://justindomke.wordpress.com/2009/03/24/a-simple-explanation-of-reverse-mode-automatic-differentiation/)
* Harvard CS207 2019 course materials
# Infinite Plate Capacitor
## Import modules
```
import os
import numpy as np
import KUEM as EM
import matplotlib.pyplot as plt
plt.close("all")
```
## Setup constants and settings
```
# Constants for J
SurfaceChargeDensity = 1
d = 0.2
# Grid constants
N = np.array([1, 1, 10000], dtype = int)
delta_x = np.array([2, 2, 2])
x0 = np.array([-1, -1, -1])
Boundaries = ["periodic", "periodic", ["closed", "flat"]]
# Evaluation constants
Exact = True
Progress = 5
approx_n = 0.1
# Plotting settings
PlotVector = True
PlotStreams = False
StreamDensity = 2
StreamLength = 1
# File names
FilePos = "InfinitePlateCapacitor/"
Name_E_2D = "ExInfinitePlateCapacitorE.png"
Name_V_1D = "ExInfinitePlateCapacitorV.png"
Save = True
```
## Create the J function
```
# Define the charge
def J(dx, N, x0, c, mu0):
# Create grid
Grid = np.zeros(tuple(N) + (4,))
# Add in the charge, normalising so the charge is the same no matter the grid size
Grid[:, :, int(N[2] * (1 + d) / 2), 0] = -c * SurfaceChargeDensity / dx[2]
Grid[:, :, int(N[2] * (1 - d) / 2), 0] = c * SurfaceChargeDensity / dx[2]
# Turn into a vector
J_Vector = EM.to_vector(Grid, N)
# Return the vector
def get_J(t):
return J_Vector
return get_J
```
## Setup the simulation
```
# Setup the simulation
Sim = EM.sim(N, delta_x = delta_x, x0 = x0, approx_n = approx_n, J = J, boundaries = Boundaries)
```
## Define the samplers
```
# Define hat vectors
x_hat = np.array([1, 0, 0])
y_hat = np.array([0, 0, 1])
# Define the resolutions
Res_line = 1000
Res_vector = 30
# Define extents
extent = [0, delta_x[2], 0, delta_x[2]]
PointsSize = np.array([delta_x[2], delta_x[2]])
x1 = np.array([0, 0, -delta_x[2] / 2])
x2 = np.array([0, 0, delta_x[2] / 2])
# Get grid points
Points_line = EM.sample_points_line(x1, x2, Res_line)
Points_vector = EM.sample_points_plane(x_hat, y_hat, np.array([0, 0, 0]), PointsSize, np.array([Res_vector, Res_vector]))
# Setup samplers
Sampler_E_2D = EM.sampler_E_vector(Sim, Points_vector, x_hat, y_hat)
Sampler_V_1D = EM.sampler_V_line(Sim, Points_line)
```
## Simulate
```
# Solve the statics problem
print("Solving")
StaticTime = Sim.solve(exact = Exact, progress = Progress)
print(f"Solved in {StaticTime:.2g} s")
```
## Create images
```
# Create the images
if Save is True and not os.path.exists(FilePos):
os.mkdir(FilePos)
fig_E_2D, _, _ = Sampler_E_2D.plot(0, extent = extent, cutoff = 0.01, use_vector = PlotVector, use_streams = PlotStreams, density = StreamDensity, length = StreamLength)
if Save is True:
fig_E_2D.savefig(FilePos + Name_E_2D)
fig_V_1D, _, _ = Sampler_V_1D.plot(0)
if Save is True:
fig_V_1D.savefig(FilePos + Name_V_1D)
```
```
import starepandas
import shapely
import matplotlib.pyplot as plt
polygon = shapely.geometry.Polygon([[102.1, 33.1],
[101.1, 35.1],
[102.1, 35.1],
[104.1, 33.1],
[102.1, 33.1]])
indices = starepandas.from_polygon(polygon, level=8, force_ccw=True)
indices
starepandas.to_trixels(indices, as_multipolygon=True)
geom = shapely.wkt.loads('''POLYGON ((61.21081709172574 35.65007233330923,
62.23065148300589 35.27066396742229,
71.34813113799026 38.25890534113216,
72.92002485544447 36.72000702569632,
69.31776411324256 31.90141225842444,
68.92667687365767 31.62018911389207,
66.34647260932442 29.88794342703618,
65.04686201361611 29.56003062592809,
64.14800215033125 29.34081920014597,
63.55026085801117 29.46833079682617,
61.69931440618083 31.37950613049267,
60.96370039250601 33.52883230237626,
60.80319339380745 34.40410187431986,
61.21081709172574 35.65007233330923))''')
indices = starepandas.from_polygon(geom, level=10, force_ccw=True)
fig, ax = plt.subplots()
ax.grid(True)
trixels = starepandas.to_trixels(indices, as_multipolygon=False)
for triangle in trixels:
ax.plot(*triangle.exterior.xy, color='y')
ax.plot(*geom.exterior.xy, marker='o', color='b')
fig, ax = plt.subplots()
ax.grid(True)
pt = shapely.geometry.Point(66, 34.3)
pt_stare5 = starepandas.from_shapely(pt, 5)
pt_stare6 = starepandas.from_shapely(pt, 6)
pt_stare7 = starepandas.from_shapely(pt, 7)
pt_trixel5 = starepandas.to_trixels(pt_stare5)
pt_trixel6 = starepandas.to_trixels(pt_stare6)
pt_trixel7 = starepandas.to_trixels(pt_stare7)
index_ranges = starepandas.from_polygon(geom, level=7, force_ccw=True)
triangles = starepandas.to_trixels(index_ranges)
for triangle in triangles:
ax.plot(*triangle.exterior.xy, color='y', zorder=0)
ax.plot(*geom.exterior.coords.xy, marker='o', zorder=1)
ax.plot(pt.x, pt.y, marker='*', color='r', zorder=1)
ax.plot(*pt_trixel6.exterior.coords.xy, color='m', zorder=2)
ax.plot(*pt_trixel7.exterior.coords.xy, color='r', zorder=3)
#ax.plot(*pt_trixel5.exterior.coords.xy, color='g', zorder=2)
intersection_trixel = starepandas.to_trixels([4063372763795030021])
ax.plot(*intersection_trixel.exterior.xy, color='green',
linewidth=3)
print(hex(4063511879588628615), hex(4063372763795030021))
```
## Intersection
```
import pystare
fig, ax = plt.subplots()
ax.grid(True)
polygon1 = shapely.geometry.Polygon([[102, 33], [101, 35], [105, 34], [104, 33], [102, 33]])
polygon2 = shapely.geometry.Polygon([[102, 34], [106, 35], [106, 33], [102, 33.5], [102, 34]])
range_indices1 = starepandas.from_polygon(polygon1, level=10, force_ccw=True)
range_indices2 = starepandas.from_polygon(polygon2, level=10, force_ccw=True)
triangles1 = starepandas.to_trixels(range_indices1)
triangles2 = starepandas.to_trixels(range_indices2)
for triangle in triangles1:
ax.plot(*triangle.exterior.xy, color='blue', linewidth=0.5)
for triangle in triangles2:
ax.plot(*triangle.exterior.xy, color='red', linewidth=0.5)
ax.plot(*polygon1.exterior.xy, marker='o', linewidth=2, color='blue')
ax.plot(*polygon2.exterior.xy, marker='o', linewidth=2, color='red')
intersect = pystare.intersect(range_indices1, range_indices2)
triangles3 = starepandas.to_trixels(intersect)
for triangle in triangles3:
ax.plot(*triangle.exterior.xy, color='green', linewidth=1)
# df2 is only created in the "High Level" section below; overlay its trixels after running that cell:
# df2.set_trixels().plot(ax=ax, trixels=True)
```
### High Level
```
polygon1 = shapely.geometry.Polygon([[102, 33], [101, 35], [105, 34], [104, 33], [102, 33]])
polygon2 = shapely.geometry.Polygon([[102, 34], [106, 35], [106, 33], [102, 33.5], [102, 34]])
sids1 = starepandas.from_polygon(polygon1, level=5, force_ccw=True)
sids2 = starepandas.from_polygon(polygon2, level=5, force_ccw=True)
df = starepandas.STAREDataFrame(stare=[sids1])
intersection = df.stare_intersection(sids2)
df2 = starepandas.STAREDataFrame(stare=intersection)
df2.set_trixels().plot(trixels=True)
intersection.iloc[0]
```
# Tutorial: Linear Programming, (CPLEX Part 1)
This notebook gives an overview of Linear Programming (or LP). After completing this unit, you should be able to
- describe the characteristics of an LP in terms of the objective, decision variables and constraints,
- formulate a simple LP model on paper,
- conceptually explain some standard terms related to LP, such as dual, feasible region, infeasible, unbounded, slack, reduced cost, and degenerate.
You should also be able to describe some of the algorithms used to solve LPs, explain what presolve does, and recognize the elements of an LP in a basic DOcplex model.
>This notebook is part of [Prescriptive Analytics for Python](http://ibmdecisionoptimization.github.io/docplex-doc/).
>It requires a valid subscription to **Decision Optimization on Cloud** or a **local installation of CPLEX Optimizers**.
Discover us [here](https://developer.ibm.com/docloud).
Table of contents:
* [Introduction to Linear Programming](#Introduction-to-Linear-Programming)
* [Example: a production problem](#Example:-a-production-problem)
* [CPLEX Modeling for Python](#Use-IBM-Decision-Optimization-CPLEX-Modeling-for-Python)
* [Algorithms for solving LPs](#Algorithms-for-solving-LPs)
* [Summary](#Summary)
* [References](#References)
# Introduction to Linear Programming
In this topic, you’ll learn what the basic characteristics of a linear program are.
## What is Linear Programming?
Linear programming deals with the maximization (or minimization) of a linear objective function, subject to linear constraints, where all the decision variables are continuous. That is, no discrete variables are allowed. The linear objective and constraints must consist of linear expressions.
## What is a linear expression?
A linear expression is a scalar product, for example, the expression:
$$
\sum{a_i x_i}
$$
where $a_i$ represents constants (that is, data) and $x_i$ represents variables or unknowns.
Such an expression can also be written in short form as a vector product:
$$^{t}A X
$$
where $A$ is the vector of constants and $X$ is the vector of variables.
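As a tiny numerical illustration, evaluating such an expression at a particular point is just a dot product (a sketch with made-up numbers):
```
import numpy as np

a = np.array([2.0, -1.0, 3.0])   # the constants a_i (data)
x = np.array([1.0, 4.0, 0.5])    # one particular assignment of the variables x_i
print(a @ x)                     # sum(a_i * x_i) = 2 - 4 + 1.5 = -0.5
```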
*Note*: Nonlinear terms that involve variables (such as x and y) are not allowed in linear expressions.
Terms that are not allowed in linear expressions include
- multiplication of two or more variables (such as x times y),
- quadratic and higher order terms (such as x squared or x cubed),
- exponents,
- logarithms,
- absolute values.
## What is a linear constraint?
A linear constraint is expressed by an equality or inequality as follows:
- $linear\_expression = linear\_expression$
- $linear\_expression \le linear\_expression$
- $linear\_expression \ge linear\_expression$
Any linear constraint can be rewritten as one or two expressions of the type linear expression is less than or equal to zero.
Note that *strict* inequality operators (that is, $>$ and $<$) are not allowed in linear constraints.
## What is a continuous variable?
A variable (or _decision_ variable) is an unknown of the problem. Continuous variables are variables whose values may be any real number (or any value within an interval).
Restrictions on their values that create discontinuities, for example a restriction that a variable should take integer values, are not allowed.
## Symbolic representation of an LP
A typical symbolic representation of a Linear Program is as follows:
$
minimize \sum c_{i} x_{i}\\
\\
subject\ to:\\
\ a_{11}x_{1} + a_{12} x_{2} ... + a_{1n} x_{n} \ge b_{1}\\
\ a_{21}x_{1} + a_{22} x_{2} ... + a_{2n} x_{n} \ge b_{2}\\
...
\ a_{m1}x_{1} + a_{m2} x_{2} ... + a_{mn} x_{n} \ge b_{m}\\
x_{1}, x_{2}...x_{n} \ge 0
$
This can be written in a concise form using matrices and vectors as:
$
min\ C^{t}x\\
s.\ t.\ Ax \ge B\\
x \ge 0
$
Where $x$ denotes the vector of variables with size $n$, $A$ denotes the matrix of constraint coefficients, with $m$ rows and $n$ columns and $B$ is a vector of numbers with size $m$.
## Characteristics of a linear program
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/1.png?raw=true" >
</ul>
# Example: a production problem
In this topic, you’ll analyze a simple production problem in terms of decision variables, the objective function, and constraints.
You’ll learn how to write an LP formulation of this problem, and how to construct a graphical representation of the model. You’ll also learn what feasible, optimal, infeasible, and unbounded mean in the context of LP.
## Problem description: telephone production
A telephone company produces and sells two kinds of telephones, namely desk phones and cellular phones.
Each type of phone is assembled and painted by the company: a desk phone requires 12 minutes of assembly time and 30 minutes of painting time, while a cellular phone requires 24 minutes of assembly time and 24 minutes of painting time. Selling a desk phone yields a profit of 12 and selling a cellular phone a profit of 20. The objective is to maximize profit, and the company has to produce at least 100 of each type of phone.
There are limits on the company's production capacity (400 hours of assembly time and 490 hours of painting time are available), and the company has to calculate the optimal number of each type of phone to produce, while not exceeding the capacity of the plant.
## Writing a descriptive model
It is good practice to start with a descriptive model before attempting to write a mathematical model. In order to come up with a descriptive model, you should consider what the decision variables, objectives, and constraints for the business problem are, and write these down in words.
In order to come up with a descriptive model, consider the following questions:
- What are the decision variables?
- What is the objective?
- What are the constraints?
## Telephone production: a descriptive model
A possible descriptive model of the telephone production problem is as follows:
- Decision variables:
- Number of desk phones produced (DeskProduction)
- Number of cellular phones produced (CellProduction)
- Objective: Maximize profit
- Constraints:
1. The DeskProduction should be greater than or equal to 100.
2. The CellProduction should be greater than or equal to 100.
3. The assembly time for DeskProduction plus the assembly time for CellProduction should not exceed 400 hours.
4. The painting time for DeskProduction plus the painting time for CellProduction should not exceed 490 hours.
## Writing a mathematical model
Convert the descriptive model into a mathematical model:
- Use the two decision variables DeskProduction and CellProduction
- Use the data given in the problem description (remember to convert minutes to hours where appropriate)
- Write the objective as a mathematical expression
- Write the constraints as mathematical expressions (use “=”, “<=”, or “>=”, and name the constraints to describe their purpose)
- Define the domain for the decision variables
### Telephone production: a mathematical model
To express the last two constraints, we model assembly time and painting time as linear combinations of the two productions, resulting in the following mathematical model:
$
maximize:\\
\ \ 12\ desk\_production + 20\ cell\_production\\
subject\ to: \\
\ \ desk\_production >= 100 \\
\ \ cell\_production >= 100 \\
\ \ 0.2\ desk\_production + 0.4\ cell\_production <= 400 \\
\ \ 0.5\ desk\_production + 0.4\ cell\_production <= 490 \\
$
### Using DOcplex to formulate the mathematical model in Python
Use the [DOcplex](http://ibmdecisionoptimization.github.io/docplex-doc/) Python library to write the mathematical model in Python.
This is done in four steps:
- create an instance of docplex.mp.Model to hold all model objects
- create decision variables,
- create linear constraints,
- finally, define the objective.
But first, we have to import the class `Model` from the docplex module.
## Use IBM Decision Optimization CPLEX Modeling for Python
Let's use the DOcplex Python library to write the mathematical model in Python.
### Step 1: Download the library
First install *docplex* if needed.
```
import sys
try:
import docplex.mp
except:
if hasattr(sys, 'real_prefix'):
#we are in a virtual env.
!pip install docplex
else:
!pip install --user docplex
```
### Step 2: Set up the prescriptive engine
* Subscribe to our private cloud offer or Decision Optimization on Cloud solve service [here](https://developer.ibm.com/docloud) if you do not want to use a local solver.
* Get the service URL and your personal API key and enter your credentials here if accurate:
```
url = None
key = None
```
### Step 3: Set up the prescriptive model
#### Create the model
All objects of the model belong to one model instance.
```
# first import the Model class from docplex.mp
from docplex.mp.model import Model
# create one model instance, with a name
m = Model(name='telephone_production')
```
#### Define the decision variables
- The continuous variable `desk` represents the production of desk telephones.
- The continuous variable `cell` represents the production of cell phones.
```
# by default, all variables in Docplex have a lower bound of 0 and infinite upper bound
desk = m.continuous_var(name='desk')
cell = m.continuous_var(name='cell')
```
#### Set up the constraints
- Desk and cell phone production must both be greater than or equal to 100
- Assembly time is limited
- Painting time is limited.
```
# write constraints
# constraint #1: desk production is greater than 100
m.add_constraint(desk >= 100)
# constraint #2: cell production is greater than 100
m.add_constraint(cell >= 100)
# constraint #3: assembly time limit
ct_assembly = m.add_constraint( 0.2 * desk + 0.4 * cell <= 400)
# constraint #4: painting time limit
ct_painting = m.add_constraint( 0.5 * desk + 0.4 * cell <= 490)
```
#### Express the objective
We want to maximize the expected revenue.
```
m.maximize(12 * desk + 20 * cell)
```
A few remarks about how we formulated the mathematical model in Python using DOcplex:
- all arithmetic operations (+, \*, \-) are done using Python operators
- comparison operators used in writing linear constraint use Python comparison operators too.
#### Print information about the model
We can print information about the model to see how many objects of each type it holds:
```
m.print_information()
```
### Graphical representation of a Linear Problem
A simple 2-dimensional LP (with 2 decision variables) can be represented graphically using an x- and y-axis.
This is often done to demonstrate optimization concepts.
To do this, follow these steps:
- Assign one variable to the x-axis and the other to the y-axis.
- Draw each of the constraints as you would draw any line in 2 dimensions.
- Use the signs of the constraints (=, <= or >=) to determine which side of each line falls within the feasible region (allowable solutions).
- Draw the objective function as you would draw any line in 2 dimensions, by substituting any value for the objective (for example, 12 * DeskProduction + 20 * CellProduction = 4000)
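For instance, the telephone production model defined above can be drawn with a few lines of `matplotlib` (an illustrative sketch using the constraint data from the model; it is not required for the rest of the notebook):
```
import numpy as np
import matplotlib.pyplot as plt

d = np.linspace(0, 1200, 200)   # desk production on the x-axis

fig, ax = plt.subplots()
# assembly limit: 0.2*desk + 0.4*cell <= 400 => cell <= (400 - 0.2*desk) / 0.4
ax.plot(d, (400 - 0.2 * d) / 0.4, label='assembly time limit')
# painting limit: 0.5*desk + 0.4*cell <= 490 => cell <= (490 - 0.5*desk) / 0.4
ax.plot(d, (490 - 0.5 * d) / 0.4, label='painting time limit')
# minimum production levels
ax.axvline(100, linestyle='--', color='grey', label='desk >= 100')
ax.axhline(100, linestyle=':', color='grey', label='cell >= 100')
# one isoprofit line, obtained by fixing the objective value at 4000
ax.plot(d, (4000 - 12 * d) / 20, color='red', label='12*desk + 20*cell = 4000')
ax.set_xlabel('desk production')
ax.set_ylabel('cell production')
ax.set_xlim(0, 1200)
ax.set_ylim(0, 1400)
ax.legend()
plt.show()
```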
#### Feasible set of solutions
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/19.png?raw=true" >
</ul>
This graphic shows the feasible region for the telephone problem.
Recall that the feasible region of an LP is the region delimited by the constraints, and it represents all feasible solutions. In this graphic, the variables DeskProduction and CellProduction are abbreviated to desk and cell. Look at this diagram and try to identify intuitively the optimal solution, that is, the combination of desk and cell phones that will yield the highest profit.
#### The optimal solution
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/20.png?raw=true" >
</ul>
To find the optimal solution to the LP, you must find values for the decision variables, within the feasible region, that maximize profit as defined by the objective function. In this problem, the objective function is to maximize
$$12 * desk + 20 * cell
$$
To do this, first draw a line representing the objective by substituting a value for the objective.
Next move the line up (because this is a maximization problem) to find the point where the line last touches the feasible region. Note that all the solutions on one objective line, such as AB, yield the same objective value. Other values of the objective will be found along parallel lines (such as line CD).
In a profit maximizing problem such as this one, these parallel lines are often called isoprofit lines, because all the points along such a line represent the same profit. In a cost minimization problem, they are known as isocost lines. Since all isoprofit lines have the same slope, you can find all other isoprofit lines by pushing the objective value further out, moving in parallel, until the isoprofit lines no longer intersect the feasible region. The last isoprofit line that touches the feasible region defines the largest (therefore maximum) possible value of the objective function. In the case of the telephone production problem, this is found along line EF.
The optimal solution of a linear program always belongs to an extreme point of the feasible region (that is, at a vertex or an edge).
### Solve with the Decision Optimization solve service
If `url` and `key` are None, the Modeling layer will look for a local runtime; otherwise it will use the given credentials.
Look at the documentation for a good understanding of the various solving/generation modes.
If you're using a Community Edition of CPLEX runtimes, depending on the size of the problem, the solve stage may fail and will need a paying subscription or product installation.
In any case, `Model.solve()` returns a solution object in Python, containing the optimal values of decision variables, if the solve succeeds, or else it returns `None`.
```
s = m.solve(url=url, key=key)
m.print_solution()
```
In this case, CPLEX has found an optimal solution at (300, 850). You can check that this point is indeed an extreme point of the feasible region.
### Multiple Optimal Solutions
It is possible that an LP has multiple optimal solutions.
At least one optimal solution will be at a vertex.
By default, the CPLEX® Optimizer reports the first optimal solution found.
#### Example of multiple optimal solutions
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/22.png?raw=true" >
</ul>
This graphic shows an example of an LP with multiple optimal solutions. This can happen when the slope of the objective function is the same as the slope of one of the constraints, in this case line AB. All the points on line AB are optimal solutions, with the same objective value, because they are all extreme points within the feasible region.
### Binding and nonbinding constraints
A constraint is binding if the constraint becomes an equality when the solution values are substituted.
Graphically, binding constraints are constraints where the optimal solution lies exactly on the line representing that constraint.
In the telephone production problem, the constraint limiting time on the assembly machine is binding:
$$
0.2\ desk + 0.4\ cell <= 400\\
desk = 300,\ cell = 850\\
0.2(300) + 0.4(850) = 400
$$
The same is true for the time limit on the painting machine:
$$
0.5\ desk + 0.4\ cell <= 490\\
0.5(300) + 0.4(850) = 490
$$
On the other hand, the requirement that at least 100 of each telephone type be produced is nonbinding because the left and right hand sides are not equal:
$$
desk >= 100\\
300 \neq 100
$$
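One simple way to check this programmatically is to substitute the optimal values back into each left-hand side (a sketch; it assumes the model above has been solved, and uses the `solution_value` attribute of the decision variables):
```
desk_val = desk.solution_value   # 300 in the solution found above
cell_val = cell.solution_value   # 850 in the solution found above

# binding constraints: left-hand side equals the right-hand side at the optimum
print(0.2 * desk_val + 0.4 * cell_val)   # 400.0 -> assembly constraint is binding
print(0.5 * desk_val + 0.4 * cell_val)   # 490.0 -> painting constraint is binding

# nonbinding constraint: the inequality holds strictly at the optimum
print(desk_val >= 100, desk_val == 100)  # True False -> lower bound is nonbinding
```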
### Infeasibility
A model is infeasible when no solution exists that satisfies all the constraints. This may be because:
- the model formulation is incorrect,
- the data is incorrect, or
- the model and data are correct, but represent a real-world conflict in the system being modeled.
When faced with an infeasible model, it's not always easy to identify the source of the infeasibility.
DOcplex helps you identify potential causes of infeasibilities, and it will also suggest changes to make the model feasible.
#### An example of infeasible problem
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/26.png?raw=true" >
</ul>
This graphic shows an example of an infeasible constraint set for the telephone production problem. Assume in this case that the person entering data had accidentally entered lower bounds on the production of 1100 instead of 100. The arrows show the direction of the feasible region with respect to each constraint. This data entry error moves the lower bounds on production higher than the upper bounds from the assembly and painting constraints, meaning that the feasible region is empty and there are no possible solutions.
#### Infeasible models in DOcplex
Calling `solve()` on an infeasible model returns `None`. Let's experiment with this in DOcplex. First, we take a copy of our model and add an extra, infeasible constraint stating that desk telephone production must be at least 1100.
```
# create a new model, copy of m
im = m.copy()
# get the 'desk' variable of the new model from its name
idesk = im.get_var_by_name('desk')
# add a new (infeasible) constraint
im.add_constraint(idesk >= 1100);
# solve the new problem; we expect a result of None as the model is now infeasible
ims = im.solve(url=url, key=key)
if ims is None:
print('- model is infeasible')
```
### Correcting infeasible models
To correct an infeasible model, you must use your knowledge of the real-world situation you are modeling.
If you know that the model is realizable, you can usually manually construct an example of a feasible solution and use it to determine where your model or data is incorrect. For example, the telephone production manager may input the previous month's production figures as a solution to the model and discover that they violate the erroneously entered bounds of 1100.
DOcplex can help perform infeasibility analysis, which can get very complicated in large models. In this analysis, DOcplex may suggest relaxing one or more constraints.
### Relaxing constraints by changing the model
In the case of LP models, the term “relaxation” refers to changing the right hand side of the constraint to allow some violation of the original constraint.
For example, a relaxation of the assembly time constraint is as follows:
$$
0.2 \ desk + 0.4\ cell <= 440
$$
Here, the right hand side has been relaxed from 400 to 440, meaning that you allow more time for assembly than originally planned.
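Since DOcplex constraints expose their right-hand side, this kind of relaxation can be applied directly to the model built above (a sketch; note that the soft-constraint version further below resets this right-hand side anyway):
```
# relax the assembly capacity from 400 to 440 hours by editing the constraint in place
ct_assembly.rhs = 440
```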
#### Relaxing model by converting hard constraints to soft constraints
- A _soft_ constraint is a constraint that can be violated in some circumstances.
- A _hard_ constraint cannot be violated under any circumstances. So far, all constraints we have encountered are hard constraints.
Converting hard constraints to soft is one way to resolve infeasibilities.
The original hard constraint on assembly time is as follows:
$$
0.2 \ desk + 0.4 \ cell <= 400
$$
You can turn this into a soft constraint if you know that, for example, an additional 40 hours of overtime are available at an additional cost. First add an overtime term to the right-hand side:
$$
0.2 \ desk + 0.4 \ cell <= 400 + overtime
$$
Next, add a hard limit to the amount of overtime available:
$$
overtime <= 40
$$
Finally, add an additional cost to the objective to penalize use of overtime.
Assume that in this case overtime costs an additional $2/hour, then the new objective becomes:
$$
maximize\ 12 * desk + 20 * cell - 2 * overtime
$$
#### Implement the soft constraint model using DOcplex
First, add an extra variable for overtime, with an upper bound of 40. This suffices to express the hard limit on overtime.
```
overtime = m.continuous_var(name='overtime', ub=40)
```
Modify the assembly time constraint by adding `overtime` to its right-hand side.
*Note*: this operation modifies the model by performing a _side-effect_ on the constraint object. DOcplex allows dynamic edition of model elements.
```
ct_assembly.rhs = 400 + overtime
```
Last, modify the objective expression to add the penalization term.
Note that the overtime penalty is simply subtracted from the objective expression using the Python `-` operator.
```
m.maximize(12*desk + 20 * cell - 2 * overtime)
```
And solve again using DOcplex:
```
s2 = m.solve(url=url, key=key)
m.print_solution()
```
### Unbounded Variable vs. Unbounded model
A variable is unbounded when one or both of its bounds is infinite.
A model is unbounded when its objective value can be increased or decreased without limit.
The fact that a variable is unbounded does not necessarily influence the solvability of the model and should not be confused with a model being unbounded.
An unbounded model is almost certainly not correctly formulated.
While infeasibility implies a model where constraints are too limiting, unboundedness implies a model where an important constraint is either missing or not restrictive enough.
By default, DOcplex variables are unbounded: their upper bound is infinite (but their lower bound is zero).
#### Unbounded feasible region
The telephone production problem would become unbounded if, for example, the constraints on the assembly and painting time were neglected. The feasible region would then look as in this diagram where the objective value can increase without limit, up to infinity, because there is no upper boundary to the region.
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/32.png?raw=true" >
</ul>
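As a small experiment of my own (not in the original tutorial), you can rebuild the telephone model without the time limits and observe that DOcplex returns no solution; this sketch reuses the `Model` class and the `url`/`key` variables from the earlier cells, and the exact status text reported for unboundedness may differ by engine version:
```
# Hypothetical check: the model without assembly/painting limits is unbounded.
um = Model(name='telephone_unbounded')
udesk = um.continuous_var(name='desk')
ucell = um.continuous_var(name='cell')
um.add_constraint(udesk >= 100)
um.add_constraint(ucell >= 100)
um.maximize(12 * udesk + 20 * ucell)  # no assembly or painting constraint
us = um.solve(url=url, key=key)
print(us)                        # expected: None, since no finite optimum exists
print(um.solve_details.status)   # status text should indicate an unbounded (or infeasible-or-unbounded) problem
```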
## Algorithms for solving LPs
The IBM® CPLEX® Optimizers for solving LP problems include:
- Simplex Optimizer
- Dual-simplex Optimizer
- Barrier Optimizer
### The simplex algorithm
The simplex algorithm, developed by George Dantzig in 1947, was the first generalized algorithm for solving LP problems. It is the basis of many optimization algorithms. The simplex method is an iterative method. It starts with an initial feasible solution, and then tests to see if it can improve the result of the objective function. It continues until the objective function cannot be further improved.
The following diagram illustrates how the simplex algorithm traverses the boundary of the feasible region for the telephone production problem. The algorithm starts somewhere along the edge of the shaded feasible region and advances vertex-by-vertex until arriving at the vertex that also intersects the optimal objective line. Assume it starts at the red dot indicated on the diagram.
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/36.png?raw=true" >
</ul>
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/37.png?raw=true" >
</ul>
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/38.png?raw=true" >
</ul>
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/39.png?raw=true" >
</ul>
### The revised simplex algorithm
To improve the efficiency of the simplex algorithm, George Dantzig and W. Orchard-Hays revised it in 1953. CPLEX uses the revised simplex algorithm, with a number of improvements. The CPLEX Optimizers are particularly efficient and can solve very large problems rapidly. You can tune some CPLEX Optimizer parameters to change the algorithmic behavior according to your needs.
### The dual simplex algorithm
#### The dual of a LP
The concept of duality is important in linear programming. Every LP problem has an associated LP problem known as its _dual_. The dual of this associated problem is the original LP problem (known as the primal problem). If the primal problem is a minimization problem, then the dual problem is a maximization problem and vice versa.
#### A primal-dual pair
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/42.png?raw=true" >
</ul>
*Primal (P)*: $max\ z=\sum_{j} c_{j}x_{j}$
*Dual (D)*: $min\ w= \sum_{i}b_{i}y_{i}$
- Each constraint $i$ of the primal has an associated dual variable, $y_{i}$.
- Any feasible solution to D is an upper bound to P, and any feasible solution to P is a lower bound to D.
- In LP, the optimal objective values of D and P are equal, and the optimum occurs where these bounds meet.
- The dual can help solve difficult primal problems by providing a bound that in the best case equals the optimal solution to the primal problem.
#### Dual prices
In any solution to the dual, the values of the dual variables are known as the dual prices, also called shadow prices.
For each constraint in the primal problem, its associated dual price indicates how much the optimal objective will change with a unit change in the right hand side of the constraint.
The dual price of a non-binding constraint is zero. That is, changing the right hand side of the constraint will not affect the objective value.
The dual price of a binding constraint can help you make decisions regarding the constraint.
For example, the dual price of a binding resource constraint can be used to determine whether more of the resource should be purchased or not.
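For instance, DOcplex exposes dual values on linear constraints after a successful LP solve. The following is a sketch of mine (not from the original text), assuming the `ct_assembly` and `ct_painting` constraints defined earlier in the notebook:
```
# Read the dual (shadow) prices of the two time constraints after solving.
print('* assembly time constraint has dual price: {0}'.format(ct_assembly.dual_value))
print('* painting time constraint has dual price: {0}'.format(ct_painting.dual_value))
```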
#### The dual simplex algorithm
The simplex algorithm works by finding a feasible solution and moving progressively toward optimality.
The dual simplex algorithm implicitly uses the dual to try to find an optimal solution to the primal as early as it can, regardless of whether the solution is feasible or not.
It then moves from one vertex to another, gradually decreasing the infeasibility while maintaining optimality, until an optimal feasible solution to the primal problem is found.
In CPLEX, the Dual-simplex Optimizer is the first choice for most LP problems.
### Basic solutions and basic variables
You learned earlier that the simplex algorithm travels from vertex to vertex to search for the optimal solution.
A solution at a vertex is known as a _basic_ solution. Without getting into too much detail, it's worth knowing that part of the simplex algorithm involves setting a subset of variables to zero at each iteration.
These variables are known as non-basic variables. The remaining variables are the _basic_ variables. The concepts of basic solutions and variables are relevant in the definition of reduced costs that follows next.
### Reduced Costs
The reduced cost of a variable gives an indication of the amount the objective will change with a unit increase in the variable value.
Consider the simplest form of an LP:
$
minimize\ c^{t}x\\
s.t. \\
Ax = b \\
x \ge 0
$
If $y$ represents the dual variables for a given basic solution, then the reduced costs are defined as:
$$
c - y^{t}A
$$
Such a basic solution is optimal if:
$$
c - y^{t}A \ge 0
$$
If all reduced costs for this LP are non-negative, it follows that the objective value can only increase with a change in the variable value, and therefore the solution (when minimizing) is optimal.
#### Getting reduced cost values with DOcplex
DOcplex lets you access reduced costs of variable, after a successful solve. Let's experiment with the two decision variables of our problem:
```
print('* desk variable has reduced cost: {0}'.format(desk.reduced_cost))
print('* cell variable has reduced cost: {0}'.format(cell.reduced_cost))
```
### Default optimality criteria for CPLEX optimizer
Because CPLEX Optimizer operates on finite precision computers, it uses an optimality tolerance to test the reduced costs.
The default optimality tolerance is 1e-6, so the optimality criterion for the simplest form of an LP becomes:
$$
c - y^{t}A \ge -10^{-6}
$$
You can adjust this optimality tolerance, for example if the algorithm takes a very long time to converge and has already reached a solution sufficiently close to optimality.
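If you do need to change it, the tolerance lives in the usual CPLEX parameter hierarchy. A sketch, assuming the parameter path below is unchanged in your DOcplex version:
```
# Loosen the simplex optimality tolerance from the default 1e-6 to 1e-5.
m.parameters.simplex.tolerances.optimality = 1e-5
```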
### Reduced Costs and multiple optimal solutions
In the earlier example you saw how one can visualize multiple optimal solutions for an LP with two variables.
For larger LPs, the reduced costs can be used to determine whether multiple optimal solutions exist. Multiple optimal solutions exist when one or more non-basic variables with a zero reduced cost exist in an optimal solution (that is, variable values that can change without affecting the objective value).
In order to determine whether multiple optimal solutions exist, you can examine the values of the reduced costs with DOcplex.
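A small sketch of such a check (my own, not from the original text): after a solve, list the variables whose reduced cost is numerically zero, since non-basic variables among them signal alternative optima.
```
# Variables with (numerically) zero reduced cost are candidates for alternative optima.
for v in (desk, cell):
    if abs(v.reduced_cost) < 1e-9:
        print('{0} has zero reduced cost: multiple optima may exist'.format(v.name))
```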
### Slack values
For any solution, the difference between the left and right hand sides of a constraint is known as the _slack_ value for that constraint.
For example, if a constraint states that f(x) <= 100, and in the solution f(x) = 80, then the slack value of this constraint is 20.
In the earlier example, you learned about binding and non-binding constraints. For example, f(x) <= 100 is binding if f(x) = 100, and non-binding if f(x) = 80.
The slack value for a binding constraint is always zero, that is, the constraint is met exactly.
You can determine which constraints are binding in a solution by examining the slack values with DOcplex.
This might help to better interpret the solution and help suggest which constraints may benefit from a change in bounds or a change into a soft constraint.
#### Accessing slack values with DOcplex
As an example, let's examine the slack values of some constraints in our problem, after we revert the soft-constraint change:
```
# revert soft constraints
ct_assembly.rhs = 440
s3 = m.solve(url=url, key=key)
# now get slack value for assembly constraint: expected value is 40
print('* slack value for assembly time constraint is: {0}'.format(ct_assembly.slack_value))
# get slack value for painting time constraint, expected value is 0.
print('* slack value for painting time constraint is: {0}'.format(ct_painting.slack_value))
```
### Degeneracy
It is possible that multiple non-optimal solutions with the same objective value exist.
As the simplex algorithm attempts to move in the direction of an improved objective value, it might happen that the algorithm starts cycling between non-optimal solutions with equivalent objective values. This is known as _degeneracy_.
Modern LP solvers, such as CPLEX Simplex Optimizer, have built-in mechanisms to help escape such cycling by using perturbation techniques involving the variable bounds.
If the default algorithm does not break the degenerate cycle, it's a good idea to try some other algorithms, for example the Dual-simplex Optimizer. Problems that are primal degenerate are often not dual degenerate, and vice versa.
#### Setting a LP algorithm with DOcplex
Users can change the algorithm by editing the `lpmethod` parameter of the model.
We won't go into details here; it suffices to know that this parameter accepts an integer from 0 to 6, where 0 denotes automatic choice of the algorithm, 1 is primal simplex, 2 is dual simplex, and 4 is barrier.
For example, to choose the barrier algorithm, set this parameter to 4: access the `parameters` property of the model and, from there, assign the `lpmethod` parameter.
```
m.parameters.lpmethod = 4
m.solve(url=url, key=key, log_output=True)
```
### Barrier methods
Most of the CPLEX Optimizers for MP call upon the basic simplex method or some variation of it.
Some, such as the Barrier Optimizer, use alternative methods.
In graphical terms, the simplex algorithm starts along the edge of the feasible region and searches for an optimal vertex.
The barrier method starts somewhere inside the feasible region – in other words, it avoids the “barrier” that is created by the constraints, and burrows through the feasible region to find the optimal solution.
In its search, the method uses what is known as a predictor-corrector algorithm that constantly adjusts its path through the center of the feasible region (the central path).
This diagram shows how the barrier method works compared to the simplex method. As you can see, the simplex method traverses the edge of the feasible region, while the barrier method moves through the interior, with a predictor-corrector determining the path. In general, it’s a good idea to experiment with different algorithms in CPLEX when trying to improve performance.
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/52.png?raw=true" >
</ul>
### Presolve
CPLEX Optimizer provides a _presolve_ procedure.
Presolve evaluates the model formulation before solving it, and attempts to reduce the size of the problem that is sent to the solver engine.
A reduction in problem size typically translates to a reduction in total run time.
For example, a real problem presented to CPLEX Optimizer with approximately 160,000 constraints and 596,000 decision variables was reduced by presolve to a problem with 27,000 constraints and 150,000 decision variables.
The presolve time was only 1.32 seconds and reduced the solution time from nearly half an hour to under 25 seconds.
#### An example of presolve operations
Let's consider the following Linear problem:
$
maximize:\\
[1]\ 2x_{1}+ 3x_{2} - x_{3} - x_{4}\\
subject\ to:\\
[2]\ x_{1} + x_{2} + x_{3} - 2x_{4} <= 4\\
[3]\ -x_{1} - x_{2} + x_{3} - x_{4} <= 1\\
[4]\ x_{1} + x_{4} <= 3\\
[5]\ x_{1}, x_{2}, x_{3}, x_{4} >= 0
$
- Because $x_{3}$ has a negative coefficient in the objective, the optimization will minimize $x_{3}$.
- In constraints [2] and [3], $x_{3}$ has positive coefficients and the constraints are <=, so decreasing $x_{3}$ can only help feasibility. Presolve can therefore fix $x_{3}$ at its lower bound of 0 and remove it from the problem.
- With $x_{3}$ removed, all the coefficients in constraint [3] are negative. Because the left hand side of [3] can then never be positive, any assignment of non-negative values satisfies the constraint; it is redundant and can be removed.
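To double-check this reasoning, you can solve the small LP directly with DOcplex (a verification sketch of my own, reusing the `Model` class and the `url`/`key` variables from earlier cells) and confirm that $x_{3}$ is 0 at the optimum:
```
pm = Model(name='presolve_example')
x1 = pm.continuous_var(name='x1')
x2 = pm.continuous_var(name='x2')
x3 = pm.continuous_var(name='x3')
x4 = pm.continuous_var(name='x4')
pm.add_constraint(x1 + x2 + x3 - 2 * x4 <= 4)  # [2]
pm.add_constraint(-x1 - x2 + x3 - x4 <= 1)     # [3]
pm.add_constraint(x1 + x4 <= 3)                # [4]
pm.maximize(2 * x1 + 3 * x2 - x3 - x4)         # [1]
ps = pm.solve(url=url, key=key)
pm.print_solution()  # expect x3 = 0 in the reported optimum
```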
# Summary
Having completed this notebook, you should be able to:
- Describe the characteristics of an LP in terms of the objective, decision variables and constraints
- Formulate a simple LP model on paper
- Conceptually explain the following terms in the context of LP:
- dual
- feasible region
- infeasible
- unbounded
- slack
- reduced cost
- degenerate
- Describe some of the algorithms used to solve LPs
- Explain what presolve does
- Write a simple LP model with DOcplex
## References
* [CPLEX Modeling for Python documentation](http://ibmdecisionoptimization.github.io/docplex-doc/)
* [Decision Optimization on Cloud](https://developer.ibm.com/docloud/)
* Need help with DOcplex or to report a bug? Please go [here](https://developer.ibm.com/answers/smartspace/docloud).
* Contact us at dofeedback@wwpdl.vnet.ibm.com.
Deep Learning
=============
Assignment 2
------------
Previously in `1_notmnist.ipynb`, we created a pickle with formatted datasets for training, development and testing on the [notMNIST dataset](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html).
The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow.
```
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
```
First reload the data we generated in `1_notmnist.ipynb`.
```
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
```
Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
```
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
```
We're first going to train a multinomial logistic regression using simple gradient descent.
TensorFlow works like this:
* First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes over a computation graph. This description is all contained within the block below:
with graph.as_default():
...
* Then you can run the operations on this graph as many times as you want by calling `session.run()`, providing it outputs to fetch from the graph that get returned. This runtime operation is all contained in the block below:
with tf.Session(graph=graph) as session:
...
Let's load all the data into TensorFlow and build the computation graph corresponding to our training:
```
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
# These are the parameters that we are going to be training. The weight
# matrix will be initialized using random values following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
```
Let's run this computation and iterate:
```
# Defining accuracy function to find accuracy of predictions against actuals
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
num_steps = 801
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.global_variables_initializer().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print('Loss at step %d: %f' % (step, l))
print('Training accuracy: %.1f%%' % accuracy(
predictions, train_labels[:train_subset, :]))
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print('Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
```
Let's now switch to stochastic gradient descent training instead, which is much faster.
The graph will be similar, except that instead of holding all the training data in a constant node, we create a `Placeholder` node which will be fed actual data at every call of `session.run()`.
```
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
```
Let's run it:
```
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
```
---
Problem
-------
Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units [nn.relu()](https://www.tensorflow.org/versions/r0.7/api_docs/python/nn.html#relu) and 1024 hidden nodes. This model should improve your validation / test accuracy.
---
```
batch_size = 128
num_hidden_nodes = 1024
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = {
'hidden': tf.Variable(tf.truncated_normal([image_size * image_size, num_hidden_nodes])),
'output': tf.Variable(tf.truncated_normal([num_hidden_nodes, num_labels]))
}
biases = {
'hidden': tf.Variable(tf.zeros([num_hidden_nodes])),
'output': tf.Variable(tf.zeros([num_labels]))
}
# Training computation.
hidden_train = tf.nn.relu(tf.matmul(tf_train_dataset, weights['hidden']) + biases['hidden'])
logits = tf.matmul(hidden_train, weights['output']) + biases['output']
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
hidden_valid = tf.nn.relu(tf.matmul(tf_valid_dataset, weights['hidden']) + biases['hidden'])
valid_prediction = tf.nn.softmax(tf.matmul(hidden_valid, weights['output']) + biases['output'])
hidden_test = tf.nn.relu(tf.matmul(tf_test_dataset, weights['hidden']) + biases['hidden'])
test_prediction = tf.nn.softmax(tf.matmul(hidden_test, weights['output']) + biases['output'])
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
```
# Writing a Molecular Monte Carlo Simulation
Starting today, make sure you have the functions
1. `calculate_LJ` - written in class
1. `read_xyz` - provided in class
1. `calculate_total_energy` - modified version provided in this notebook (written for homework), which uses a cutoff
1. `calculate_distance` - should be the version written for homework which accounts for periodic boundaries.
1. `calculate_tail_correction` - written for homework
```
# add imports here
import math
import random
import matplotlib
def calculate_total_energy(coordinates, box_length, cutoff):
"""
Calculate the total energy of a set of particles using the Lennard Jones potential.
Parameters
----------
coordinates : list
A nested list containing the x, y,z coordinate for each particle
box_length : float
The length of the box. Assumes cubic box.
cutoff : float
The cutoff length
Returns
-------
total_energy : float
The total energy of the set of coordinates.
"""
total_energy = 0
num_atoms = len(coordinates)
for i in range(num_atoms):
for j in range(i+1, num_atoms):
# Calculate the distance between the particles - exercise.
dist_ij = calculate_distance(coordinates[i], coordinates[j], box_length)
if dist_ij < cutoff:
# Calculate the pairwise LJ energy
LJ_ij = calculate_LJ(dist_ij)
# Add to total energy.
total_energy += LJ_ij
return total_energy
def read_xyz(filepath):
"""
Reads coordinates from an xyz file.
Parameters
----------
filepath : str
The path to the xyz file to be processed.
Returns
-------
atomic_coordinates : list
A two dimensional list containing atomic coordinates
"""
with open(filepath) as f:
box_length = float(f.readline().split()[0])
num_atoms = float(f.readline())
coordinates = f.readlines()
atomic_coordinates = []
for atom in coordinates:
split_atoms = atom.split()
float_coords = []
# We split this way to get rid of the atom label.
for coord in split_atoms[1:]:
float_coords.append(float(coord))
atomic_coordinates.append(float_coords)
return atomic_coordinates, box_length
def calculate_LJ(r_ij):
"""
The LJ interaction energy between two particles.
Computes the pairwise Lennard Jones interaction energy based on the separation distance in reduced units.
Parameters
----------
r_ij : float
The distance between the particles in reduced units.
Returns
-------
pairwise_energy : float
The pairwise Lennard Jones interaction energy in reduced units.
Examples
--------
>>> calculate_LJ(1)
0
"""
r6_term = math.pow(1/r_ij, 6)
r12_term = math.pow(r6_term, 2)
pairwise_energy = 4 * (r12_term - r6_term)
return pairwise_energy
def calculate_distance(coord1, coord2, box_length=None):
"""
Calculate the distance between two points. When box_length is set, the minimum image convention is used to calculate the distance between the points.
Parameters
----------
coord1, coord2 : list
The coordinates of the points, [x, y, z]
box_length : float, optional
The box length
Returns
-------
distance : float
The distance between the two points accounting for periodic boundaries
"""
distance = 0
for i in range(3):
hold_dist = abs(coord2[i] - coord1[i])
if (box_length):
if hold_dist > box_length/2:
hold_dist = hold_dist - (box_length * round(hold_dist/box_length))
distance += math.pow(hold_dist, 2)
return math.sqrt(distance)
## Add your group's tail correction function
def calculate_tail_correction(num_particles, box_length, cutoff):
"""
The tail correction associated with using a cutoff radius.
Computes the tail correction based on a cutoff radius used in the LJ energy calculation in reduced units.
Parameters
----------
num_particles : int
The number of particles in the system.
box_length : int
Size of the box length of the system, used to calculate volume.
cutoff : int
Cutoff distance.
Returns
-------
tail_correction : float
The tail correction associated with using the cutoff.
"""
brackets = (1/3*math.pow(1/cutoff,9)) - math.pow(1/cutoff,3)
volume = box_length**3
constant = ((8*math.pi*(num_particles**2))/(3*volume))
tail_correction = constant * brackets
return tail_correction
```
The Metropolis Criterion
$$ P_{acc}(m \rightarrow n) = \text{min} \left[
1,e^{-\beta \Delta U}
\right] $$
```
def accept_or_reject(delta_U, beta):
"""
Accept or reject a move based on the Metropolis criterion.
Parameters
----------
    delta_U : float
The change in energy for moving system from state m to n.
beta : float
1/temperature
Returns
-------
boolean
Whether the move is accepted.
"""
if delta_U <= 0.0:
accept = True
else:
#Generate a random number on (0,1)
random_number = random.random()
p_acc = math.exp(-beta*delta_U)
if random_number < p_acc:
accept = True
else:
accept = False
return accept
def calculate_pacc(delta_U, beta):
"""
    Calculate the Metropolis acceptance probability for a proposed move.
    Parameters
    ----------
    delta_U : float
        The change in energy for moving the system from state m to n.
    beta : float
        1/temperature
    Returns
    -------
    float
        The acceptance probability, exp(-beta*delta_U).
"""
return math.exp(-beta*delta_U)
# Sanity checks - test cases
delta_energy = -1
beta = 1
accepted = accept_or_reject(delta_energy, beta)
assert accepted
# Sanity checks - test cases
delta_energy = 0
beta = 1
accepted = accept_or_reject(delta_energy, beta)
assert accepted
# To test function with random numbers
# can set random seed
#To set seed
random.seed(0)
random.random()
delta_energy = 1
beta = 1
random.seed(0)
accepted = accept_or_reject(delta_energy, beta)
assert accepted is False
#Clear seed
random.seed()
def calculate_pair_energy(coordinates, i_particle, box_length, cutoff):
"""
Calculate the interaction energy of a particle with its environment (all other particles in the system)
Parameters
----------
coordinates : list
The coordinates for all the particles in the system.
i_particle : int
The particle number for which to calculate the energy.
cutoff : float
The simulation cutoff. Beyond this distance, interactions are not calculated.
box_length : float
The length of the box for periodic bounds
Returns
-------
e_total : float
The pairwise interaction energy of the ith particles with all other particles in the system
"""
e_total = 0.0
#creates a list of the coordinates for the i_particle
i_position = coordinates[i_particle]
num_atoms = len(coordinates)
for j_particle in range(num_atoms):
if i_particle != j_particle:
#creates a list of coordinates for the j_particle
j_position = coordinates[j_particle]
rij = calculate_distance(i_position, j_position, box_length)
if rij < cutoff:
e_pair = calculate_LJ(rij)
e_total += e_pair
return e_total
## Sanity checks
test_coords = [[0, 0, 0], [0, 0, 2**(1/6)], [0, 0, 2*2**(1/6)]]
# What do you expect the result to be for particle index 1 (use cutoff of 3)?
assert calculate_pair_energy(test_coords, 1, 10, 3) == -2
# What do you expect the result to be for particle index 0 (use cutoff of 2)?
assert calculate_pair_energy(test_coords, 0, 10, 2) == -1
assert calculate_pair_energy(test_coords, 0, 10, 3) == calculate_pair_energy(test_coords, 2, 10, 3)
```
# Monte Carlo Loop
```
# Read or generate initial coordinates
coordinates, box_length = read_xyz('../../lj_sample_configurations/lj_sample_config_periodic1.txt')
# Set simulation parameters
reduced_temperature = 0.9
num_steps = 5000
max_displacement = 0.1
cutoff = 3
#how often to print an update
freq = 1000
# Calculated quantities
beta = 1 / reduced_temperature
num_particles = len(coordinates)
# Energy calculations
total_energy = calculate_total_energy(coordinates, box_length, cutoff)
print(total_energy)
total_correction = calculate_tail_correction(num_particles, box_length, cutoff)
print(total_correction)
total_energy += total_correction
for step in range(num_steps):
# 1. Randomly pick one of the particles.
random_particle = random.randrange(num_particles)
# 2. Calculate the interaction energy of the selected particle with the system.
current_energy = calculate_pair_energy(coordinates, random_particle, box_length, cutoff)
# 3. Generate a random x, y, z displacement.
x_rand = random.uniform(-max_displacement, max_displacement)
y_rand = random.uniform(-max_displacement, max_displacement)
z_rand = random.uniform(-max_displacement, max_displacement)
# 4. Modify the coordinate of Nth particle by generated displacements.
coordinates[random_particle][0] += x_rand
coordinates[random_particle][1] += y_rand
coordinates[random_particle][2] += z_rand
# 5. Calculate the interaction energy of the moved particle with the system and store this value.
proposed_energy = calculate_pair_energy(coordinates, random_particle, box_length, cutoff)
delta_energy = proposed_energy - current_energy
# 6. Calculate if we accept the move based on energy difference.
accept = accept_or_reject(delta_energy, beta)
# 7. If accepted, move the particle.
if accept:
total_energy += delta_energy
else:
#Move not accepted, roll back coordinates
coordinates[random_particle][0] -= x_rand
coordinates[random_particle][1] -= y_rand
coordinates[random_particle][2] -= z_rand
# 8. Print the energy if step is a multiple of freq.
if step % freq == 0:
print(step, total_energy/num_particles)
```
# Visualization
```
import matplotlib.pyplot as plt
# special jupyter notebook command to make plots interactive
%matplotlib notebook
# Create plots
fig = plt.figure(figsize=(4,4))
ax = fig.add_subplot(111)
ax.set_ylim([-1, 2])
#ax.legend([line1, line2, line3], ['T=0.4', 'T=0.9', 'T=1.4'])
# Sanity checks - test cases
#accepted = accept_or_reject(delta_energy, beta)
def test_pacc():
ts = [0.4, 0.9, 1.4]
colors = ['ob', 'og', 'or']
count = 0
for t in ts:
for i in range(-2, 3):
#print(i)
beta = 1/t
y = math.exp(-beta*i)
#print(y)
color = colors[count]
ax.plot(i,y, color)
count += 1
test_pacc()
```
# Exploring the Acceptance Criteria?
What is the effect of temperature on the probability of a MC move being accepted?
As temperature increases, the probability of acceptance increases.
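A quick numeric check of this claim, using the `calculate_pacc` function defined earlier (an example of my own, not part of the original exercise):
```
# For a fixed energy increase of delta_U = 1, a higher temperature (smaller beta)
# gives a larger Metropolis acceptance probability.
for t in (0.4, 0.9, 1.4):
    print('T = {0}: P_acc = {1:.3f}'.format(t, calculate_pacc(1.0, 1.0 / t)))
```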
```
def system_init(n_particles, volume):
"""
Sets up initial system configurations from a number of particles and box size
Parameters
----------
n_particles : int
Number of particles.
volume : float
Volume of the box
Return
------
coordinates : list of tuples
Coordinates created
box_length : float
Calculated box length
"""
coordinates = []
box_length = volume**(1/3)
for p in range(n_particles):
x = random.uniform(-box_length/2, box_length/2)
y = random.uniform(-box_length/2, box_length/2)
z = random.uniform(-box_length/2, box_length/2)
coordinates.append((x,y,z))
return coordinates, box_length
n_particles = 800
coordinates, box_length = system_init(n_particles, 512)
print(box_length)
assert len(coordinates) == n_particles
print(coordinates)
```
|
github_jupyter
|
# add imports here
import math
import random
import matplotlib
def calculate_total_energy(coordinates, box_length, cutoff):
"""
Calculate the total energy of a set of particles using the Lennard Jones potential.
Parameters
----------
coordinates : list
A nested list containing the x, y,z coordinate for each particle
box_length : float
The length of the box. Assumes cubic box.
cutoff : float
The cutoff length
Returns
-------
total_energy : float
The total energy of the set of coordinates.
"""
total_energy = 0
num_atoms = len(coordinates)
for i in range(num_atoms):
for j in range(i+1, num_atoms):
# Calculate the distance between the particles - exercise.
dist_ij = calculate_distance(coordinates[i], coordinates[j], box_length)
if dist_ij < cutoff:
# Calculate the pairwise LJ energy
LJ_ij = calculate_LJ(dist_ij)
# Add to total energy.
total_energy += LJ_ij
return total_energy
def read_xyz(filepath):
"""
Reads coordinates from an xyz file.
Parameters
----------
filepath : str
The path to the xyz file to be processed.
Returns
-------
atomic_coordinates : list
A two dimensional list containing atomic coordinates
"""
with open(filepath) as f:
box_length = float(f.readline().split()[0])
num_atoms = float(f.readline())
coordinates = f.readlines()
atomic_coordinates = []
for atom in coordinates:
split_atoms = atom.split()
float_coords = []
# We split this way to get rid of the atom label.
for coord in split_atoms[1:]:
float_coords.append(float(coord))
atomic_coordinates.append(float_coords)
return atomic_coordinates, box_length
def calculate_LJ(r_ij):
"""
The LJ interaction energy between two particles.
Computes the pairwise Lennard Jones interaction energy based on the separation distance in reduced units.
Parameters
----------
r_ij : float
The distance between the particles in reduced units.
Returns
-------
pairwise_energy : float
The pairwise Lennard Jones interaction energy in reduced units.
Examples
--------
>>> calculate_LJ(1)
0
"""
r6_term = math.pow(1/r_ij, 6)
r12_term = math.pow(r6_term, 2)
pairwise_energy = 4 * (r12_term - r6_term)
return pairwise_energy
def calculate_distance(coord1, coord2, box_length=None):
"""
Calculate the distance between two points. When box_length is set, the minimum image convention is used to calculate the distance between the points.
Parameters
----------
coord1, coord2 : list
The coordinates of the points, [x, y, z]
box_length : float, optional
The box length
Returns
-------
distance : float
The distance between the two points accounting for periodic boundaries
"""
distance = 0
for i in range(3):
hold_dist = abs(coord2[i] - coord1[i])
if (box_length):
if hold_dist > box_length/2:
hold_dist = hold_dist - (box_length * round(hold_dist/box_length))
distance += math.pow(hold_dist, 2)
return math.sqrt(distance)
## Add your group's tail correction function
def calculate_tail_correction(num_particles, box_length, cutoff):
"""
The tail correction associated with using a cutoff radius.
Computes the tail correction based on a cutoff radius used in the LJ energy calculation in reduced units.
Parameters
----------
num_particles : int
The number of particles in the system.
box_length : int
Size of the box length of the system, used to calculate volume.
cutoff : int
Cutoff distance.
Returns
-------
tail_correction : float
The tail correction associated with using the cutoff.
"""
brackets = (1/3*math.pow(1/cutoff,9)) - math.pow(1/cutoff,3)
volume = box_length**3
constant = ((8*math.pi*(num_particles**2))/(3*volume))
tail_correction = constant * brackets
return tail_correction
def accept_or_reject(delta_U, beta):
"""
Accept or reject a move based on the Metropolis criterion.
Parameters
----------
detlta_U : float
The change in energy for moving system from state m to n.
beta : float
1/temperature
Returns
-------
boolean
Whether the move is accepted.
"""
if delta_U <= 0.0:
accept = True
else:
#Generate a random number on (0,1)
random_number = random.random()
p_acc = math.exp(-beta*delta_U)
if random_number < p_acc:
accept = True
else:
accept = False
return accept
def calculate_pacc(delta_U, beta):
"""
Accept or reject a move based on the Metropolis criterion.
Parameters
----------
detlta_U : float
The change in energy for moving system from state m to n.
beta : float
1/temperature
Returns
-------
boolean
Whether the move is accepted.
"""
return math.exp(-beta*delta_U)
# Sanity checks - test cases
delta_energy = -1
beta = 1
accepted = accept_or_reject(delta_energy, beta)
assert accepted
# Sanity checks - test cases
delta_energy = 0
beta = 1
accepted = accept_or_reject(delta_energy, beta)
assert accepted
# To test function with random numbers
# can set random seed
#To set seed
random.seed(0)
random.random()
delta_energy = 1
beta = 1
random.seed(0)
accepted = accept_or_reject(delta_energy, beta)
assert accepted is False
#Clear seed
random.seed()
def calculate_pair_energy(coordinates, i_particle, box_length, cutoff):
"""
Calculate the interaction energy of a particle with its environment (all other particles in the system)
Parameters
----------
coordinates : list
The coordinates for all the particles in the system.
i_particle : int
The particle number for which to calculate the energy.
cutoff : float
The simulation cutoff. Beyond this distance, interactions are not calculated.
box_length : float
The length of the box for periodic bounds
Returns
-------
e_total : float
The pairwise interaction energy of the ith particles with all other particles in the system
"""
e_total = 0.0
#creates a list of the coordinates for the i_particle
i_position = coordinates[i_particle]
num_atoms = len(coordinates)
for j_particle in range(num_atoms):
if i_particle != j_particle:
#creates a list of coordinates for the j_particle
j_position = coordinates[j_particle]
rij = calculate_distance(i_position, j_position, box_length)
if rij < cutoff:
e_pair = calculate_LJ(rij)
e_total += e_pair
return e_total
## Sanity checks
test_coords = [[0, 0, 0], [0, 0, 2**(1/6)], [0, 0, 2*2**(1/6)]]
# What do you expect the result to be for particle index 1 (use cutoff of 3)?
assert calculate_pair_energy(test_coords, 1, 10, 3) == -2
# What do you expect the result to be for particle index 0 (use cutoff of 2)?
assert calculate_pair_energy(test_coords, 0, 10, 2) == -1
assert calculate_pair_energy(test_coords, 0, 10, 3) == calculate_pair_energy(test_coords, 2, 10, 3)
# Read or generate initial coordinates
coordinates, box_length = read_xyz('../../lj_sample_configurations/lj_sample_config_periodic1.txt')
# Set simulation parameters
reduced_temperature = 0.9
num_steps = 5000
max_displacement = 0.1
cutoff = 3
#how often to print an update
freq = 1000
# Calculated quantities
beta = 1 / reduced_temperature
num_particles = len(coordinates)
# Energy calculations
total_energy = calculate_total_energy(coordinates, box_length, cutoff)
print(total_energy)
total_correction = calculate_tail_correction(num_particles, box_length, cutoff)
print(total_correction)
total_energy += total_correction
for step in range(num_steps):
# 1. Randomly pick one of the particles.
random_particle = random.randrange(num_particles)
# 2. Calculate the interaction energy of the selected particle with the system.
current_energy = calculate_pair_energy(coordinates, random_particle, box_length, cutoff)
# 3. Generate a random x, y, z displacement.
x_rand = random.uniform(-max_displacement, max_displacement)
y_rand = random.uniform(-max_displacement, max_displacement)
z_rand = random.uniform(-max_displacement, max_displacement)
# 4. Modify the coordinate of Nth particle by generated displacements.
coordinates[random_particle][0] += x_rand
coordinates[random_particle][1] += y_rand
coordinates[random_particle][2] += z_rand
# 5. Calculate the interaction energy of the moved particle with the system and store this value.
proposed_energy = calculate_pair_energy(coordinates, random_particle, box_length, cutoff)
delta_energy = proposed_energy - current_energy
# 6. Calculate if we accept the move based on energy difference.
accept = accept_or_reject(delta_energy, beta)
# 7. If accepted, move the particle.
if accept:
total_energy += delta_energy
else:
#Move not accepted, roll back coordinates
coordinates[random_particle][0] -= x_rand
coordinates[random_particle][1] -= y_rand
coordinates[random_particle][2] -= z_rand
# 8. Print the energy if step is a multiple of freq.
if step % freq == 0:
print(step, total_energy/num_particles)
import matplotlib.pyplot as plt
# special jupyter notebook command to make plots interactive
%matplotlib notebook
# Create plots
fig = plt.figure(figsize=(4,4))
ax = fig.add_subplot(111)
ax.set_ylim([-1, 2])
#ax.legend([line1, line2, line3], ['T=0.4', 'T=0.9', 'T=1.4'])
# Sanity checks - test cases
#accepted = accept_or_reject(delta_energy, beta)
def test_pacc():
ts = [0.4, 0.9, 1.4]
colors = ['ob', 'og', 'or']
count = 0
for t in ts:
for i in range(-2, 3):
#print(i)
beta = 1/t
y = math.exp(-beta*i)
#print(y)
color = colors[count]
ax.plot(i,y, color)
count += 1
test_pacc()
def system_init(n_particles, volume):
    """
    Set up an initial system configuration from a number of particles and a box volume.

    Parameters
    ----------
    n_particles : int
        Number of particles.
    volume : float
        Volume of the cubic box.

    Returns
    -------
    coordinates : list of tuples
        Randomly generated (x, y, z) coordinates, one tuple per particle.
    box_length : float
        Edge length of the cubic box, volume**(1/3).
    """
    coordinates = []
    box_length = volume**(1/3)
    for p in range(n_particles):
        x = random.uniform(-box_length/2, box_length/2)
        y = random.uniform(-box_length/2, box_length/2)
        z = random.uniform(-box_length/2, box_length/2)
        coordinates.append((x, y, z))
    return coordinates, box_length
n_particles = 800
coordinates, box_length = system_init(n_particles, 512)
print(box_length)
assert len(coordinates) == n_particles
print(coordinates)
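# A small extra sanity check (an addition, not from the original notebook): every generated
# coordinate component should lie inside the box centred at the origin.
assert all(abs(c) <= box_length / 2 for point in coordinates for c in point)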
# Summary LSUV Paper
> [All you need is a good init](https://arxiv.org/abs/1511.06422) proposes a novel initialization technique that allows deep architectures to be initialized for activations other than ReLU (the focus of Kaiming initialization)
- toc: true
- badges: false
- comments: true
- categories: [jupyter]
- image: images/resnet_var.png
## What did the authors want to achieve?
- improve the training of deep nets
- generalize Xavier initialization to activations other than ReLU (the focus of Kaiming initialization), such as tanh and maxout
## Key elements
LSUV extends orthogonal initialization and consists of two steps:
1) Fill the weights with Gaussian noise with unit variance.
2) Decompose them to an orthonormal basis with a QR or SVD decomposition and replace the weights with one of the resulting components.
LSUV then estimates the output variance of each convolution and inner-product layer and scales the weights so that this variance equals one. It is worth mentioning that the influence of the batch size is negligible within wide margins.
In total, LSUV can be seen as orthonormal initialization combined with batch norm applied only to the first mini-batch. The orthonormal initialization of the weight matrices de-correlates layer activations, while the unit-variance normalization step is what it shares with batch norm. Compared to traditional batch norm, the results are sufficient and the procedure is computationally more efficient (batch norm adds about 30% to the compute cost of the system). It is not always possible to normalize the variance with the desired precision due to variations in the data. A minimal sketch of the orthonormal part is shown below.
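To make the two steps above concrete, here is a minimal NumPy sketch of the orthonormal part (the helper name `orthonormal_init` is my own illustration, not code from the paper): draw unit-variance Gaussian weights, then replace them with an orthonormal factor of their SVD.

```
import numpy as np

def orthonormal_init(fan_out, fan_in, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    w = rng.standard_normal((fan_out, fan_in))        # step 1: unit-variance Gaussian noise
    u, _, vt = np.linalg.svd(w, full_matrices=False)  # step 2: orthonormal basis via SVD
    return u if u.shape == w.shape else vt            # keep the factor matching the weight shape

W = orthonormal_init(256, 128)
print(np.allclose(W.T @ W, np.eye(128)))  # columns are orthonormal -> True
```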

The pseudocode for LSUV is shown above; to limit the maximum number of trials (and avoid infinite loops), a cap $T_{max}$ is set. In practice 1-5 iterations are required to reach the desired variance. A rough illustration of this scaling loop follows below.
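The sketch below (NumPy only; `lsuv_scale` and the shapes are illustrative assumptions, not the paper's code) rescales a single linear layer's weights until the output variance on one mini-batch is close to one, giving up after `t_max` trials.

```
import numpy as np

def lsuv_scale(W, x_batch, tol=0.1, t_max=10):
    for _ in range(t_max):
        var = np.var(x_batch @ W.T)   # output variance on the mini-batch
        if abs(var - 1.0) < tol:
            break
        W = W / np.sqrt(var)          # scale the weights toward unit output variance
    return W

rng = np.random.default_rng(1)
W = rng.standard_normal((256, 128))   # stands in for the orthonormally initialized weights
x = rng.standard_normal((64, 128))    # one mini-batch of layer inputs
W = lsuv_scale(W, x)
print(np.var(x @ W.T))                # close to 1 after a few iterations
```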
### Implementation
An implementation tutorial, powered by fastai, can be found [here](https://cedric-perauer.github.io/DL_from_Foundations/jupyter/2020/04/15/LSUV.html).
## Results and Conclusion
### CIFAR 10/100

As we can see, the FitNet trained with LSUV outperforms the other techniques, but is virtually on par with orthonormal initialization. SGD was used with a learning rate of 0.01 and weight decay at epochs 10/150/200, for 230 epochs in total.
### Analysis of empirical results
For FitNet-1 the authors did not experience any problems with any of the activation functions (ReLU, maxout, tanh), optimizers (SGD, RMSProp) or initialization techniques (Xavier, MSRA, Ortho, LSUV) they used. This is most likely because CNNs tolerate a wide range of mediocre initializations; only the training time increases. FitNet-4, however, was much more difficult to optimize.
Training a FitResNet-4 on CIFAR-10, which tests the initialization with ResNet training "out of the box", showed LSUV to be the only initialization technique that led all nets to converge regardless of the activation function used:

### LSUV compared to Batch Norm
LSUV can be seen as batch norm of the layer outputs performed once, before the start of training. The authors also show that placing batch norm after the activation function works for FitNet-4.
### ImageNet training

When training on ImageNet, the authors found that LSUV reduces the initial flat-loss period from 0.5 epochs to 0.05 for CaffeNet. It also converges faster in the beginning, but is then overtaken by the standard CaffeNet setup around the 30th epoch and ends up with 1.3% lower precision. The authors of the paper have no explanation for this empirical phenomenon, especially since GoogLeNet, in contrast, performed better with LSUV (0.68 compared to 0.672).
### LSUV Timing
The computationally significant part of LSUV is the SVD decomposition of the weight matrices. The compute overhead on top of generating the Gaussian noise (which is almost instant) is about 3.5 minutes for CaffeNet, which is very small compared to the total training time.
The authors state that the experiments confirm the finding of Romero et al. (2015) that very thin but deep networks, which are fast and have few parameters, obtain comparable or even better performance than wider but shallower nets. LSUV is fast and its results are close to state of the art.