# ETL Processes
Use this notebook to develop the ETL process for each of your tables before completing the `etl.py` file to load the full datasets.
```
import os
import glob
import psycopg2
import pandas as pd
from sql_queries import *
conn = psycopg2.connect("host=127.0.0.1 dbname=sparkifydb user=student password=student")
cur = conn.cursor()
def get_files(filepath):
    all_files = []
    for root, dirs, files in os.walk(filepath):
        files = glob.glob(os.path.join(root, '*.json'))
        for f in files:
            all_files.append(os.path.abspath(f))
    return all_files
```
# Process `song_data`
In this first part, you'll perform ETL on the first dataset, `song_data`, to create the `songs` and `artists` dimensional tables.
Let's perform ETL on a single song file and load a single record into each table to start.
- Use the `get_files` function provided above to get a list of all song JSON files in `data/song_data`
- Select the first song in this list
- Read the song file and view the data
```
song_files = get_files("data/song_data")
filepath = song_files[0]
filepath
df = pd.read_json(filepath, lines = True)
df.head()
```
## #1: `songs` Table
#### Extract Data for Songs Table
- Select columns for song ID, title, artist ID, year, and duration
- Use `df.values` to select just the values from the dataframe
- Index to select the first (only) record in the dataframe
- Convert the array to a list and set it to `song_data`
```
song_data = df.song_id.tolist() + df.title.tolist() + df.artist_id.tolist() + df.year.tolist() + df.duration.tolist()
song_data
```
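An equivalent way to build `song_data` that follows the `df.values` steps in the checklist above:
```
# Select the five columns, take the first (only) record, and convert the array to a list
song_data = df[['song_id', 'title', 'artist_id', 'year', 'duration']].values[0].tolist()
song_data
```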
#### Insert Record into Song Table
Implement the `song_table_insert` query in `sql_queries.py` and run the cell below to insert a record for this song into the `songs` table. Remember to run `create_tables.py` before running the cell below to ensure you've created/reset the `songs` table in the sparkify database.
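A minimal sketch of what `song_table_insert` could look like in `sql_queries.py` (column names follow the fields selected above; the `ON CONFLICT` clause is an assumption about how you may want to handle duplicates, not part of the starter code):
```
# Hypothetical sketch -- adapt column names/constraints to your actual schema
song_table_insert = ("""
    INSERT INTO songs (song_id, title, artist_id, year, duration)
    VALUES (%s, %s, %s, %s, %s)
    ON CONFLICT (song_id) DO NOTHING;
""")
```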
```
cur.execute(song_table_insert, song_data)
conn.commit()
```
Run `test.ipynb` to see if you've successfully added a record to this table.
## #2: `artists` Table
#### Extract Data for Artists Table
- Select columns for artist ID, name, location, latitude, and longitude
- Use `df.values` to select just the values from the dataframe
- Index to select the first (only) record in the dataframe
- Convert the array to a list and set it to `artist_data`
```
artist_data = df.artist_id.tolist() + df.artist_name.tolist() + df.artist_location.tolist() + df.artist_latitude.tolist() + df.artist_longitude.tolist()
artist_data
```
#### Insert Record into Artist Table
Implement the `artist_table_insert` query in `sql_queries.py` and run the cell below to insert a record for this song's artist into the `artists` table. Remember to run `create_tables.py` before running the cell below to ensure you've created/reset the `artists` table in the sparkify database.
```
cur.execute(artist_table_insert, artist_data)
conn.commit()
```
Run `test.ipynb` to see if you've successfully added a record to this table.
# Process `log_data`
In this part, you'll perform ETL on the second dataset, `log_data`, to create the `time` and `users` dimensional tables, as well as the `songplays` fact table.
Let's perform ETL on a single log file and load a single record into each table.
- Use the `get_files` function provided above to get a list of all log JSON files in `data/log_data`
- Select the first log file in this list
- Read the log file and view the data
```
log_files = get_files("data/log_data")
filepath = log_files[0]
df = pd.read_json(filepath, lines = True)
df.head()
```
## #3: `time` Table
#### Extract Data for Time Table
- Filter records by `NextSong` action
- Convert the `ts` timestamp column to datetime
- Hint: the current timestamp is in milliseconds
- Extract the timestamp, hour, day, week of year, month, year, and weekday from the `ts` column and set `time_data` to a list containing these values in order
  - Hint: use pandas' [`dt` attribute](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.html) to easily access datetime-like properties.
- Specify labels for these columns and set to `column_labels`
- Create a dataframe, `time_df`, containing the time data for this file by combining `column_labels` and `time_data` into a dictionary and converting this into a dataframe
```
df = df[df['page'] == 'NextSong']
df.head()
t = pd.to_datetime(df['ts'], unit = 'ms')
t.head()
time_data = ([[x, x.hour, x.day, x.week, x.month, x.year, x.dayofweek] for x in t])
column_labels = ('start_time', 'hour', 'day', 'week', 'month', 'year', 'weekday')
time_df = pd.DataFrame(time_data, columns = column_labels)
time_df.head()
```
#### Insert Records into Time Table
Implement the `time_table_insert` query in `sql_queries.py` and run the cell below to insert records for the timestamps in this log file into the `time` table. Remember to run `create_tables.py` before running the cell below to ensure you've created/reset the `time` table in the sparkify database.
```
for i, row in time_df.iterrows():
    cur.execute(time_table_insert, list(row))
    conn.commit()
```
Run `test.ipynb` to see if you've successfully added records to this table.
## #4: `users` Table
#### Extract Data for Users Table
- Select columns for user ID, first name, last name, gender and level and set to `user_df`
```
user_df = df[['userId', 'firstName', 'lastName', 'gender', 'level']].drop_duplicates()
```
#### Insert Records into Users Table
Implement the `user_table_insert` query in `sql_queries.py` and run the cell below to insert records for the users in this log file into the `users` table. Remember to run `create_tables.py` before running the cell below to ensure you've created/reset the `users` table in the sparkify database.
```
for i, row in user_df.iterrows():
    cur.execute(user_table_insert, row)
    conn.commit()
```
Run `test.ipynb` to see if you've successfully added records to this table.
## #5: `songplays` Table
#### Extract Data for Songplays Table
This one is a little more complicated since information from the songs table, artists table, and original log file are all needed for the `songplays` table. Since the log file does not specify an ID for either the song or the artist, you'll need to get the song ID and artist ID by querying the songs and artists tables to find matches based on song title, artist name, and song duration time.
- Implement the `song_select` query in `sql_queries.py` to find the song ID and artist ID based on the title, artist name, and duration of a song (a sketch follows this list).
- Select the timestamp, user ID, level, song ID, artist ID, session ID, location, and user agent and set to `songplay_data`
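A minimal sketch of what `song_select` could look like, assuming the `songs` and `artists` schemas described earlier (column names such as `name` are assumptions; adjust to your actual tables):
```
# Hypothetical sketch of song_select in sql_queries.py: join songs and artists
# and match on song title, artist name, and duration
song_select = ("""
    SELECT s.song_id, a.artist_id
    FROM songs s
    JOIN artists a ON s.artist_id = a.artist_id
    WHERE s.title = %s AND a.name = %s AND s.duration = %s;
""")
```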
#### Insert Records into Songplays Table
- Implement the `songplay_table_insert` query and run the cell below to insert records for the songplay actions in this log file into the `songplays` table. Remember to run `create_tables.py` before running the cell below to ensure you've created/reset the `songplays` table in the sparkify database.
```
for index, row in df.iterrows():

    # get songid and artistid from song and artist tables
    cur.execute(song_select, (row.song, row.artist, row.length))
    results = cur.fetchone()

    if results:
        songid, artistid = results
    else:
        songid, artistid = None, None

    # insert songplay record
    songplay_data = (pd.to_datetime(row.ts, unit='ms'), row.userId, row.level,
                     songid, artistid, row.sessionId, row.location, row.userAgent)
    cur.execute(songplay_table_insert, songplay_data)
    conn.commit()
```
Run `test.ipynb` to see if you've successfully added records to this table.
# Close Connection to Sparkify Database
```
conn.close()
```
# Implement `etl.py`
Use what you've completed in this notebook to implement `etl.py`.
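A hedged sketch of the driver logic `etl.py` might use (the `process_data` name and its signature are assumptions based on a typical project template, not requirements):
```
import glob
import os


def get_files(filepath):
    """Collect absolute paths of every JSON file under `filepath` (same helper as above)."""
    all_files = []
    for root, dirs, files in os.walk(filepath):
        for f in glob.glob(os.path.join(root, '*.json')):
            all_files.append(os.path.abspath(f))
    return all_files


def process_data(cur, conn, filepath, func):
    """Apply `func(cur, datafile)` to every JSON file under `filepath`, committing after each file."""
    all_files = get_files(filepath)
    print(f'{len(all_files)} files found in {filepath}')
    for i, datafile in enumerate(all_files, 1):
        func(cur, datafile)
        conn.commit()
        print(f'{i}/{len(all_files)} files processed.')
```
Here `func` would be whichever per-file routine you write, for example one that reuses the song/artist extraction above for `data/song_data` and the time/user/songplay logic for `data/log_data`.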
```
import matplotlib.pyplot as plt
from enmspring.spring import Spring
from enmspring import k_b0_util
from enmspring.k_b0_plot import BoxPlot
import pandas as pd
import numpy as np
```
### Part 0: Initialize
```
rootfolder = '/Users/alayah361/fluctmatch/enm/cg_13'
cutoff = 4.7
host = 'nome'
type_na = 'bdna+bdna'
n_bp = 13 ## 21
spring_obj = Spring(rootfolder, host, type_na, n_bp)
```
### Part 1: Process Data
```
plot_agent = BoxPlot(rootfolder, cutoff)
```
### Part 2: Box plots for each pair category (PP, R, RB, PB, st, bp)
```
category = 'PP'
key = 'k'
nrows = 2
ncols = 2
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(16, 10))
d_axes = plot_agent.plot_main(category, axes, key, nrows, ncols)
plt.savefig(f'{category}_{key}.png', dpi=150)
plt.show()
category = 'R'
nrows = 1
ncols = 3
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(22, 4))
d_axes = plot_agent.plot_main(category, axes, key, nrows, ncols)
plt.savefig(f'{category}_{key}.png', dpi=150)
plt.show()
category = 'RB'
nrows = 2
ncols = 2
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(16, 10))
d_axes = plot_agent.plot_main(category, axes, key, nrows, ncols)
plt.savefig(f'{category}_{key}.png', dpi=150)
plt.show()
category = 'PB'
key = 'k'
nrows = 1
ncols = 2
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(16, 4))
d_axes = plot_agent.plot_main(category, axes, key, nrows, ncols)
plt.savefig(f'{category}_{key}.png', dpi=150)
plt.show()
category = 'st'
nrows = 1
ncols = 1
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(16, 8))
d_axes = plot_agent.plot_main(category, axes, key, nrows, ncols)
plt.savefig(f'{category}_{key}.png', dpi=150)
plt.show()
category = 'bp'
nrows = 1
ncols = 3
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(24, 4))
d_axes = plot_agent.plot_main(category, axes, key, nrows, ncols)
plt.savefig(f'{category}_{key}.png', dpi=150)
plt.show()
df_nome = pd.read_csv('/Users/alayah361/fluctmatch/enm/cg_13/nome/bdna+bdna/pd_dfs/pairtypes_k_b0_cutoff_4.70.csv')
df_me1 = pd.read_csv('/Users/alayah361/fluctmatch/enm/cg_13/me1/bdna+bdna/pd_dfs/pairtypes_k_b0_cutoff_4.70.csv')
df_me2 = pd.read_csv('/Users/alayah361/fluctmatch/enm/cg_13/me2/bdna+bdna/pd_dfs/pairtypes_k_b0_cutoff_4.70.csv')
df_me3 = pd.read_csv('/Users/alayah361/fluctmatch/enm/cg_13/me3/bdna+bdna/pd_dfs/pairtypes_k_b0_cutoff_4.70.csv')
df_me12 = pd.read_csv('/Users/alayah361/fluctmatch/enm/cg_13/me12/bdna+bdna/pd_dfs/pairtypes_k_b0_cutoff_4.70.csv')
df_me23 = pd.read_csv('/Users/alayah361/fluctmatch/enm/cg_13/me23/bdna+bdna/pd_dfs/pairtypes_k_b0_cutoff_4.70.csv')
df_me123 = pd.read_csv('/Users/alayah361/fluctmatch/enm/cg_13/me123/bdna+bdna/pd_dfs/pairtypes_k_b0_cutoff_4.70.csv')
df_me1.tail(60)
columns = ['system', 'PairType', 'median', 'mean', 'std']
d_pair_sta = {column: [] for column in columns}
sys = ['nome', 'me1', 'me2', 'me3', 'me12', 'me23', 'me123']
dfs = [df_nome, df_me1, df_me2, df_me3, df_me12, df_me23, df_me123]
res = (4, 5, 6, 7, 8, 9, 10)
pair_types = ['same-P-P-0', 'same-P-P-1', 'same-P-S-0', 'same-P-S-1',
              'same-P-B-0', 'same-P-B-1', 'same-S-S-0', 'same-S-B-0',
              'same-S-B-1', 'Within-Ring']
for df, sysname in zip(dfs, sys):
    for pair_type in pair_types:
        k_values = df[df['PairType'] == pair_type]['k']
        d_pair_sta['system'].append(sysname)
        d_pair_sta['PairType'].append(pair_type)
        d_pair_sta['median'].append(round(np.median(k_values), 3))
        d_pair_sta['mean'].append(round(np.mean(k_values), 3))
        d_pair_sta['std'].append(round(np.std(k_values), 3))
pd.DataFrame(d_pair_sta).to_csv('/Users/alayah361/fluctmatch/enm/cg_13/me1/bdna+bdna/pd_dfs/pairtypes_statis.csv', index=False)
```
_This notebook contains code and comments from Section 2.2 and 2.3 of the book [Ensemble Methods for Machine Learning](https://www.manning.com/books/ensemble-methods-for-machine-learning). Please see the book for additional details on this topic. This notebook and code are released under the [MIT license](https://github.com/gkunapuli/ensemble-methods-notebooks/blob/master/LICENSE)._
---
## 2.2 Bagging
### 2.2.1 Implementing our own Bagging classifier
We will implement our own version of bagging to understand its internals, after which we look at how to use scikit-learn's bagging implementation.
**Listing 2.1**: Bagging with Decision Trees: Training
```
import numpy as np
from sklearn.tree import DecisionTreeClassifier
def bagging_fit(X, y, n_estimators, max_depth=5, max_samples=200):
    n_examples = len(y)
    estimators = [DecisionTreeClassifier(max_depth=max_depth)
                  for _ in range(n_estimators)]
    for tree in estimators:
        bag = np.random.choice(n_examples, max_samples, replace=True)
        tree.fit(X[bag, :], y[bag])
    return estimators
```
This function will return a list of [``DecisionTreeClassifier``](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html) objects. We can use this ensemble for prediction by first obtaining the individual predictions and then aggregating them (through majority voting).
**Listing 2.2**: Bagging with Decision Trees: Prediction
```
from scipy.stats import mode
def bagging_predict(X, estimators):
    all_predictions = np.array([tree.predict(X) for tree in estimators])
    ypred, _ = mode(all_predictions, axis=0)
    return np.squeeze(ypred)
```
Let's test this on a 2D synthetic dataset. We train a bagging ensemble of 500 decision trees, each of maximum depth 12, on bootstrap samples of size 200.
```
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
X, y = make_moons(n_samples=300, noise=.25, random_state=0)
Xtrn, Xtst, ytrn, ytst = train_test_split(X, y, test_size=0.33)
bag_ens = bagging_fit(Xtrn, ytrn, n_estimators=500,
max_depth=12, max_samples=200)
ypred = bagging_predict(Xtst, bag_ens)
print(accuracy_score(ytst, ypred))
ensembleAcc = accuracy_score(ytst, ypred)
print('Bagging: Holdout accuracy = {0:4.2f}%.'.format(ensembleAcc * 100))
tree = DecisionTreeClassifier(max_depth=12)
ypred_single = tree.fit(Xtrn, ytrn).predict(Xtst)
treeAcc = accuracy_score(ytst, ypred_single)
print('Single Decision Tree: Holdout test accuracy = {0:4.2f}%.'.format(treeAcc * 100))
```
We can visualize the difference between the bagging classifier and a single decision tree.
```
%matplotlib inline
import matplotlib.pyplot as plt
from visualization import plot_2d_classifier
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(8, 4))
title = 'Single Decision Tree (acc = {0:4.2f}%)'.format(treeAcc*100)
plot_2d_classifier(ax[0], X, y, colormap='RdBu', alpha=0.3,
predict_function=tree.predict,
xlabel='$x_1$', ylabel='$x_2$', title=title)
title = 'Bagging Ensemble (acc = {0:4.2f}%)'.format(ensembleAcc*100)
plot_2d_classifier(ax[1], X, y, colormap='RdBu', alpha=0.3,
predict_function=bagging_predict, predict_args=(bag_ens),
xlabel='$x_1$', ylabel='$x_2$', title=title)
fig.tight_layout()
plt.savefig('./figures/CH02_F04_Kunapuli.png', format='png', dpi=300, bbox_inches='tight');
```
---
### 2.2.3 Bagging with ``scikit-learn``
``scikit-learn``'s [``BaggingClassifier``](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.BaggingClassifier.html) can be used to train a bagging ensemble for classification. It supports many different kinds of base estimators, though in the example below, we use ``DecisionTreeClassifier`` as the base estimator.
**Listing 2.3**: Bagging with ``scikit-learn``
```
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
base_estimator = DecisionTreeClassifier(max_depth=10)
bag_ens = BaggingClassifier(base_estimator=base_estimator, n_estimators=500,
max_samples=100, oob_score=True)
bag_ens.fit(Xtrn, ytrn)
ypred = bag_ens.predict(Xtst)
```
``BaggingClassifier`` supports out-of-bag evaluation and will return the oob accuracy if we set ``oob_score=True``, as we have done above. We have also held out a test set ourselves, with which we can compute another estimate of this model's generalization performance. The two estimates are pretty close together, as we expect!
```
bag_ens.oob_score_
accuracy_score(ytst, ypred)
```
We can visualize the smoothing behavior of the ``BaggingClassifier`` by comparing its decision boundary to its component base ``DecisionTreeClassifiers``.
```
%matplotlib inline
fig, ax = plt.subplots(nrows=2, ncols=3, figsize=(12, 8))
trees_to_plot = np.random.choice(500, 5, replace=True)
title = 'Bagging Ensemble (acc = {0:4.2f}%)'.format(accuracy_score(ytst, ypred)*100)
plot_2d_classifier(ax[0, 0], X, y, colormap='RdBu', alpha=0.3,
                   predict_function=bag_ens.predict,
                   xlabel='$x_1$', ylabel='$x_2$', title=title)
for i in range(5):
    r, c = np.divmod(i + 1, 3)  # Get the row and column index of the subplot
    j = trees_to_plot[i]
    tst_acc_clf = accuracy_score(ytst, bag_ens[j].predict(Xtst))
    bag = bag_ens.estimators_samples_[j]
    X_bag = X[bag, :]
    y_bag = y[bag]
    title = 'Decision Tree {1} (acc = {0:4.2f}%)'.format(tst_acc_clf*100, j+1)
    plot_2d_classifier(ax[r, c], X, y, colormap='RdBu', alpha=0.3,
                       predict_function=bag_ens[j].predict,
                       xlabel='$x_1$', ylabel='$x_2$', title=title)
fig.tight_layout()
plt.savefig('./figures/CH02_F05_Kunapuli.png', format='png', dpi=300, bbox_inches='tight');
```
---
### 2.2.4 Faster Training with Parallelization
``BaggingClassifier`` supports speeding up both training and prediction through the [``n_jobs``](https://scikit-learn.org/stable/glossary.html#term-n-jobs) parameter. By default, this parameter is set to ``1`` and bagging will run sequentially. Alternately, you can specify the number of concurrent processes ``BaggingClassifier`` should use by setting ``n_jobs``.
The experiment below compares the training efficiency of sequential bagging (``n_jobs=1``) with parallelized bagging (``n_jobs=-1``) on a machine with 6 cores. Bagging can be effectively parallelized, and the resulting gains in training time can be significant.
**CAUTION: The experiment below runs slowly! Pickle files from a previous run are included for quick plotting.**
```
import time
import os
import pickle
# See if the result file for this experiment already exists, and if not, rerun and save a new set of results
if not os.path.exists('./data/SeqentialVsParallelBagging.pickle'):
    n_estimator_range = np.arange(50, 525, 50, dtype=int)
    n_range = len(n_estimator_range)
    n_runs = 10
    run_time_seq = np.zeros((n_runs, n_range))
    run_time_par = np.zeros((n_runs, n_range))
    base_estimator = DecisionTreeClassifier(max_depth=5)

    for r in range(n_runs):
        # Split the data randomly into training and test for this run
        X_trn, X_tst, y_trn, y_tst = train_test_split(X, y, test_size=100)

        # Learn and evaluate this train/test split for this run with sequential bagging
        for i, n_estimators in enumerate(n_estimator_range):
            start = time.time()
            bag_ens = BaggingClassifier(base_estimator=base_estimator, n_estimators=n_estimators,
                                        max_samples=100, oob_score=True, n_jobs=1)
            bag_ens.fit(X_trn, y_trn)
            run_time_seq[r, i] = time.time() - start

        # Learn and evaluate this train/test split for this run with parallel bagging
        for i, n_estimators in enumerate(n_estimator_range):
            start = time.time()
            bag_ens = BaggingClassifier(base_estimator=base_estimator, n_estimators=n_estimators,
                                        max_samples=100, oob_score=True, n_jobs=-1)
            bag_ens.fit(X_trn, y_trn)
            run_time_par[r, i] = time.time() - start

    results = (run_time_seq, run_time_par)
    with open('./data/SeqentialVsParallelBagging.pickle', 'wb') as result_file:
        pickle.dump(results, result_file)
else:
    with open('./data/SeqentialVsParallelBagging.pickle', 'rb') as result_file:
        (run_time_seq, run_time_par) = pickle.load(result_file)
```
Once the sequential vs. parallel results have been loaded/run, plot them.
```
%matplotlib inline
n_estimator_range = np.arange(50, 525, 50, dtype=int)
run_time_seq_adj = np.copy(run_time_seq)
run_time_seq_adj[run_time_seq > 0.5] = np.nan
run_time_seq_mean = np.nanmean(run_time_seq_adj, axis=0)
run_time_par_adj = np.copy(run_time_par)
run_time_par_adj[run_time_par > 0.3] = np.nan
run_time_par_mean = np.nanmean(run_time_par_adj, axis=0)
fig = plt.figure(figsize=(4, 4))
plt.plot(n_estimator_range, run_time_seq_mean, linewidth=3)
plt.plot(n_estimator_range[1:], run_time_par_mean[1:], linewidth=3, linestyle='--')
plt.ylabel('Run Time (sec.)', fontsize=16)
plt.xlabel('Number of estimators', fontsize=16)
plt.legend(['Sequential Bagging', 'Parallel Bagging'], fontsize=12);
fig.tight_layout()
plt.savefig('./figures/CH02_F06_Kunapuli.png', format='png', dpi=300, bbox_inches='tight');
```
---
## 2.3 Random Forest
Using ``scikit-learn``'s [``RandomForestClassifier``](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html).
**Listing 2.4**: Random Forest with ``scikit-learn``
```
from sklearn.ensemble import RandomForestClassifier

rf_ens = RandomForestClassifier(n_estimators=500, max_depth=10,
                                oob_score=True, n_jobs=-1)
rf_ens.fit(Xtrn, ytrn)
ypred = rf_ens.predict(Xtst)

%matplotlib inline
fig, ax = plt.subplots(nrows=2, ncols=3, figsize=(12, 8))
trees_to_plot = np.random.choice(500, 5, replace=True)
title = 'Random Forest (acc = {0:4.2f}%)'.format(accuracy_score(ytst, ypred)*100)
plot_2d_classifier(ax[0, 0], X, y, colormap='RdBu', alpha=0.3,
                   predict_function=rf_ens.predict,
                   xlabel='$x_1$', ylabel='$x_2$', title=title)
for i in range(5):
    r, c = np.divmod(i + 1, 3)  # Get the row and column index of the subplot
    j = trees_to_plot[i]
    # Score the same randomized tree that is plotted below
    tst_acc_clf = accuracy_score(ytst, rf_ens[j].predict(Xtst))
    title = 'Randomized Tree {1} (acc = {0:4.2f}%)'.format(tst_acc_clf*100, j+1)
    plot_2d_classifier(ax[r, c], X, y, colormap='RdBu', alpha=0.3,
                       predict_function=rf_ens[j].predict,
                       xlabel='$x_1$', ylabel='$x_2$', title=title)
fig.tight_layout()
plt.savefig('./figures/CH02_F08_Kunapuli.png', format='png', dpi=300, bbox_inches='tight');
```
``scikit-learn``'s ``RandomForestClassifier`` can also rank features by their importance. Feature importances can be extracted from the learned ``RandomForestClassifier``'s [``feature_importances_``](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklearn.ensemble.RandomForestClassifier.feature_importances_) attribute. This is computed by adding up how much each feature decreases the overall [Gini impurity](https://en.wikipedia.org/wiki/Decision_tree_learning#Gini_impurity) criterion during training. Features that decrease the impurity more will have higher feature importances.
```
for i, score in enumerate(rf_ens.feature_importances_):
    print('Feature x{0}: {1:6.5f}'.format(i, score))
```
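As a sanity check, the ensemble-level importances should roughly match the average of the per-tree Gini-based importances (a hedged sketch; exact agreement depends on scikit-learn's internal normalization):
```
import numpy as np

# Average the normalized per-tree importances and compare with the forest's attribute
per_tree = np.array([tree.feature_importances_ for tree in rf_ens.estimators_])
print(per_tree.mean(axis=0))
print(rf_ens.feature_importances_)
```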
<a href="https://colab.research.google.com/github/mani2106/Competition-Notebooks/blob/master/Feature_Engineering_edelweiss.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Connecting and importing data
```
from google.colab import drive
drive.mount('/content/gdrive')
!pip install -U graphviz pandas featuretools
import pandas as pd
import os
import featuretools as ft
path = 'gdrive/My Drive/Comp data'
cust_data = pd.read_excel(os.path.join(path, 'Customers_31JAN2019.xlsx'))
em_data = pd.read_excel(os.path.join(path, 'RF_Final_Data.xlsx'))
lm_data = pd.read_excel(os.path.join(path, 'LMS_31JAN2019.xlsx'))
lm_data.dropna(subset = ['CUSTOMERID'], inplace=True)
lm_data.CUSTOMERID = lm_data.CUSTOMERID.astype('int64')
```
# Setting variable types to facilitate feature engineering
Most object __(string)__ columns can be inferred as categorical in featuretools.
```
# get lists of string columns across the datasets
cust_cat = cust_data.select_dtypes(include=['object']).columns.tolist()
cust_cat.extend(['BRANCH_PINCODE', 'CUST_CONSTTYPE_ID', 'CUST_CATEGORYID'])

lm_cat = lm_data.select_dtypes(include=['object']).columns.tolist()
lm_cat.extend(['MOB', 'SCHEMEID'])

em_cat = list(
    set(em_data.select_dtypes(include=['object']).columns.tolist())
    -
    # excluding these because we may generate text features from them
    # eg. count of words
    set(['Preprocessed_EmailBody', 'Preprocessed_Subject',
         'Preprocessed_EmailBody_proc'])
)
# This column will be inferred as the datetime index, so no need to make it a category
em_cat.remove('Date')

# Construct dictionaries with column names as keys and desired data type as values
cust_vtypes = dict.fromkeys(cust_cat, ft.variable_types.Categorical)
lm_vtypes = dict.fromkeys(lm_cat, ft.variable_types.Categorical)
em_vtypes = dict.fromkeys(em_cat, ft.variable_types.Categorical)

em_text_types = dict.fromkeys(['Preprocessed_EmailBody', 'Preprocessed_Subject',
                               'Preprocessed_EmailBody_proc'], ft.variable_types.Text)
em_vtypes.update(em_text_types)
```
# Creating Entity Sets for feature engineering
```
es = ft.EntitySet()

# dropping empty columns inferred from our EDA
es = es.entity_from_dataframe(entity_id="customers",
                              dataframe=cust_data.drop(['PROFESSION',
                                                        'OCCUPATION'], axis=1),
                              index="CUSTOMERID",
                              variable_types=cust_vtypes)

es = es.entity_from_dataframe(entity_id="Email",
                              dataframe=em_data.drop(['Unnamed: 0'], axis=1),
                              index='TicketId',
                              time_index="Date",
                              variable_types=em_vtypes)

es = es.entity_from_dataframe(entity_id="Loan transaction data",
                              dataframe=lm_data,
                              index='t_id',
                              make_index=True,
                              time_index='AUTHORIZATIONDATE',
                              variable_types=lm_vtypes)

cols_to_seperate = ['CITY', 'PRODUCT', 'NPA_IN_LAST_MONTH', 'NPA_IN_CURRENT_MONTH',
                    'MOB', 'SCHEMEID', 'CUSTOMERID']
es = es.normalize_entity(base_entity_id="Loan transaction data",
                         new_entity_id="Loan data",
                         index="AGREEMENTID", make_time_index=True,
                         additional_variables=cols_to_seperate)
es
```
# Adding relationships between entities
```
rel1 = ft.Relationship(
    es['customers']['CUSTOMERID'], es['Loan data']['CUSTOMERID']
)
rel2 = ft.Relationship(
    es['customers']['CUSTOMERID'], es['Email']['Masked_CustomerID']
)
es = es.add_relationships([rel1, rel2])
es
```
# Relationship Plot
```
es.plot(to_file = os.path.join(path, "data.png"))
from featuretools.primitives import make_trans_primitive
from featuretools.variable_types import Numeric
import numpy as np  # needed for the log/sqrt transforms below

# Create two new functions for our two new primitives since some variables
# have large numbers
def Log(column):
    return np.log(column)

def Square_Root(column):
    return np.sqrt(column)

# Create the primitives
log_prim = make_trans_primitive(
    function=Log, input_types=[Numeric], return_type=Numeric)
square_root_prim = make_trans_primitive(
    function=Square_Root, input_types=[Numeric], return_type=Numeric)
```
Quoting from the competition description:
Foreclosure means repaying the outstanding loan amount in a __single payment__ instead of with EMIs while balance transfer is transferring outstanding Loan availed from one Bank / Financial Institution to another Bank / Financial Institution, usually on the grounds of better service, top-up on the existing loan, proximity of branch, saving on interest repayments, etc.
__Assumptions__:
A customer's foreclosure intention can be predicted if
1. He/she has paid more EMI in the recent past
2. The Tenor period has been reduced
3. Excess amount received is high
```
ft.primitives.list_primitives()
# # Difference between current and pre received amount
# RECEIVED_LARGE = ft.Feature(es['Loan transaction data']['EMI_RECEIVED_AMT']
#                             - es['Loan transaction data']['PRE_EMI_RECEIVED_AMT'])
# # Difference between current and last received amount
# INCREASED_RECEIVED = ft.Feature(es['Loan transaction data']['EMI_RECEIVED_AMT']
#                                 - es['Loan transaction data']['LAST_RECEIPT_AMOUNT'])
# TENOR_IND = ft.Feature(es['Loan transaction data']['CURRENT_TENOR']
#                        - es['Loan transaction data']['ORIGNAL_TENOR'])

# Excess paid amount
EXCESS_LARGE = ft.Feature(es['Loan transaction data']['EXCESS_AVAILABLE']) > 1.53e4

# # Threshold for Recieved amounts
# RECEIVED_LARGE_TRUE = ft.Feature(RECEIVED_LARGE) > 2e4
# INCREASED_RECEIVED_TRUE = ft.Feature(INCREASED_RECEIVED) > 100

seed_features = [EXCESS_LARGE]

agg_primitives = [
    'std', 'mean', 'count'
]

trans_primitives = ['num_words', 'num_characters']
trans_primitives.append(log_prim)
trans_primitives.append(square_root_prim)

ignore_variables = {
    "Loan data": "first_Loan transaction data_time"
}
```
# Feature Engineering
```
# Let's see what we can generate from the data and relationships we have
# agg_primitives = ["std", "skew", "mean","median", "count", 'mode'],
# trans_primitives = "num_words cum_mean days_since percentile num_characters".split(),
feature_defs = ft.dfs(entityset=es, target_entity="Loan data", max_depth=2,
                      agg_primitives=agg_primitives,
                      trans_primitives=trans_primitives,
                      seed_features=seed_features,
                      ignore_variables=ignore_variables,
                      features_only=True, verbose=True)
# Names of features
feature_defs
```
Let's __generate__ the features
```
feature_matrix, _ = ft.dfs(entityset=es, target_entity="Loan data",
                           agg_primitives=agg_primitives,
                           trans_primitives=trans_primitives,
                           seed_features=seed_features,
                           ignore_variables=ignore_variables,
                           max_depth=1, features_only=False,
                           verbose=True)
```
# Persist the feature matrix for Loan data
```
feature_matrix.to_csv(os.path.join(path, 'feature_agg.csv'))
```
# The Object Detection Dataset
:label:`sec_object-detection-dataset`
There is no small dataset such as MNIST and Fashion-MNIST in the field of object detection.
In order to quickly demonstrate object detection models,
[**we collected and labeled a small dataset**].
First, we took photos of free bananas from our office
and generated
1000 banana images with different rotations and sizes.
Then we placed each banana image
at a random position on some background image.
In the end, we labeled bounding boxes for those bananas on the images.
## [**Downloading the Dataset**]
The banana detection dataset with all the image and
csv label files can be downloaded directly from the Internet.
```
%matplotlib inline
import os
import pandas as pd
import torch
import torchvision
from d2l import torch as d2l
#@save
d2l.DATA_HUB['banana-detection'] = (
d2l.DATA_URL + 'banana-detection.zip',
'5de26c8fce5ccdea9f91267273464dc968d20d72')
```
## Reading the Dataset
We are going to [**read the banana detection dataset**] in the `read_data_bananas`
function below.
The dataset includes a csv file for
object class labels and
ground-truth bounding box coordinates
at the upper-left and lower-right corners.
```
#@save
def read_data_bananas(is_train=True):
    """Read the banana detection dataset images and labels."""
    data_dir = d2l.download_extract('banana-detection')
    csv_fname = os.path.join(data_dir, 'bananas_train' if is_train
                             else 'bananas_val', 'label.csv')
    csv_data = pd.read_csv(csv_fname)
    csv_data = csv_data.set_index('img_name')
    images, targets = [], []
    for img_name, target in csv_data.iterrows():
        images.append(torchvision.io.read_image(
            os.path.join(data_dir, 'bananas_train' if is_train else
                         'bananas_val', 'images', f'{img_name}')))
        # Here `target` contains (class, upper-left x, upper-left y,
        # lower-right x, lower-right y), where all the images have the same
        # banana class (index 0)
        targets.append(list(target))
    return images, torch.tensor(targets).unsqueeze(1) / 256
```
By using the `read_data_bananas` function to read images and labels,
the following `BananasDataset` class
will allow us to [**create a customized `Dataset` instance**]
for loading the banana detection dataset.
```
#@save
class BananasDataset(torch.utils.data.Dataset):
    """A customized dataset to load the banana detection dataset."""
    def __init__(self, is_train):
        self.features, self.labels = read_data_bananas(is_train)
        print('read ' + str(len(self.features)) + (f' training examples' if
              is_train else f' validation examples'))

    def __getitem__(self, idx):
        return (self.features[idx].float(), self.labels[idx])

    def __len__(self):
        return len(self.features)
```
Finally, we define
the `load_data_bananas` function to [**return two
data iterator instances for both the training and test sets.**]
For the test dataset,
there is no need to read it in random order.
```
#@save
def load_data_bananas(batch_size):
    """Load the banana detection dataset."""
    train_iter = torch.utils.data.DataLoader(BananasDataset(is_train=True),
                                             batch_size, shuffle=True)
    val_iter = torch.utils.data.DataLoader(BananasDataset(is_train=False),
                                           batch_size)
    return train_iter, val_iter
```
Let us [**read a minibatch and print the shapes of
both images and labels**] in this minibatch.
The shape of the image minibatch,
(batch size, number of channels, height, width),
looks familiar:
it is the same as in our earlier image classification tasks.
The shape of the label minibatch is
(batch size, $m$, 5),
where $m$ is the largest possible number of bounding boxes
that any image has in the dataset.
Although computation in minibatches is more efficient,
it requires that all the image examples
contain the same number of bounding boxes to form a minibatch via concatenation.
In general,
images may have a varying number of bounding boxes;
thus,
images with fewer than $m$ bounding boxes
will be padded with illegal bounding boxes
until $m$ is reached.
Then
the label of each bounding box is represented by an array of length 5.
The first element in the array is the class of the object in the bounding box,
where -1 indicates an illegal bounding box for padding.
The remaining four elements of the array are
the ($x$, $y$)-coordinate values
of the upper-left corner and the lower-right corner
of the bounding box (the range is between 0 and 1).
For the banana dataset,
since there is only one bounding box on each image,
we have $m=1$.
```
batch_size, edge_size = 32, 256
train_iter, _ = load_data_bananas(batch_size)
batch = next(iter(train_iter))
batch[0].shape, batch[1].shape
```
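As a side note, if a dataset did contain images with fewer than $m$ bounding boxes, the padded entries could be dropped by masking on the class value of $-1$. Below is a minimal sketch using the minibatch read above (for the banana dataset $m=1$, so nothing is actually removed).
```
# Drop padded (illegal) bounding boxes for one image's label tensor.
# `label` has shape (m, 5); padded rows have class -1 in the first column.
label = batch[1][0]          # labels of the first image in the minibatch
valid = label[:, 0] >= 0     # mask selecting genuine (non-padded) boxes
real_boxes = label[valid]    # keep only the real bounding boxes
print(real_boxes.shape)
```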
## [**Demonstration**]
Let us demonstrate ten images with their labeled ground-truth bounding boxes.
We can see that the rotations, sizes, and positions of bananas vary across all these images.
Of course, this is just a simple artificial dataset.
In practice, real-world datasets are usually much more complicated.
```
imgs = (batch[0][0:10].permute(0, 2, 3, 1)) / 255
axes = d2l.show_images(imgs, 2, 5, scale=2)
for ax, label in zip(axes, batch[1][0:10]):
d2l.show_bboxes(ax, [label[0][1:5] * edge_size], colors=['w'])
```
## Summary
* The banana detection dataset we collected can be used to demonstrate object detection models.
* The data loading for object detection is similar to that for image classification. However, in object detection the labels also contain information of ground-truth bounding boxes, which is missing in image classification.
## Exercises
1. Demonstrate other images with ground-truth bounding boxes in the banana detection dataset. How do they differ with respect to bounding boxes and objects?
1. Say that we want to apply data augmentation, such as random cropping, to object detection. How can it be different from that in image classification? Hint: what if a cropped image only contains a small portion of an object?
[Discussions](https://discuss.d2l.ai/t/1608)
```
import numpy as np
import matplotlib.pyplot as plt
```
#### Creating Dataset
A simple dataset using numpy arrays
```
x_train = np.array ([[4.7], [2.4], [7.5], [7.1], [4.3], [7.816],
[8.9], [5.2], [8.59], [2.1], [8] ,
[10], [4.5], [6], [4]],
dtype = np.float32)
y_train = np.array ([[2.6], [1.6], [3.09], [2.4], [2.4], [3.357],
[2.6], [1.96], [3.53], [1.76], [3.2] ,
[3.5], [1.6], [2.5], [2.2]],
dtype = np.float32)
```
#### View the data
There seems to be some relationship which can be plotted between x_train and y_train. A regression line can be drawn to represent the relationship
```
plt.figure(figsize=(12, 8))
plt.scatter(x_train, y_train, label='Original data', s=250, c='g')
plt.legend()
plt.show()
import torch
```
#### Converting data to pytorch tensors
By default, `requires_grad = False`.
```
X_train = torch.from_numpy(x_train)
Y_train = torch.from_numpy(y_train)
print('requires_grad for X_train: ', X_train.requires_grad)
print('requires_grad for Y_train: ', Y_train.requires_grad)
```
#### Set the details for our neural network
Input, output, and hidden layer sizes; the learning rate is set just before the training loop
```
input_size = 1
hidden_size = 1
output_size = 1
```
#### Create random Tensors for weights
Setting requires_grad=True indicates that we want to compute gradients with respect to these Tensors during the backward pass
```
w1 = torch.rand(input_size,
hidden_size,
requires_grad=True)
w1.shape
w2 = torch.rand(hidden_size,
output_size,
requires_grad=True)
w2.shape
```
## Training
#### Forward Pass:
* Predict Y from the input data X
* Compute the matrix product of `X_train` and `w1` with the `.mm` function; the activation function is the identity
* Multiply the result by the second weight matrix `w2`
#### Finding the Loss:
* Square the difference between `Y_train` and the prediction and sum it up, similar to `nn.MSELoss` with `reduction='sum'`
#### The loss.backward() call:
* The backward pass computes the gradient of the loss with respect to all Tensors with `requires_grad=True`.
* After this call, `w1.grad` and `w2.grad` will be Tensors holding the gradient of the loss with respect to `w1` and `w2` respectively.
#### Manually updating the weights
* The weights have `requires_grad=True`, but the update itself should not be tracked by autograd, so it is wrapped in `torch.no_grad()`
* Subtract the learning rate times the gradient from each weight
* Manually zero the weight gradients after updating the weights
```
learning_rate = 1e-6
# Start at 10. Change this to 100, 1000 and 3000 and run the code all the way to the plot at the bottom
for iter in range(1, 10):
y_pred = X_train.mm(w1).mm(w2)
loss = (y_pred - Y_train).pow(2).sum()
if iter % 50 ==0:
print(iter, loss.item())
loss.backward()
with torch.no_grad():
w1 -= learning_rate * w1.grad
w2 -= learning_rate * w2.grad
w1.grad.zero_()
w2.grad.zero_()
print ('w1: ', w1)
print ('w2: ', w2)
```
#### Checking the output
Converting data into a tensor
```
x_train_tensor = torch.from_numpy(x_train)
x_train_tensor
```
#### Get the predicted values using the weights
Using final weights calculated from our training in order to get the predicted values
```
predicted_in_tensor = x_train_tensor.mm(w1).mm(w2)
predicted_in_tensor
```
#### Convert the prediction to a numpy array
This will be used to plot the regression line in a plot
```
predicted = predicted_in_tensor.detach().numpy()
predicted
```
#### Plotting
With enough iterations (see the comment in the training cell), our training produces a reasonably accurate regression line
```
plt.figure(figsize=(12, 8))
plt.scatter(x_train, y_train, label = 'Original data', s=250, c='g')
plt.plot(x_train, predicted, label = 'Fitted line ')
plt.legend()
plt.show()
```
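For comparison, the same one-feature linear fit can be written with PyTorch's higher-level `nn.Linear` module and an optimizer instead of manual weight updates. This is only an illustrative sketch, not part of the exercise above; the learning rate and number of epochs are assumptions chosen for this example, and it reuses the `X_train` and `Y_train` tensors defined earlier.
```
import torch
import torch.nn as nn

# A hypothetical higher-level version of the same one-feature linear fit
model = nn.Linear(1, 1)                          # input_size = 1, output_size = 1
criterion = nn.MSELoss(reduction='sum')          # summed squared error, as in the manual loop
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)  # learning rate is an assumption

for epoch in range(1000):
    optimizer.zero_grad()                        # reset gradients
    loss = criterion(model(X_train), Y_train)    # forward pass and loss
    loss.backward()                              # backward pass
    optimizer.step()                             # update weight and bias

print(model.weight.item(), model.bias.item())    # fitted slope and intercept
```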
<a href="https://colab.research.google.com/github/MattiaVerticchio/PersonalProjects/blob/master/TransactionPrediction/TransactionPrediction.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Santander Customer Transaction Prediction
> **Abstract**
>
> The objective of this notebook is to predict customer behavior. The problem is a binary classification, where we try to predict if a customer will (`1`) or won’t (`0`) make a transaction. The dataset contains 200 real anonymized features and one boolean target. We’ll use LightGBM as an ensemble learning model. The metric for evaluation is the Area Under the Receiver Operating Characteristic Curve (ROC-AUC), and the final cross-validated score is ~0.90.
## Introduction
To tune the classification model, we’ll use `optuna`, which is a hyperparameter optimization framework. The model we’ll train is Microsoft’s LightGBM, a gradient boosting decision tree learner, integrated with `optuna`. Let’s first install the packages.
```
%%bash
pip install -q optuna
```
Once installed, we’ll retrieve the dataset from the source. Here we’ll use Kaggle APIs to download the dataset from the Santander Customer Transaction Prediction competition as a `zip` file.
The `JSON` file contains a unique individual `username` and `key`, retrievable from each Kaggle account settings.
```
%%bash
# Set up Kaggle APIs
mkdir ~/.kaggle/
touch ~/.kaggle/kaggle.json
chmod 600 ~/.kaggle/kaggle.json
echo '{"username": "mattiavert", "key": "Your API key"}' >> ~/.kaggle/kaggle.json
# Download the file
kaggle competitions download -c santander-customer-transaction-prediction
```
### Preprocessing
Let’s import the installed libraries and Pandas to manage the data.
```
import pandas as pd # Data management
import optuna.integration.lightgbm as lgb # Hyperparameter optimization
```
Here we’ll read the dataset and separate features and target.
```
X_train = pd.read_csv('train.csv.zip', index_col='ID_code') # Training data
X_test = pd.read_csv('test.csv.zip', index_col='ID_code') # Testing data
y_train = X_train[['target']].astype('bool') # Separating features and target
X_train = X_train.drop(columns='target')
X = X_train.append(X_test) # Matrix for all the features
```
On Google Colaboratory, we cannot widely explore feature augmentation with a dataset of this size. It could be useful to explore different techniques; however, due to memory limits, I will only add a few aggregated columns to the `X` DataFrame.
```
cols = X.columns.values
X['sum'] = X[cols].sum(axis=1) # Sum of all the values
X['min'] = X[cols].min(axis=1) # Minimum value in the sample
X['max'] = X[cols].max(axis=1) # Maximum value in the sample
X['mean'] = X[cols].mean(axis=1) # Mean sample value
X['std'] = X[cols].std(axis=1) # Standard deviation of the sample
X['var'] = X[cols].var(axis=1) # Variance of the sample
X['skew'] = X[cols].skew(axis=1) # Skewness of the sample
X['kurt'] = X[cols].kurtosis(axis=1) # Kurtosis of each sample
X['med'] = X[cols].median(axis=1) # Median sample value
```
Now let’s create the train and test sets.
```
dtrain = lgb.Dataset(X.iloc[0:200000], label=y_train) # Training data
X_test = X.iloc[200000:400000] # Testing data
```
## Model building
The learning model we’ll use is Microsoft’s LightGBM, a fast gradient boosting decision tree implementation, wrapped by `optuna`, as an optimizer for hyperparameters.
The hyperparameters are optimized using a stepwise process that follows a particular, well-established order:
- `feature_fraction`
- `num_leaves`
- `bagging`
- `feature_fraction` (a second, finer pass)
- `regularization_factors`
- `min_data_in_leaf`
Firstly, we define a few parameters for the model.
```
params = { # Dictionary of starting parameters
"objective": "binary", # Binary classification
"metric": "auc", # Used in competition
"verbosity": -1, # Stay silent
"boosting_type": "gbdt", # Gradient Boosting Decision Tree
"max_bin": 63, # Faster training on GPU
"num_threads": 2, # Use all physical cores of CPU
}
```
Then we create a `LightGBMTunerCV` object. We perform a 5-fold Stratified Cross-Validation to check the accuracy of the model. I set a very high `num_boost_round` and enabled early stopping to avoid overfitting on the training data, since that could lead to poor generalization on unseen data. Patience for early stopping is set at 100 rounds.
```
tuner = lgb.LightGBMTunerCV( # Tuner object with Stratified 5-Fold CV
params, # GBM settings
dtrain, # Training dataset
num_boost_round=999999, # Set max iterations
nfold=5, # Number of CV folds
stratified=True, # Stratified samples
early_stopping_rounds=100, # Callback for CV's AUC
verbose_eval=False # Stay silent
)
```
### Hyperparameters tuning
`optuna` provides calls to perform the search, let’s execute them in the established order.
```
tuner.run()
```
Here are the results.
- `feature_fraction = 0.48`
- `num_leaves = 3`
- `bagging_fraction = 0.8662505913776934`
- `bagging_freq = 7`
- `lambda_l1 = 2.6736262550429385e-08`
- `lambda_l2 = 0.0013546195528208944`
- `min_child_samples = 50`
The next step is to find a good `num_boost_round` via cross-validation to retrain the final model without overfitting. Here I set the hyperparameters we found and start training with 10-fold Stratified Cross-Validation with early stopping. This time the patience threshold is set to 20.
```
# Dictionary of tuned LightGBM parameters
params = {
"objective": "binary", # Binary classification
"metric": "auc", # Used in competition
"verbosity": -1, # Stay silent
"boosting_type": "gbdt", # Gradient Boosting Decision Tree
"max_bin": 63, # Faster training on GPU
"num_threads": 2, # Use all physical cores of CPU
# Adding optimized hyperparameters
"feature_fraction": 0.48,
"num_leaves": 3,
"bagging_fraction" : 0.8662505913776934,
"bagging_freq" : 7,
"lambda_l1": 2.6736262550429385e-08,
"lambda_l2": 0.0013546195528208944,
"min_child_samples": 50
}
```
We now create and train the object with the found settings.
```
finalModel = lgb.cv( # Training the cross-validated model
params, # Loading the parameters
dtrain, # Training dataset
num_boost_round=999999, # Setting a lot of boosting rounds
early_stopping_rounds=20, # Stop training after 20 non-productive rounds
nfold=10, # Cross-validation folds
stratified=True, # Stratified sampling
)
```
## Results & Conclusions
```
CV_results = pd.DataFrame(finalModel) # Saving iterations
best_iteration = CV_results['auc-mean'].idxmax() # Best iteration
CV_results.loc[best_iteration] # Best CV ROC-AUC
```
The model scored ~0.90 as cross-validated metric for ROC-AUC.
This particular experiment focused on hyperparameter tuning, but what could be done to further improve the scores of the whole model?
- Explore feature engineering by augmenting the available data with the methods described above.
- Feature interaction
- Feature ratio
- Polynomial combinations
- Trigonometric transforms
- Clustering
- We could implement an ensemble learning model to combine different models and stack/blend the results.
- Calibrate the model prediction probabilities.
### Finalize the model
At this point, we can train the final model on the whole dataset, using the optimized hyperparameters and the number of boosting rounds.
```
import lightgbm as lgb # Importing the official Microsoft LightGBM
model = lgb.train( # Training the final model
params, # Loading the parameters
dtrain, # Training dataset
    num_boost_round=best_iteration # Number of boosting rounds found via CV
)
```
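As a possible final step (not executed here), the trained booster could score the held-out test features and write a submission file. A minimal sketch, assuming the `model` and `X_test` objects defined above and the competition's `ID_code`/`target` column names; the output file name is an assumption.
```
predictions = model.predict(X_test)                # predicted probabilities for the test samples
submission = pd.DataFrame({'ID_code': X_test.index,  # sample identifiers (the DataFrame index)
                           'target': predictions})
submission.to_csv('submission.csv', index=False)   # hypothetical output file name
```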
### [Go back to index >](https://github.com/MattiaVerticchio/PersonalProjects/blob/master/README.md)
# Data Analysis and Machine Learning: Getting started, our first data and Machine Learning encounters
**Morten Hjorth-Jensen**, Department of Physics, University of Oslo and Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University
Date: **Aug 14, 2019**
Copyright 1999-2019, Morten Hjorth-Jensen. Released under CC Attribution-NonCommercial 4.0 license
## Introduction
Our emphasis throughout this series of lectures
is on understanding the mathematical aspects of
different algorithms used in the fields of data analysis and machine learning.
However, where possible we will emphasize the
importance of using available software. We start thus with a hands-on
and top-down approach to machine learning. The aim is thus to start with
relevant data or data we have produced
and use these to introduce statistical data analysis
concepts and machine learning algorithms before we delve into the
algorithms themselves. The examples we will use in the beginning, start with simple
polynomials with random noise added. We will use the Python
software package [Scikit-Learn](http://scikit-learn.org/stable/) and
introduce various machine learning algorithms to make fits of
the data and predictions. We move thereafter to more interesting
cases such as data from say experiments (below we will look at experimental nuclear binding energies as an example).
These are examples where we can easily set up the data and
then use machine learning algorithms included in for example
**Scikit-Learn**.
These examples will serve the purpose of getting us
started. Furthermore, they allow us to catch more than two birds with
one stone. They will allow us to bring in some programming-specific
topics and tools as well as showing the power of various Python
libraries for machine learning and statistical data analysis.
Here, we will mainly focus on two
specific Python packages for Machine Learning, Scikit-Learn and
Tensorflow (see below for links etc). Moreover, the examples we
introduce will serve as inputs to many of our discussions later, as
well as allowing you to set up models and produce your own data and
get started with programming.
## What is Machine Learning?
Statistics, data science and machine learning form important fields of
research in modern science. They describe how to learn and make
predictions from data, as well as allowing us to extract important
correlations about physical processes and the underlying laws of motion
in large data sets. The latter, big data sets, appear frequently in
essentially all disciplines, from the traditional Science, Technology,
Mathematics and Engineering fields to Life Science, Law, education
research, the Humanities and the Social Sciences.
It has become more
and more common to see research projects on big data in for example
the Social Sciences where extracting patterns from complicated survey
data is one of many research directions. Having a solid grasp of data
analysis and machine learning is thus becoming central to scientific
computing in many fields, and competences and skills within the fields
of machine learning and scientific computing are nowadays strongly
requested by many potential employers. The latter cannot be
overstated, familiarity with machine learning has almost become a
prerequisite for many of the most exciting employment opportunities,
whether they are in bioinformatics, life science, physics or finance,
in the private or the public sector. This author has had several
students or met students who have been hired recently based on their
skills and competences in scientific computing and data science, often
with marginal knowledge of machine learning.
Machine learning is a subfield of computer science, and is closely
related to computational statistics. It evolved from the study of
pattern recognition in artificial intelligence (AI) research, and has
made contributions to AI tasks like computer vision, natural language
processing and speech recognition. Many of the methods we will study are also
strongly rooted in basic mathematics and physics research.
Ideally, machine learning represents the science of giving computers
the ability to learn without being explicitly programmed. The idea is
that there exist generic algorithms which can be used to find patterns
in a broad class of data sets without having to write code
specifically for each problem. The algorithm will build its own logic
based on the data. You should however always keep in mind that
machines and algorithms are to a large extent developed by humans. The
insights and knowledge we have about a specific system, play a central
role when we develop a specific machine learning algorithm.
Machine learning is an extremely rich field, in spite of its young
age. The increases we have seen during the last three decades in
computational capabilities have been followed by developments of
methods and techniques for analyzing and handling large data sets,
relying heavily on statistics, computer science and mathematics. The
field is rather new and developing rapidly. Popular software packages
written in Python for machine learning like
[Scikit-learn](http://scikit-learn.org/stable/),
[Tensorflow](https://www.tensorflow.org/),
[PyTorch](http://pytorch.org/) and [Keras](https://keras.io/), all
freely available at their respective GitHub sites, encompass
communities of developers in the thousands or more. And the number of
code developers and contributors keeps increasing. Not all the
algorithms and methods can be given a rigorous mathematical
justification, opening up thereby large rooms for experimenting and
trial and error and thereby exciting new developments. However, a
solid command of linear algebra, multivariate theory, probability
theory, statistical data analysis, understanding errors and Monte
Carlo methods are central elements in a proper understanding of many
of the algorithms and methods we will discuss.
## Types of Machine Learning
The approaches to machine learning are many, but are often split into
two main categories. In *supervised learning* we know the answer to a
problem, and let the computer deduce the logic behind it. On the other
hand, *unsupervised learning* is a method for finding patterns and
relationships in data sets without any prior knowledge of the system.
Some authors also operate with a third category, namely
*reinforcement learning*. This is a paradigm of learning inspired by
behavioral psychology, where learning is achieved by trial-and-error,
solely from rewards and punishment.
Another way to categorize machine learning tasks is to consider the
desired output of a system. Some of the most common tasks are:
* Classification: Outputs are divided into two or more classes. The goal is to produce a model that assigns inputs into one of these classes. An example is to identify digits based on pictures of hand-written ones. Classification is typically supervised learning.
* Regression: Finding a functional relationship between an input data set and a reference data set. The goal is to construct a function that maps input data to continuous output values.
* Clustering: Data are divided into groups with certain common traits, without knowing the different groups beforehand. It is thus a form of unsupervised learning.
The methods we cover have three main topics in common, irrespective of
whether we deal with supervised or unsupervised learning. The first
ingredient is normally our data set (which can be subdivided into
training and test data); the second item is a model, which is normally a
function of some parameters. The model reflects our knowledge of the system (or lack thereof). As an example, if we know that our data show a behavior similar to what would be predicted by a polynomial, fitting our data to a polynomial of some degree would then determine our model.
The last ingredient is a so-called **cost**
function which gives an estimate of how well our model
reproduces the data it is trained on.
At the heart of basically all ML algorithms there are so-called minimization algorithms; often we end up with various variants of **gradient** methods.
## Software and needed installations
We will make extensive use of Python as programming language and its
myriad of available libraries. You will find
Jupyter notebooks invaluable in your work. You can run **R**
codes in the Jupyter/IPython notebooks, with the immediate benefit of
visualizing your data. You can also use compiled languages like C++,
Rust, Julia, Fortran etc if you prefer. The focus in these lectures will be
on Python.
If you have Python installed (we strongly recommend Python3) and you feel
pretty familiar with installing different packages, we recommend that
you install the following Python packages via **pip** as
1. pip install numpy scipy matplotlib ipython scikit-learn mglearn sympy pandas pillow
For Python3, replace **pip** with **pip3**.
For OSX users we recommend, after having installed Xcode, installing
**brew**. Brew allows for a seamless installation of additional
software via for example
1. brew install python3
For Linux users, with its variety of distributions like for example the widely popular Ubuntu distribution,
you can use **pip** as well and simply install Python as
1. sudo apt-get install python3 (or python for python2.7)
etc etc.
## Python installers
If you don't want to perform these operations separately and venture
into the hassle of exploring how to set up dependencies and paths, we
recommend two widely used distributions which set up all relevant
dependencies for Python, namely
* [Anaconda](https://docs.anaconda.com/),
which is an open source
distribution of the Python and R programming languages for large-scale
data processing, predictive analytics, and scientific computing, that
aims to simplify package management and deployment. Package versions
are managed by the package management system **conda**.
* [Enthought canopy](https://www.enthought.com/product/canopy/)
is a Python distribution for scientific and analytic computing, available for free and under a commercial license.
Furthermore, [Google's Colab](https://colab.research.google.com/notebooks/welcome.ipynb) is a free Jupyter notebook environment that requires
no setup and runs entirely in the cloud. Try it out!
## Useful Python libraries
Here we list several useful Python libraries we strongly recommend (if you use anaconda many of these are already there)
* [NumPy](https://www.numpy.org/) is a highly popular library for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays
* [The pandas](https://pandas.pydata.org/) library provides high-performance, easy-to-use data structures and data analysis tools
* [Xarray](http://xarray.pydata.org/en/stable/) is a Python package that makes working with labelled multi-dimensional arrays simple, efficient, and fun!
* [Scipy](https://www.scipy.org/) (pronounced “Sigh Pie”) is a Python-based ecosystem of open-source software for mathematics, science, and engineering.
* [Matplotlib](https://matplotlib.org/) is a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms.
* [Autograd](https://github.com/HIPS/autograd) can automatically differentiate native Python and Numpy code. It can handle a large subset of Python's features, including loops, ifs, recursion and closures, and it can even take derivatives of derivatives of derivatives
* [SymPy](https://www.sympy.org/en/index.html) is a Python library for symbolic mathematics.
* [scikit-learn](https://scikit-learn.org/stable/) has simple and efficient tools for machine learning, data mining and data analysis
* [TensorFlow](https://www.tensorflow.org/) is a Python library for fast numerical computing created and released by Google
* [Keras](https://keras.io/) is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano
* And many more such as [pytorch](https://pytorch.org/), [Theano](https://pypi.org/project/Theano/) etc
## Installing R, C++, cython or Julia
You will also find it convenient to utilize **R**. We will mainly
use Python during our lectures and in various projects and exercises.
Those of you
already familiar with **R** should feel free to continue using **R**, keeping
however an eye on the parallel Python setups. Similarly, if you are a
Python aficionado, feel free to explore **R** as well. Jupyter/IPython
notebook allows you to run **R** codes interactively in your
browser. The software library **R** is really tailored for statistical data analysis
and allows for an easy usage of the tools and algorithms we will discuss in these
lectures.
To install **R** with Jupyter notebook
[follow the link here](https://mpacer.org/maths/r-kernel-for-ipython-notebook)
## Installing R, C++, cython, Numba etc
For the C++ aficionados, Jupyter/IPython notebook allows you also to
install C++ and run codes written in this language interactively in
the browser. Since we will emphasize writing many of the algorithms
yourself, you can thus opt for either Python or C++ (or Fortran or other compiled languages) as programming
languages.
To add more entropy, **cython** can also be used when running your
notebooks. It means that Python with the jupyter notebook
setup allows you to integrate widely popular software and tools for
scientific computing. Similarly, the
[Numba Python package](https://numba.pydata.org/) delivers increased performance
capabilities with minimal rewrites of your codes. With its
versatility, including symbolic operations, Python offers a unique
computational environment. Your jupyter notebook can easily be
converted into a nicely rendered **PDF** file or a Latex file for
further processing. For example, convert to latex as
    jupyter nbconvert filename.ipynb --to latex
And to add more versatility, the Python package [SymPy](http://www.sympy.org/en/index.html) is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) and is entirely written in Python.
Finally, if you wish to use the light mark-up language
[doconce](https://github.com/hplgit/doconce) you can convert a standard ascii text file into various HTML
formats, ipython notebooks, latex files, pdf files etc with minimal edits. These lectures were generated using **doconce**.
## Numpy examples and Important Matrix and vector handling packages
There are several central software libraries for linear algebra and eigenvalue problems. Several of the more
popular ones have been wrapped into other software packages like those from the widely used text **Numerical Recipes**. The original source codes in many of the available packages are often taken from the widely used
software package LAPACK, which follows two other popular packages
developed in the 1970s, namely EISPACK and LINPACK. We describe them briefly here.
* LINPACK: package for linear equations and least square problems.
* LAPACK: package for solving symmetric, unsymmetric and generalized eigenvalue problems. From LAPACK's website <http://www.netlib.org> it is possible to download for free all source codes from this library. Both C/C++ and Fortran versions are available.
* BLAS (I, II and III): (Basic Linear Algebra Subprograms) are routines that provide standard building blocks for performing basic vector and matrix operations. Blas I is vector operations, II vector-matrix operations and III matrix-matrix operations. Highly parallelized and efficient codes, all available for download from <http://www.netlib.org>.
## Basic Matrix Features
**Matrix properties reminder.**
$$
\mathbf{A} =
\begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34} \\
a_{41} & a_{42} & a_{43} & a_{44}
\end{bmatrix}\qquad
\mathbf{I} =
\begin{bmatrix} 1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
$$
The inverse of a matrix is defined by
$$
\mathbf{A}^{-1} \cdot \mathbf{A} = I
$$
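A quick numerical check of this definition with Numpy; a minimal sketch using a random (almost surely nonsingular) matrix:
```
import numpy as np
A = np.random.rand(4, 4)                  # a random, most likely nonsingular, matrix
Ainv = np.linalg.inv(A)                   # numerical inverse
print(np.allclose(Ainv @ A, np.eye(4)))   # should print True up to round-off
```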
<table border="1">
<thead>
<tr><th align="center"> Relations </th> <th align="center"> Name </th> <th align="center"> matrix elements </th> </tr>
</thead>
<tbody>
<tr><td align="center"> $A = A^{T}$ </td> <td align="center"> symmetric </td> <td align="center"> $a_{ij} = a_{ji}$ </td> </tr>
<tr><td align="center"> $A = \left (A^{T} \right )^{-1}$ </td> <td align="center"> real orthogonal </td> <td align="center"> $\sum_k a_{ik} a_{jk} = \sum_k a_{ki} a_{kj} = \delta_{ij}$ </td> </tr>
<tr><td align="center"> $A = A^{ * }$ </td> <td align="center"> real matrix </td> <td align="center"> $a_{ij} = a_{ij}^{ * }$ </td> </tr>
<tr><td align="center"> $A = A^{\dagger}$ </td> <td align="center"> hermitian </td> <td align="center"> $a_{ij} = a_{ji}^{ * }$ </td> </tr>
<tr><td align="center"> $A = \left (A^{\dagger} \right )^{-1}$ </td> <td align="center"> unitary </td> <td align="center"> $\sum_k a_{ik} a_{jk}^{ * } = \sum_k a_{ki}^{ * } a_{kj} = \delta_{ij}$ </td> </tr>
</tbody>
</table>
### Some famous Matrices
* Diagonal if $a_{ij}=0$ for $i\ne j$
* Upper triangular if $a_{ij}=0$ for $i > j$
* Lower triangular if $a_{ij}=0$ for $i < j$
* Upper Hessenberg if $a_{ij}=0$ for $i > j+1$
* Lower Hessenberg if $a_{ij}=0$ for $i < j+1$
* Tridiagonal if $a_{ij}=0$ for $|i -j| > 1$
* Lower banded with bandwidth $p$: $a_{ij}=0$ for $i > j+p$
* Upper banded with bandwidth $p$: $a_{ij}=0$ for $i < j+p$
* Banded, block upper triangular, block lower triangular....
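Several of these special forms are easy to construct and inspect with Numpy. A small illustrative sketch (the matrices below are just examples):
```
import numpy as np
A = np.arange(1, 17).reshape(4, 4).astype(float)
D = np.diag(np.diag(A))        # diagonal part of A
U = np.triu(A)                 # upper triangular part of A
L = np.tril(A)                 # lower triangular part of A
# a tridiagonal matrix assembled from its three diagonals
T = np.diag(np.ones(4)) + np.diag(np.ones(3), k=1) + np.diag(np.ones(3), k=-1)
print(D, U, L, T, sep='\n\n')
```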
### More Basic Matrix Features
**Some Equivalent Statements.**
For an $N\times N$ matrix $\mathbf{A}$ the following properties are all equivalent
* If the inverse of $\mathbf{A}$ exists, $\mathbf{A}$ is nonsingular.
* The equation $\mathbf{Ax}=0$ implies $\mathbf{x}=0$.
* The rows of $\mathbf{A}$ form a basis of $R^N$.
* The columns of $\mathbf{A}$ form a basis of $R^N$.
* $\mathbf{A}$ is a product of elementary matrices.
* $0$ is not eigenvalue of $\mathbf{A}$.
## Numpy and arrays
[Numpy](http://www.numpy.org/) provides an easy way to handle arrays in Python. The standard way to import this library is as
```
import numpy as np
```
Here follows a simple example where we set up an array of ten elements, all determined by random numbers drawn according to the normal distribution,
```
n = 10
x = np.random.normal(size=n)
print(x)
```
We defined a vector $x$ with $n=10$ elements with its values given by the Normal distribution $N(0,1)$.
Another alternative is to declare a vector as follows
```
import numpy as np
x = np.array([1, 2, 3])
print(x)
```
Here we have defined a vector with three elements, with $x_0=1$, $x_1=2$ and $x_2=3$. Note that both Python and C++
start numbering array elements from $0$ and on. This means that a vector with $n$ elements has a sequence of entities $x_0, x_1, x_2, \dots, x_{n-1}$. We could also (and this is recommended) let Numpy compute the logarithms of a specific array as
```
import numpy as np
x = np.log(np.array([4, 7, 8]))
print(x)
```
In the last example we used Numpy's unary function **np.log**. This function is
highly tuned to compute array elements since the code is vectorized
and does not require an explicit Python loop; the looping over the array
elements is handled internally by **np.log**. We normally recommend that you use the
Numpy intrinsic functions instead of the corresponding **log** function
from Python's **math** module. The alternative, and slower, way to compute the
logarithms of a vector would be to write
```
import numpy as np
from math import log
x = np.array([4, 7, 8])
for i in range(0, len(x)):
x[i] = log(x[i])
print(x)
```
We note that our code is much longer already and we need to import the **log** function from the **math** module.
The attentive reader will also notice that the output is $[1, 1, 2]$: since the array was created from integers, Numpy automagically gives it an integer type (much like type deduction with the **auto** keyword in C++), so the logarithms are truncated when written back into the array. To change this we could define our array elements to be double precision numbers as
```
import numpy as np
x = np.log(np.array([4, 7, 8], dtype = np.float64))
print(x)
```
or simply write them as double precision numbers (Python uses 64 bits as default for floating point type variables), that is
```
import numpy as np
x = np.log(np.array([4.0, 7.0, 8.0]))
print(x)
```
To check the number of bytes per element (a double precision number occupies eight bytes, that is 64 bits), you can simply use the **itemsize** attribute (the array $x$ is actually an object which inherits the functionalities defined in Numpy) as
```
import numpy as np
x = np.log(np.array([4.0, 7.0, 8.0]))
print(x.itemsize)
```
## Matrices in Python
Having defined vectors, we are now ready to try out matrices. We can
define a $3 \times 3$ real matrix $\hat{A}$ as (recall that we use
lowercase letters for vectors and uppercase letters for matrices)
```
import numpy as np
A = np.log(np.array([ [4.0, 7.0, 8.0], [3.0, 10.0, 11.0], [4.0, 5.0, 7.0] ]))
print(A)
```
If we use the **shape** attribute we get $(3, 3)$ as output, verifying that our matrix is a $3\times 3$ matrix. We can slice the matrix and print, for example, the first column (Numpy stores matrix elements in row-major order by default) as
```
import numpy as np
A = np.log(np.array([ [4.0, 7.0, 8.0], [3.0, 10.0, 11.0], [4.0, 5.0, 7.0] ]))
# print the first column, row-major order and elements start with 0
print(A[:,0])
```
We can continue this way by printing out other columns or rows. The example here prints out the second row
```
import numpy as np
A = np.log(np.array([ [4.0, 7.0, 8.0], [3.0, 10.0, 11.0], [4.0, 5.0, 7.0] ]))
# print the second row; element indices start at 0
print(A[1,:])
```
Numpy contains many other functionalities that allow us to slice and subdivide arrays in various ways. We strongly recommend that you look up the [Numpy website for more details](http://www.numpy.org/). Useful functions when defining a matrix are the **np.zeros** function which declares a matrix of a given dimension and sets all elements to zero
```
import numpy as np
n = 10
# define a matrix of dimension 10 x 10 and set all elements to zero
A = np.zeros( (n, n) )
print(A)
```
or initializing all elements to one
```
import numpy as np
n = 10
# define a matrix of dimension 10 x 10 and set all elements to one
A = np.ones( (n, n) )
print(A)
```
or as uniformly distributed random numbers (see the material on random number generators in the statistics part)
```
import numpy as np
n = 10
# define a matrix of dimension 10 x 10 and set all elements to random numbers with x \in [0, 1]
A = np.random.rand(n, n)
print(A)
```
As we will see throughout these lectures, there are several extremely useful functionalities in Numpy.
As an example, consider the discussion of the covariance matrix. Suppose we have defined three vectors
$\hat{x}, \hat{y}, \hat{z}$ with $n$ elements each. The covariance matrix is defined as
$$
\hat{\Sigma} = \begin{bmatrix} \sigma_{xx} & \sigma_{xy} & \sigma_{xz} \\
\sigma_{yx} & \sigma_{yy} & \sigma_{yz} \\
\sigma_{zx} & \sigma_{zy} & \sigma_{zz}
\end{bmatrix},
$$
where for example
$$
\sigma_{xy} =\frac{1}{n} \sum_{i=0}^{n-1}(x_i- \overline{x})(y_i- \overline{y}).
$$
The Numpy function **np.cov** calculates the covariance elements using the factor $1/(n-1)$ instead of $1/n$ since it assumes we do not have the exact mean values.
The following simple function uses the **np.vstack** function which takes each vector of dimension $1\times n$ and produces a $3\times n$ matrix $\hat{W}$
$$
\hat{W} = \begin{bmatrix} x_0 & y_0 & z_0 \\
x_1 & y_1 & z_1 \\
x_2 & y_2 & z_2 \\
\dots & \dots & \dots \\
x_{n-2} & y_{n-2} & z_{n-2} \\
x_{n-1} & y_{n-1} & z_{n-1}
\end{bmatrix},
$$
which in turn is converted into the $3\times 3$ covariance matrix
$\hat{\Sigma}$ via the Numpy function **np.cov()**. We note that we can also calculate
the mean value of each set of samples $\hat{x}$ etc using the Numpy
function **np.mean(x)**. We can also extract the eigenvalues of the
covariance matrix through the **np.linalg.eig()** function.
```
# Importing various packages
import numpy as np
n = 100
x = np.random.normal(size=n)
print(np.mean(x))
y = 4+3*x+np.random.normal(size=n)
print(np.mean(y))
z = x**3+np.random.normal(size=n)
print(np.mean(z))
W = np.vstack((x, y, z))
Sigma = np.cov(W)
print(Sigma)
Eigvals, Eigvecs = np.linalg.eig(Sigma)
print(Eigvals)
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import sparse
eye = np.eye(4)
print(eye)
sparse_mtx = sparse.csr_matrix(eye)
print(sparse_mtx)
x = np.linspace(-10,10,100)
y = np.sin(x)
plt.plot(x,y,marker='x')
plt.show()
```
## Meet the Pandas
<img src="fig/pandas.jpg" width=600>
Another useful Python package is
[pandas](https://pandas.pydata.org/), which is an open source library
providing high-performance, easy-to-use data structures and data
analysis tools for Python. **pandas** stands for panel data, a term borrowed from econometrics and is an efficient library for data analysis with an emphasis on tabular data.
**pandas** has two major classes, the **DataFrame** class with two-dimensional data objects and tabular data organized in columns and the class **Series** with a focus on one-dimensional data objects. Both classes allow you to index data easily as we will see in the examples below.
**pandas** allows you also to perform mathematical operations on the data, spanning from simple reshapings of vectors and matrices to statistical operations.
The following simple example shows how we can, in an easy way make tables of our data. Here we define a data set which includes names, place of birth and date of birth, and displays the data in an easy to read way. We will see repeated use of **pandas**, in particular in connection with classification of data.
```
import pandas as pd
from IPython.display import display
data = {'First Name': ["Frodo", "Bilbo", "Aragorn II", "Samwise"],
'Last Name': ["Baggins", "Baggins","Elessar","Gamgee"],
'Place of birth': ["Shire", "Shire", "Eriador", "Shire"],
'Date of Birth T.A.': [2968, 2890, 2931, 2980]
}
data_pandas = pd.DataFrame(data)
display(data_pandas)
```
In the above we have imported **pandas** with the shorthand **pd**; the latter has become the standard way to import **pandas**. We then collect various variables in a dictionary,
reorganize it into a **DataFrame**, and print out a neat table with specific column labels such as *First Name*, *Place of birth* and *Date of Birth*.
Displaying these results, we see that the indices are given by the default numbers from zero to three.
**pandas** is extremely flexible and we can easily change the above indices by defining a new type of indexing as
```
data_pandas = pd.DataFrame(data,index=['Frodo','Bilbo','Aragorn','Sam'])
display(data_pandas)
```
Thereafter we display the content of the row which begins with the index **Aragorn**
```
display(data_pandas.loc['Aragorn'])
```
We can easily append data to this, for example
```
new_hobbit = {'First Name': ["Peregrin"],
'Last Name': ["Took"],
'Place of birth': ["Shire"],
'Date of Birth T.A.': [2990]
}
data_pandas=data_pandas.append(pd.DataFrame(new_hobbit, index=['Pippin']))
display(data_pandas)
```
Here are other examples where we use the **DataFrame** functionality to handle arrays, now with more interesting features for us, namely numbers. We set up a matrix
of dimensionality $10\times 5$ and compute the mean value and standard deviation of each column. Similarly, we can perform mathematical operations like squaring the matrix elements and many other operations.
```
import numpy as np
import pandas as pd
from IPython.display import display
np.random.seed(100)
# setting up a 10 x 5 matrix
rows = 10
cols = 5
a = np.random.randn(rows,cols)
df = pd.DataFrame(a)
display(df)
print(df.mean())
print(df.std())
display(df**2)
```
Thereafter we can select specific columns only and plot final results
```
df.columns = ['First', 'Second', 'Third', 'Fourth', 'Fifth']
df.index = np.arange(10)
display(df)
print(df['Second'].mean() )
print(df.info())
print(df.describe())
from pylab import plt, mpl
plt.style.use('seaborn')
mpl.rcParams['font.family'] = 'serif'
df.cumsum().plot(lw=2.0, figsize=(10,6))
plt.show()
df.plot.bar(figsize=(10,6), rot=15)
plt.show()
```
We can produce a $4\times 4$ matrix
```
b = np.arange(16).reshape((4,4))
print(b)
df1 = pd.DataFrame(b)
print(df1)
```
and many other operations.
The **Series** class is another important class included in
**pandas**. You can view it as a specialization of **DataFrame** but where
we have just a single column of data. It shares many of the same features as **DataFrame**. As with **DataFrame**,
most operations are vectorized, thereby achieving high performance when dealing with computations of arrays, in particular labeled arrays.
As we will see below, it also leads to very concise code close to the mathematical operations we may be interested in.
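A small illustrative sketch of a **Series** (the values and labels below are made up for this example); it is simply a labeled one-dimensional array:
```
import pandas as pd
# a labeled one-dimensional array of (hypothetical) values
s = pd.Series([2.6, 1.6, 3.1], index=['a', 'b', 'c'])
print(s)
print(s.mean(), s['b'])   # vectorized statistics and label-based indexing
```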
For multidimensional arrays, we recommend strongly [xarray](http://xarray.pydata.org/en/stable/). **xarray** has much of the same flexibility as **pandas**, but allows for the extension to higher dimensions than two. We will see examples later of the usage of both **pandas** and **xarray**.
## Reading Data and fitting
In order to study various Machine Learning algorithms, we need to
access data. Accessing data is an essential step in all machine
learning algorithms. In particular, setting up the so-called **design
matrix** (to be defined below) is often the first element we need in
order to perform our calculations. To set up the design matrix means
reading (and later, when the calculations are done, writing) data
in various formats. These formats span from reading files from disk,
loading data from databases and interacting with online sources
like web application programming interfaces (APIs).
In handling various input formats, as discussed above, we will mainly stay with **pandas**,
a Python package which allows us, in a seamless and painless way, to
deal with a multitude of formats, from standard **csv** (comma separated
values) files, via **excel**, **html** to **hdf5** formats. With **pandas**
and the **DataFrame** and **Series** functionalities we are able to convert text data
into the calculational formats we need for a specific algorithm. And our code is going to be
pretty close to the basic mathematical expressions.
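As a minimal illustration (the file name below is hypothetical), reading a CSV file into a **DataFrame** is a one-liner, and similar readers exist for the other formats:
```
import pandas as pd
# 'mydata.csv' is a hypothetical file name; pandas also provides
# read_excel, read_html and read_hdf for the other formats mentioned above
df = pd.read_csv('mydata.csv')
print(df.head())   # quick look at the first rows
```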
Our first data set is going to be a classic from nuclear physics, namely all
available data on binding energies. Don't be intimidated if you are not familiar with nuclear physics. It serves simply as an example here of a data set.
We will show some of the
strengths of packages like **Scikit-Learn** in fitting nuclear binding energies to
specific functions using linear regression first. Then, as a teaser, we will show you how
you can easily implement other algorithms like decision trees and random forests and neural networks.
But before we really start with nuclear physics data, let's just look at some simpler polynomial fitting cases, such as
(don't be offended) fitting straight lines!
### Simple linear regression model using **scikit-learn**
We start with perhaps our simplest possible example, using **Scikit-Learn** to perform linear regression analysis on a data set produced by us.
What follows is a simple Python code where we have defined a function
$y$ in terms of the variable $x$. Both are defined as vectors with $100$ entries.
The numbers in the vector $\hat{x}$ are given
by random numbers generated with a uniform distribution with entries
$x_i \in [0,1]$ (more about probability distribution functions
later). These values are then used to define a function $y(x)$
(tabulated again as a vector) with a linear dependence on $x$ plus a
random noise added via the normal distribution.
The Numpy functions are imported using the **import numpy as np**
statement and the random number generator for the uniform distribution
is called using the function **np.random.rand()**, where we specify
that we want $100$ random variables. Using Numpy we define
automatically an array with the specified number of elements, $100$ in
our case. With the Numpy function **randn()** we can compute random
numbers with the normal distribution (mean value $\mu$ equal to zero and
variance $\sigma^2$ set to one) and produce the values of $y$ assuming a linear
dependence as function of $x$
$$
y = 2x+N(0,1),
$$
where $N(0,1)$ represents random numbers generated by the normal
distribution. From **Scikit-Learn** we then import the
**LinearRegression** functionality and make a prediction $\tilde{y} =
\alpha + \beta x$ using the function **fit(x,y)**. We call the set of
data $(\hat{x},\hat{y})$ for our training data. The Python package
**scikit-learn** has also a functionality which extracts the above
fitting parameters $\alpha$ and $\beta$ (see below). Later we will
distinguish between training data and test data.
For plotting we use the Python package
[matplotlib](https://matplotlib.org/) which produces publication
quality figures. Feel free to explore the extensive
[gallery](https://matplotlib.org/gallery/index.html) of examples. In
this example we plot our original values of $x$ and $y$ as well as the
prediction **ypredict** ($\tilde{y}$), which attempts to fit our
data with a straight line.
The Python code follows here.
```
# Importing various packages
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
x = np.random.rand(100,1)
y = 2*x+np.random.randn(100,1)
linreg = LinearRegression()
linreg.fit(x,y)
xnew = np.array([[0],[1]])
ypredict = linreg.predict(xnew)
plt.plot(xnew, ypredict, "r-")
plt.plot(x, y ,'ro')
plt.axis([0,1.0,0, 5.0])
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title(r'Simple Linear Regression')
plt.show()
```
This example serves several aims. It allows us to demonstrate several
aspects of data analysis and later machine learning algorithms. The
immediate visualization shows that our linear fit is not
impressive. It goes through the data points, but there are many
outliers which are not reproduced by our linear regression. We could
now play around with this small program and change for example the
factor in front of $x$ and the normal distribution. Try to change the
function $y$ to
$$
y = 10x+0.01 \times N(0,1),
$$
where $x$ is defined as before. Does the fit look better? Indeed, by
reducing the role of the noise given by the normal distribution we see immediately that
our linear prediction seemingly reproduces the training
set better. However, this testing 'by the eye' is obviously not satisfactory in the
long run. Here we have only defined the training data and our model, and
have not discussed a more rigorous approach to the **cost** function.
We need more rigorous criteria in defining whether we have succeeded or
not in modeling our training data. You will be surprised to see that
many scientists seldom venture beyond this 'by the eye' approach. A
standard approach for the *cost* function is the so-called $\chi^2$
function (a variant of the mean-squared error (MSE))
$$
\chi^2 = \frac{1}{n}
\sum_{i=0}^{n-1}\frac{(y_i-\tilde{y}_i)^2}{\sigma_i^2},
$$
where $\sigma_i^2$ is the variance (to be defined later) of the entry
$y_i$. We may not know the explicit value of $\sigma_i^2$; it serves
however the aim of scaling the equations and making the cost function
dimensionless.
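As a small numerical illustration of this formula (assuming $\sigma_i=1$ for all entries, purely for this sketch), the $\chi^2$ reduces to the MSE and can be computed directly with Numpy:
```
import numpy as np
# hypothetical data and prediction, with sigma_i = 1 for all entries
y = np.array([1.0, 2.1, 2.9, 4.2])
ytilde = np.array([1.1, 2.0, 3.1, 3.9])
sigma2 = np.ones_like(y)                   # assumed variances
chi2 = np.mean((y - ytilde)**2 / sigma2)   # reduces to the MSE when sigma_i = 1
print(chi2)
```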
Minimizing the cost function is a central aspect of
our discussions to come. Finding its minima as a function of the model
parameters ($\alpha$ and $\beta$ in our case) will be a recurring
theme in this series of lectures. Essentially all machine learning
algorithms we will discuss center around the minimization of the
chosen cost function. This depends in turn on our specific
model for describing the data, a typical situation in supervised
learning. Automating the search for the minima of the cost function is a
central ingredient in all algorithms. Typical methods employed are
various variants of **gradient** methods. These will be
discussed in more detail later. Again, you'll be surprised to hear that
many practitioners minimize the above function 'by the eye', popularly dubbed
'chi by the eye': change a parameter and check (visually and numerically) that
the $\chi^2$ function becomes smaller.
There are many ways to define the cost function. A simpler approach is to look at the relative difference between the training data and the predicted data; that is, we define
the relative error (why would we prefer the MSE over the relative error?) as
$$
\epsilon_{\mathrm{relative}}= \frac{\vert \hat{y} -\hat{\tilde{y}}\vert}{\vert \hat{y}\vert}.
$$
We can easily modify the above Python code to plot the relative error instead.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
x = np.random.rand(100,1)
y = 5*x+np.random.randn(100,1)
linreg = LinearRegression()
linreg.fit(x,y)
ypredict = linreg.predict(x)
plt.plot(x, np.abs(ypredict-y)/abs(y), "ro")
plt.axis([0,1.0,0.0, 0.5])
plt.xlabel(r'$x$')
plt.ylabel(r'$\epsilon_{\mathrm{relative}}$')
plt.title(r'Relative error')
plt.show()
```
Depending on the parameter in front of the normal distribution, we may
have a smaller or larger relative error. Try to play around with
different training data sets and study (graphically) the value of the
relative error.
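A small sketch of such an experiment could loop over a few noise amplitudes and print the mean relative error for each (the amplitudes chosen here are just examples).
```
# Mean relative error as a function of the noise amplitude
import numpy as np
from sklearn.linear_model import LinearRegression

for sigma in [0.01, 0.1, 1.0]:
    x = np.random.rand(100,1)
    y = 5*x + sigma*np.random.randn(100,1)
    ypredict = LinearRegression().fit(x, y).predict(x)
    eps = np.mean(np.abs(ypredict - y)/np.abs(y))
    print(f"noise amplitude {sigma:4.2f}: mean relative error = {eps:.3f}")
```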
As mentioned above, **Scikit-Learn** offers impressive functionality.
We can, for example, extract the values of $\alpha$ and $\beta$ and
their error estimates, or the variance and standard deviation and many
other properties from the statistical data analysis.
Here we show an
example of the functionality of **Scikit-Learn**.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score, mean_squared_log_error, mean_absolute_error
x = np.random.rand(100,1)
y = 2.0 + 5*x + 0.05*np.random.randn(100,1)   # vary the noise amplitude to see its effect
linreg = LinearRegression()
linreg.fit(x,y)
ypredict = linreg.predict(x)
print('The intercept alpha: \n', linreg.intercept_)
print('Coefficient beta : \n', linreg.coef_)
# The mean squared error
print("Mean squared error: %.2f" % mean_squared_error(y, ypredict))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(y, ypredict))
# Mean squared log error
print('Mean squared log error: %.2f' % mean_squared_log_error(y, ypredict) )
# Mean absolute error
print('Mean absolute error: %.2f' % mean_absolute_error(y, ypredict))
plt.plot(x, ypredict, "r-")
plt.plot(x, y ,'ro')
plt.axis([0.0,1.0,1.5, 7.0])
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title(r'Linear Regression fit ')
plt.show()
```
The attribute **coef_** gives us the parameter $\beta$ of our fit while **intercept_** yields
$\alpha$. Depending on the constant in front of the normal distribution, we get values close to or far from $\alpha =2$ and $\beta =5$. Try to play around with different parameters in front of the normal distribution. The function **mean_squared_error** gives us the mean squared error, a risk metric corresponding to the expected value of the squared (quadratic) error or loss, defined as
$$
MSE(\hat{y},\hat{\tilde{y}}) = \frac{1}{n}
\sum_{i=0}^{n-1}(y_i-\tilde{y}_i)^2.
$$
The smaller the value, the better the fit. Ideally we would like to
have an MSE equal to zero. The attentive reader has probably recognized
this function as being similar to the $\chi^2$ function defined above.
The **r2_score** function computes $R^2$, the coefficient of
determination. It provides a measure of how well future samples are
likely to be predicted by the model. The best possible score is 1.0 and it
can be negative (because the model can be arbitrarily worse). A
constant model that always predicts the expected value of $\hat{y}$,
disregarding the input features, would get an $R^2$ score of $0.0$.
If $\tilde{\hat{y}}_i$ is the predicted value of the $i$-th sample and $y_i$ is the corresponding true value, then the score $R^2$ is defined as
$$
R^2(\hat{y}, \tilde{\hat{y}}) = 1 - \frac{\sum_{i=0}^{n - 1} (y_i - \tilde{y}_i)^2}{\sum_{i=0}^{n - 1} (y_i - \bar{y})^2},
$$
where we have defined the mean value of $\hat{y}$ as
$$
\bar{y} = \frac{1}{n} \sum_{i=0}^{n - 1} y_i.
$$
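To make the definition concrete, the following sketch computes $R^2$ directly from the formula above and compares it with scikit-learn's **r2_score** (the data set here is just an illustrative noisy example).
```
# R^2 from its definition versus scikit-learn's r2_score
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

x = np.random.rand(100,1)
y = 2.0 + 5*x + 0.5*np.random.randn(100,1)
ypredict = LinearRegression().fit(x, y).predict(x)

ybar = np.mean(y)
R2_manual = 1.0 - np.sum((y - ypredict)**2)/np.sum((y - ybar)**2)
print(R2_manual, r2_score(y, ypredict))   # the two numbers agree
```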
Another quantity that we will meet again in our discussions of regression analysis is
the mean absolute error (MAE), a risk metric corresponding to the expected value of the absolute error loss, or what we call the $l1$-norm loss. In our discussion above we presented the relative error.
The MAE is defined as follows
$$
\text{MAE}(\hat{y}, \hat{\tilde{y}}) = \frac{1}{n} \sum_{i=0}^{n-1} \left| y_i - \tilde{y}_i \right|.
$$
Finally we present the
mean squared logarithmic error (MSLE)
$$
\text{MSLE}(\hat{y}, \hat{\tilde{y}}) = \frac{1}{n} \sum_{i=0}^{n - 1} (\log_e (1 + y_i) - \log_e (1 + \tilde{y}_i) )^2,
$$
where $\log_e (x)$ stands for the natural logarithm of $x$. This error
estimate is best used when the targets have exponential growth, such
as population counts, average sales of a commodity over a span of
years, etc.
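Both quantities are easy to compute by hand as well. The short sketch below evaluates the MAE and the MSLE directly from their definitions and checks them against the corresponding scikit-learn functions (again with an illustrative noisy data set with strictly positive targets).
```
# MAE and MSLE computed from their definitions and via scikit-learn
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_log_error

x = np.random.rand(100,1)
y = 2.0 + 5*x + 0.1*np.random.randn(100,1)   # strictly positive targets
ypredict = LinearRegression().fit(x, y).predict(x)

mae_manual = np.mean(np.abs(y - ypredict))
msle_manual = np.mean((np.log(1.0 + y) - np.log(1.0 + ypredict))**2)
print(mae_manual, mean_absolute_error(y, ypredict))
print(msle_manual, mean_squared_log_error(y, ypredict))
```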
We will discuss these and other functions in more
detail in the various lectures. We conclude this part with another example. Instead of
a linear $x$-dependence we now study a cubic polynomial and use the polynomial regression tools of scikit-learn.
```
import matplotlib.pyplot as plt
import numpy as np
import random
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
# Noisy cubic data: y = x^3 times a random integer amplitude; yn is the noise-free reference
x = np.linspace(0.02, 0.98, 200)
noise = np.asarray(random.sample(range(200), 200))
y = x**3*noise
yn = x**3*100
# Fit a third-degree polynomial to the noisy data
poly3 = PolynomialFeatures(degree=3)
X = poly3.fit_transform(x[:, np.newaxis])
clf3 = LinearRegression()
clf3.fit(X, y)
Xplot = poly3.fit_transform(x[:, np.newaxis])
poly3_plot = plt.plot(x, clf3.predict(Xplot), label='Cubic Fit')
plt.plot(x, yn, color='red', label="True Cubic")
plt.scatter(x, y, label='Data', color='orange', s=15)
plt.legend()
plt.show()
# Mean relative deviation of the noisy data from the noise-free cubic
def error(y_data, y_true):
    err = (y_data - y_true)/y_true
    return abs(np.sum(err))/len(err)
print(error(y, yn))
```
### To our real data: nuclear binding energies. Brief reminder on masses and binding energies
Let us now dive into nuclear physics and remind ourselves briefly about some basic features of binding
energies. A basic quantity which can be measured for the ground
states of nuclei is the atomic mass $M(N, Z)$ of the neutral atom with
atomic mass number $A$ and charge $Z$. The number of neutrons is $N$. There are indeed several sophisticated experiments worldwide which allow us to measure this quantity to high precision (even to parts per million).
Atomic masses are usually tabulated in terms of the mass excess defined by
$$
\Delta M(N, Z) = M(N, Z) - uA,
$$
where $u$ is the Atomic Mass Unit
$$
u = M(^{12}\mathrm{C})/12 = 931.4940954(57) \hspace{0.1cm} \mathrm{MeV}/c^2.
$$
The nucleon masses are
$$
m_p = 1.00727646693(9)u,
$$
and
$$
m_n = 939.56536(8)\hspace{0.1cm} \mathrm{MeV}/c^2 = 1.0086649156(6)u.
$$
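As a quick consistency check of these numbers, we can convert the proton mass from atomic mass units to MeV$/c^2$ using the value of $u$ quoted above.
```
# Convert the proton mass from atomic mass units to MeV/c^2
u_in_MeV = 931.4940954       # 1 u in MeV/c^2, as quoted above
m_p_in_u = 1.00727646693     # proton mass in units of u
print(f"m_p c^2 = {m_p_in_u*u_in_MeV:.3f} MeV")   # approximately 938.272 MeV
```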
In the [2016 mass evaluation by W.J.Huang, G.Audi, M.Wang, F.G.Kondev, S.Naimi and X.Xu](http://nuclearmasses.org/resources_folder/Wang_2017_Chinese_Phys_C_41_030003.pdf)
there are data on masses and decays of 3437 nuclei.
The nuclear binding energy is defined as the energy required to break
up a given nucleus into its constituent parts of $N$ neutrons and $Z$
protons. In terms of the atomic masses $M(N, Z)$ the binding energy is
defined by
$$
BE(N, Z) = ZM_H c^2 + Nm_n c^2 - M(N, Z)c^2 ,
$$
where $M_H$ is the mass of the hydrogen atom and $m_n$ is the mass of the neutron.
In terms of the mass excess the binding energy is given by
$$
BE(N, Z) = Z\Delta_H c^2 + N\Delta_n c^2 -\Delta(N, Z)c^2 ,
$$
where $\Delta_H c^2 = 7.2890$ MeV and $\Delta_n c^2 = 8.0713$ MeV.
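This relation translates directly into a small helper function. A minimal sketch using the constants quoted above follows; the mass excess $\Delta(N,Z)$, in MeV, is the quantity we will later read from the 2016 mass table.
```
# Binding energy from the mass excess: BE = Z*Delta_H + N*Delta_n - Delta(N,Z)
DELTA_H = 7.2890   # Delta_H c^2 in MeV
DELTA_N = 8.0713   # Delta_n c^2 in MeV

def binding_energy(N, Z, delta_NZ):
    """Binding energy in MeV, given the mass excess Delta(N,Z) in MeV."""
    return Z*DELTA_H + N*DELTA_N - delta_NZ
```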
A popular and physically intuitive model which can be used to parametrize
the experimental binding energies as functions of $A$ is the so-called
**liquid drop model**. The ansatz is based on the following expression
$$
BE(N,Z) = a_1A-a_2A^{2/3}-a_3\frac{Z^2}{A^{1/3}}-a_4\frac{(N-Z)^2}{A},
$$
where $A$ stands for the number of nucleons and the $a_i$'s are parameters which are determined by a fit
to the experimental data (the ansatz is written out as a small Python function right after the list below).
To arrive at the above expression we have made the following assumptions:
* There is a volume term $a_1A$ proportional to the number of nucleons (the energy is also an extensive quantity). When an assembly of nucleons of the same size is packed together into the smallest volume, each interior nucleon has a certain number of other nucleons in contact with it. This contribution is proportional to the volume.
* There is a surface energy term $a_2A^{2/3}$. The assumption here is that a nucleon at the surface of a nucleus interacts with fewer other nucleons than one in the interior of the nucleus and hence its binding energy is less. This surface energy term takes that into account and is therefore negative and is proportional to the surface area.
* There is a Coulomb energy term $a_3\frac{Z^2}{A^{1/3}}$. The electric repulsion between each pair of protons in a nucleus yields less binding.
* There is an asymmetry term $a_4\frac{(N-Z)^2}{A}$. This term is associated with the Pauli exclusion principle and reflects the fact that the proton-neutron interaction is more attractive on the average than the neutron-neutron and proton-proton interactions.
We could also add a so-called pairing term, which is a correction term that
arises from the tendency of proton pairs and neutron pairs to
occur. An even number of particles is more stable than an odd number.
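The liquid drop ansatz itself is a one-liner in Python. The sketch below writes it as a function with the coefficients $a_1,\dots,a_4$ left as free parameters, to be determined by the fit further below.
```
# Liquid drop model ansatz; a1..a4 are the parameters to be fitted
def liquid_drop(N, Z, a1, a2, a3, a4):
    A = N + Z
    return a1*A - a2*A**(2.0/3.0) - a3*Z**2/A**(1.0/3.0) - a4*(N - Z)**2/A
```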
### Organizing our data
Let us start by reading and organizing our data.
We use the compilation of masses and binding energies from 2016.
After having downloaded this file to our own computer, we are ready to read it and start structuring the data.
We first prepare folders for storing our calculations and the data file of masses and binding energies. We also import various modules that we will find useful in order to present various Machine Learning methods. Here we focus mainly on the functionality of **scikit-learn**.
```
# Common imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn.linear_model as skl
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error
import os
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
infile = open(data_path("MassEval2016.dat"),'r')
```
Before we proceed, we also define a function for making our plots. You can obviously avoid this and simply set up the various **matplotlib** commands every time you need them. You may, however, find it convenient to collect all such commands in one function and simply call this function.
```
from pylab import plt, mpl
plt.style.use('seaborn')
mpl.rcParams['font.family'] = 'serif'
def MakePlot(x,y, styles, labels, axlabels):
plt.figure(figsize=(10,6))
for i in range(len(x)):
plt.plot(x[i], y[i], styles[i], label = labels[i])
plt.xlabel(axlabels[0])
plt.ylabel(axlabels[1])
plt.legend(loc=0)
```
Our next step is to read the data on experimental binding energies and
reorganize them as functions of the mass number $A$, the number of
protons $Z$ and neutrons $N$ using **pandas**. Before we do this it is
always useful (unless you have a binary file or other types of compressed
data) to actually open the file and simply take a look at it!
In particular, the program that outputs the final nuclear masses is written in Fortran with a specific format. This means that we need to figure out the format and which columns contain the data we are interested in. Pandas comes with a function, **read_fwf**, which reads such fixed-width formatted output. After having admired the file, we are now ready to start massaging it with **pandas**. The file begins with some basic format information.
```
"""
This is taken from the data file of the mass 2016 evaluation.
All files are 3436 lines long with 124 character per line.
Headers are 39 lines long.
col 1 : Fortran character control: 1 = page feed 0 = line feed
format : a1,i3,i5,i5,i5,1x,a3,a4,1x,f13.5,f11.5,f11.3,f9.3,1x,a2,f11.3,f9.3,1x,i3,1x,f12.5,f11.5
These formats are reflected in the pandas widths variable below, see the statement
widths=(1,3,5,5,5,1,3,4,1,13,11,11,9,1,2,11,9,1,3,1,12,11,1),
Pandas has also a variable header, with length 39 in this case.
"""
```
The data we are interested in are in columns 2, 3, 4 and 11, giving us
the number of neutrons, protons, mass numbers and binding energies,
respectively. We also add, for the sake of completeness, the element name. The data are in fixed-width formatted lines and we will
convert them into the **pandas** DataFrame structure.
```
# Read the experimental data with Pandas
Masses = pd.read_fwf(infile, usecols=(2,3,4,6,11),
names=('N', 'Z', 'A', 'Element', 'Ebinding'),
widths=(1,3,5,5,5,1,3,4,1,13,11,11,9,1,2,11,9,1,3,1,12,11,1),
header=39,
index_col=False)
# Extrapolated values are indicated by '#' in place of the decimal place, so
# the Ebinding column won't be numeric. Coerce to float and drop these entries.
Masses['Ebinding'] = pd.to_numeric(Masses['Ebinding'], errors='coerce')
Masses = Masses.dropna()
# Convert from keV to MeV.
Masses['Ebinding'] /= 1000
# Group the DataFrame by nucleon number, A.
Masses = Masses.groupby('A')
# Find the rows of the grouped DataFrame with the maximum binding energy.
Masses = Masses.apply(lambda t: t[t.Ebinding==t.Ebinding.max()])
```
We have now read in the data and grouped them according to the variables we are interested in.
We see how easy it is to reorganize the data using **pandas**. If we
were to do these operations in C/C++ or Fortran, we would have had to
write various functions/subroutines which perform the above
reorganizations for us. Having reorganized the data, we can now start
to make some simple fits using the functionality of **numpy** and
**Scikit-Learn**.
Now we define five variables which contain
the number of nucleons $A$, the number of protons $Z$ and the number of neutrons $N$, the element name and finally the energies themselves.
```
A = Masses['A']
Z = Masses['Z']
N = Masses['N']
Element = Masses['Element']
Energies = Masses['Ebinding']
print(Masses)
```
The next step, and we will define this mathematically later, is to set up the so-called **design matrix**. We will throughout call this matrix $\boldsymbol{X}$.
It has dimensionality $n\times p$, where $n$ is the number of data points and $p$ is the number of so-called predictors. In our case here they are given by the number of polynomial terms in $A$ we wish to include in the fit.
```
# Now we set up the design matrix X
X = np.zeros((len(A),5))
X[:,0] = 1
X[:,1] = A
X[:,2] = A**(2.0/3.0)
X[:,3] = A**(-1.0/3.0)
X[:,4] = A**(-1.0)
```
With **scikit-learn** we are now ready to use linear regression and fit our data.
```
clf = skl.LinearRegression().fit(X, Energies)
fity = clf.predict(X)
```
Pretty simple!
Now we can print measures of how our fit is doing, the coefficients from the fits and plot the final fit together with our data.
```
# The mean squared error
print("Mean squared error: %.2f" % mean_squared_error(Energies, fity))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(Energies, fity))
# Mean absolute error
print('Mean absolute error: %.2f' % mean_absolute_error(Energies, fity))
print(clf.coef_, clf.intercept_)
Masses['Eapprox'] = fity
# Generate a plot comparing the experimental values with the fitted ones.
fig, ax = plt.subplots()
ax.set_xlabel(r'$A = N + Z$')
ax.set_ylabel(r'$E_\mathrm{bind}\,/\mathrm{MeV}$')
ax.plot(Masses['A'], Masses['Ebinding'], alpha=0.7, lw=2,
label='Ame2016')
ax.plot(Masses['A'], Masses['Eapprox'], alpha=0.7, lw=2, c='m',
label='Fit')
ax.legend()
save_fig("Masses2016")
plt.show()
```
### Seeing the wood for the trees
As a teaser, let us now see how we can do this with decision trees using **scikit-learn**. Later we will switch to so-called **random forests**!
```
#Decision Tree Regression
from sklearn.tree import DecisionTreeRegressor
regr_1=DecisionTreeRegressor(max_depth=5)
regr_2=DecisionTreeRegressor(max_depth=7)
regr_3=DecisionTreeRegressor(max_depth=9)
regr_1.fit(X, Energies)
regr_2.fit(X, Energies)
regr_3.fit(X, Energies)
y_1 = regr_1.predict(X)
y_2 = regr_2.predict(X)
y_3=regr_3.predict(X)
Masses['Eapprox'] = y_3
# Plot the results
plt.figure()
plt.plot(A, Energies, color="blue", label="Data", linewidth=2)
plt.plot(A, y_1, color="red", label="max_depth=5", linewidth=2)
plt.plot(A, y_2, color="green", label="max_depth=7", linewidth=2)
plt.plot(A, y_3, color="m", label="max_depth=9", linewidth=2)
plt.xlabel("$A$")
plt.ylabel("$E$[MeV]")
plt.title("Decision Tree Regression")
plt.legend()
save_fig("Masses2016Trees")
plt.show()
print(Masses)
print(np.mean( (Energies-y_1)**2))
```
### And what about using neural networks?
The **seaborn** package allows us to visualize data in an efficient way. Note that we use **scikit-learn**'s multi-layer perceptron (or feed forward neural network)
functionality.
```
from sklearn.neural_network import MLPRegressor
import seaborn as sns
X_train = X
Y_train = Energies
n_hidden_neurons = 100
epochs = 100
# store models for later use
eta_vals = np.logspace(-5, 1, 7)
lmbd_vals = np.logspace(-5, 1, 7)
# store the models for later use
DNN_scikit = np.zeros((len(eta_vals), len(lmbd_vals)), dtype=object)
train_accuracy = np.zeros((len(eta_vals), len(lmbd_vals)))
sns.set()
for i, eta in enumerate(eta_vals):
for j, lmbd in enumerate(lmbd_vals):
dnn = MLPRegressor(hidden_layer_sizes=(n_hidden_neurons), activation='logistic',
alpha=lmbd, learning_rate_init=eta, max_iter=epochs)
dnn.fit(X_train, Y_train)
DNN_scikit[i][j] = dnn
        # for a regressor, score() returns the R^2 coefficient of determination
        train_accuracy[i][j] = dnn.score(X_train, Y_train)
fig, ax = plt.subplots(figsize = (10, 10))
sns.heatmap(train_accuracy, annot=True, ax=ax, cmap="viridis")
ax.set_title("Training $R^2$")
ax.set_ylabel("$\eta$")
ax.set_xlabel("$\lambda$")
plt.show()
```
## A first summary
The aim behind these introductory words was to present to you various
Python libraries and their functionalities, in particular libraries like
**numpy**, **pandas**, **xarray** and **matplotlib** and others that make our life much easier
in handling various data sets and visualizing data.
Furthermore,
**Scikit-Learn** allows us, with a few lines of code, to implement popular
Machine Learning algorithms for supervised learning. Later we will meet **Tensorflow**, a powerful library for deep learning.
Now it is time to dive more into the details of various methods. We will start with linear regression and try to take a deeper look at what it entails.
# Python Helpers
The Python helpers in this IPython notebook serve to generate Elixir modules out of Pygments' styles. This gives us 29 styles for free, even if some of the styles are a little weird. It's good to have a choice. Because this introspects Python code, it is written in Python and will continue to be written in Python for as long as it's needed.
I've done this in an IPython notebook because it's the best environment for exploratory programming in Python.
This is not meant to be used regularly during development, so I didn't bother creating a proper Python package or even a requirements file. The external dependencies needed to run this notebook are:
* jupyter
* pygments
* jinja2
## Style Modules Generation
The code is pretty simple. It introspects the Python classes with some help from Pygments itself, and then generates Elixir modules with the same functionality. The architecture is of course quite different (Elixir lexers are *data*, not objects).
Please don't touch the `lib/makeup/styles/html/style_map.ex` file between these markers:
```elixir
# %% Start Pygments - Don't remove this line
...
# %% End Pygments - Don't remove this line
```
Everything between these markers will be overwritten if this notebook is run again.
```
import pygments.styles
import jinja2
import textwrap
from itertools import chain
import os
import re
tokens = [
':text',
':whitespace',
':escape',
':error',
':other' ,
':keyword',
':keyword_constant',
':keyword_declaration',
':keyword_namespace',
':keyword_pseudo',
':keyword_reserved',
':keyword_type' ,
':name',
':name_attribute',
':name_builtin',
':name_builtin_pseudo',
':name_class',
':name_constant',
':name_decorator',
':name_entity',
':name_exception',
':name_function',
':name_function_magic',
':name_property',
':name_label',
':name_namespace',
':name_other',
':name_tag',
':name_variable',
':name_variable_class',
':name_variable_global',
':name_variable_instance',
':name_variable_magic',
':literal',
':literal_date',
':string',
':string_affix',
':string_backtick',
':string_char',
':string_delimiter',
':string_doc',
':string_double',
':string_escape',
':string_heredoc',
':string_interpol',
':string_other',
':string_regex',
':string_sigil',
':string_single',
':string_symbol',
':number',
':number_bin',
':number_float',
':number_hex',
':number_integer',
':number_integer_long',
':number_oct',
':operator',
':operator_word',
':punctuation',
':comment',
':comment_hashbang',
':comment_multiline',
':comment_preproc',
':comment_preproc_file',
':comment_single',
':comment_special',
':generic',
':generic_deleted',
':generic_emph',
':generic_error',
':generic_heading',
':generic_inserted',
':generic_output',
':generic_prompt',
':generic_strong',
':generic_subheading',
':generic_traceback']
style_module_template = jinja2.Template('''
defmodule Makeup.Styles.HTML.{{module_name}} do
@moduledoc false
require Makeup.Token.TokenTypes
alias Makeup.Token.TokenTypes, as: Tok
@styles %{
{% for tok in tokens %}
{%- if styles[ex_to_py[tok]] %}{{ tok }} => "{{ styles[ex_to_py[tok]] }}"{% if not loop.last %},{% endif %}
{% endif -%}
{%- endfor %}
}
alias Makeup.Styles.HTML.Style
@style_struct Style.make_style(
short_name: "{{ short_name }}",
long_name: "{{ long_name }}",
background_color: "{{ background_color }}",
highlight_color: "{{ highlight_color }}",
styles: @styles)
def style() do
@style_struct
end
end
''')
style_map_file_fragment = jinja2.Template('''
{% for (lowercase, uppercase) in pairs %}
@doc """
The *{{ lowercase }}* style. Example [here](https://tmbb.github.io/makeup_demo/elixir.html#{{ lowercase }}).
"""
def {{ lowercase }}_style, do: HTML.{{ uppercase }}.style()
{% endfor -%}
# All styles
@pygments_style_map_binaries %{
{% for (lowercase, uppercase) in pairs %} "{{ lowercase }}" => HTML.{{ uppercase }}.style(),
{% endfor %} }
@pygments_style_map_atoms %{
{% for (lowercase, uppercase) in pairs %} {{ lowercase }}: HTML.{{ uppercase }}.style(),
{% endfor %}}
''')
def py_to_ex(cls):
# We don't want to operate on token classes, only their names
name = str(cls)
# They are of the form "Token.*"
# Trim the "Token." prefix
name = name.replace('Token.Literal.', 'Token.')
trimmed = name[6:]
# Convert to lower case
# It would be confusing to have them in uppercase in Elixir
# because they could be mistaken by aliases.
# Besides, having them in lowercase allows us to use macros
# to make sure at compile time we're not using any inexistant styles.
lowered = trimmed.lower()
# Continue turning them into valid identifiers
replaced = lowered.replace('.', '_')
# Turn it into a macro under the Tok alias
return (str(cls), ':' + replaced)
def invert(pairs):
return [(y, x) for (x, y) in pairs]
def stringify_styles(styles):
return dict((str(k), v) for (k,v) in styles.items())
def correct_docs(text, level=2):
    # The module docs are written in rST.
# rST is similar enough to markdown that we can fake it
# by removing the first lines with the title and
# replacing some directives.
# First, remove all indent
md = textwrap.dedent(text)
# Replace the :copyright directive
md = md.strip().replace(':copyright:', '©')
# Replace the :license: directive
md = md.replace(':license:', 'License:')
    # Add a link to the BSD license
md = md.replace('see LICENSE for details',
'see [here](https://opensource.org/licenses/BSD-3-Clause) for details')
# Escape the '*' character, which is probably not used for emphasis by the license
md = md.replace('*', '\\*')
# remove the first 3 lines, which contain the title
# and indent all lines (2 spaces by default)
indented = "\n".join(((" " * level) + line) for line in md.split('\n')[3:])
return indented
def style_to_ex_module(key, value, tokens):
# Pygments stores the module name and the class name under this weird format
module_name, class_name = value.split('::')
# Import the module
__import__('pygments.styles.' + module_name)
# Store the module in a variable
module = getattr(pygments.styles, module_name)
short_name = module_name
long_name = class_name[:-5] + " " + class_name[-5:]
# Extract the class from the module
style_class = getattr(module, class_name)
# Map the Elixir styles into Python stringified token classes
ex_to_py = dict(invert([py_to_ex(k) for k in style_class.styles.keys()]))
stringified_styles = stringify_styles(style_class.styles)
# Render the tokens
return style_module_template.render(
# Preprocess the docs
moduledoc=correct_docs(module.__doc__, 2),
# We take the style name unchanged from Python
# (including the *Style suffix)
module_name=style_class.__name__,
# The elixir token styles
tokens=tokens,
# Other class attributes
short_name=short_name,
long_name=long_name,
background_color=style_class.background_color,
highlight_color=style_class.highlight_color,
styles=stringified_styles,
ex_to_py=ex_to_py)
def all_styles(style_map, tokens):
    # This function generates an Elixir file with a module for each Pygments style.
# It will overwrite existing files.
for key, value in style_map.items():
source = style_to_ex_module(key, value, tokens)
# The path where we'll generate the file
file_path = os.path.join('lib/makeup/styles/html/pygments/', key + '.ex')
with open(file_path, 'wb') as f:
f.write(source.encode())
def generate_style_map_file(style_map):
sorted_pairs = sorted([
        # Turn the key into a valid Elixir identifier
(key.replace('-', '_'), value.split('::')[1])
for (key, value) in style_map.items()
])
# Generate the new text fragment
new_fragment = style_map_file_fragment.render(pairs=sorted_pairs)
file_path = os.path.join('lib/makeup/styles/html/style_map.ex')
with open(file_path, 'r') as f:
source = f.read()
# Recognize the pattern to replace
pattern = re.compile(
"(?<= # %% Start Pygments %%)(\r?\n)"
"(.*?\r?\n)"
"(?= # %% End Pygments %%)", re.DOTALL)
# Replace the text between the markers
replaced = re.sub(
pattern,
new_fragment,
source)
# Check we've done the right thing
print(replaced)
# Replace the file contents
with open(file_path, 'wb') as f:
source = f.write(replaced.encode())
# (Re)generate modules for all styles
all_styles(pygments.styles.STYLE_MAP, tokens)
# Regenerate the style_map file
generate_style_map_file(pygments.styles.STYLE_MAP)
```
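To make the token-name mapping concrete, here is a small check (assuming the cells above have been executed) of what `py_to_ex` produces for two Pygments token classes; the chosen tokens are just examples.
```
# Quick check of the Python-token -> Elixir-atom mapping
from pygments.token import Token

print(py_to_ex(Token.Keyword.Constant))    # ('Token.Keyword.Constant', ':keyword_constant')
print(py_to_ex(Token.Literal.Number.Hex))  # ('Token.Literal.Number.Hex', ':number_hex')
```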
# Default Language Names
```
from pygments.lexers import get_all_lexers
import jinja2
def get_lexer_data():
data = []
for lexer in get_all_lexers():
(name, shortnames, filetypes, mimetypes) = lexer
lexer_class = pygments.lexers.get_lexer_by_name(shortnames[0])
row = {
'class_name': lexer_class.__class__.__name__,
'name': name,
'shortnames': shortnames,
'filetypes': filetypes,
'mimetypes': mimetypes
}
data.append(row)
return data
template = jinja2.Template("""
%{
{% for lexer in lexer_data %}
%{\
module: Makeup.Lexers.{{ lexer['class_name'] }}, \
name: "{{ lexer['name'] }}", \
shortnames: [{% for shortname in lexer['shortnames'] %}"{{ shortname }}"{% if not loop.last %}, {% endif %}{% endfor %}]}\
{% if not loop.last %},{% endif %}
{%- endfor %}\
""")
print(template.render(lexer_data=get_lexer_data()))
dir(pygments.lexers)
pygments.lexers.get_lexer_by_name("HTML+Cheetah").__class__.__name__
```
```
EPOCHS = 20
LR = 3e-4
BATCH_SIZE_TWO = 1
HIDDEN = 20
MEMBERS = 3
import pandas as pd
import numpy as np
import random
import torch
import torch.nn.functional as F
import torch.nn as nn
from torchinfo import summary
import re
import string
import torch.optim as optim
from torchtext.legacy import data
from torch.utils.data import Dataset, DataLoader
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence
from sklearn.model_selection import train_test_split
def collate_batch(batch):
label_list, text_list, length_list = [], [], []
for (_text,_label, _len) in batch:
label_list.append(_label)
length_list.append(_len)
tensor = torch.tensor(_text, dtype=torch.long)
text_list.append(tensor)
text_list = pad_sequence(text_list, batch_first=True)
label_list = torch.tensor(label_list, dtype=torch.float)
length_list = torch.tensor(length_list)
return text_list,label_list, length_list
class VectorizeData(Dataset):
def __init__(self, file):
self.data = pd.read_pickle(file)
def __len__(self):
return self.data.shape[0]
def __getitem__(self, idx):
X = self.data.vector[idx]
lens = self.data.lengths[idx]
y = self.data.label[idx]
return X,y,lens
testing = VectorizeData('variable_test_set.csv')
dtes_load = DataLoader(testing, batch_size=BATCH_SIZE_TWO, shuffle=False, collate_fn=collate_batch)
'''loading the pretrained embedding weights'''
weights=torch.load('CBOW_NEWS.pth')
pre_trained = nn.Embedding.from_pretrained(weights)
pre_trained.weight.requires_grad=False
# Partial implementation of the Integrated Stacking Model as detailed by
# Jason Brownlee
# @ https://machinelearningmastery.com/stacking-ensemble-for-deep-learning
# -neural-networks/
def binary_accuracy(preds, y):
#round predictions to the closest integer
rounded_preds = torch.round(preds)
correct = (rounded_preds == y).float()
acc = correct.sum() / len(correct)
return acc
def create_emb_layer(pre_trained):
num_embeddings = pre_trained.num_embeddings
embedding_dim = pre_trained.embedding_dim
emb_layer = nn.Embedding.from_pretrained(pre_trained.weight.data, freeze=True)
return emb_layer, embedding_dim
class StackedLSTMAtteionModel(nn.Module):
def __init__(self, pre_trained,num_labels):
super(StackedLSTMAtteionModel, self).__init__()
self.n_class = num_labels
self.embedding, self.embedding_dim = create_emb_layer(pre_trained)
self.LSTM = nn.LSTM(self.embedding_dim, HIDDEN, num_layers=2,bidirectional=True,dropout=0.26,batch_first=True)
self.label = nn.Linear(2*HIDDEN, self.n_class)
self.act = nn.Sigmoid()
def attention_net(self, Lstm_output, final_state):
hidden = final_state
output = Lstm_output[0]
attn_weights = torch.matmul(output, hidden.transpose(1, 0))
soft_attn_weights = F.softmax(attn_weights.transpose(1, 0), dim=1)
new_hidden_state = torch.matmul(output.transpose(1,0), soft_attn_weights.transpose(1,0))
return new_hidden_state.transpose(1, 0)
def forward(self, x, text_len):
embeds = self.embedding(x)
pack = pack_padded_sequence(embeds, text_len, batch_first=True, enforce_sorted=False)
output, (hidden, cell) = self.LSTM(pack)
hidden = torch.cat((hidden[0,:, :], hidden[1,:, :]), dim=1)
attn_output = self.attention_net(output, hidden)
logits = self.label(attn_output)
outputs = self.act(logits.view(-1))
return outputs
class TwoLayerGRUAttModel(nn.Module):
def __init__(self, pre_trained, HIDDEN, num_labels):
super(TwoLayerGRUAttModel, self).__init__()
self.n_class = num_labels
self.embedding, self.embedding_dim = create_emb_layer(pre_trained)
self.gru = nn.GRU(self.embedding_dim, hidden_size=HIDDEN, num_layers=2,batch_first=True, bidirectional=True, dropout=0.2)
self.label = nn.Linear(2*HIDDEN, self.n_class)
self.act = nn.Sigmoid()
def attention_net(self, gru_output, final_state):
hidden = final_state
output = gru_output[0]
attn_weights = torch.matmul(output, hidden.transpose(1, 0))
soft_attn_weights = F.softmax(attn_weights.transpose(1, 0), dim=1)
new_hidden_state = torch.matmul(output.transpose(1,0), soft_attn_weights.transpose(1,0))
return new_hidden_state.transpose(1, 0)
def forward(self, x, text_len):
embeds = self.embedding(x)
pack = pack_padded_sequence(embeds, text_len, batch_first=True, enforce_sorted=False)
output, hidden = self.gru(pack)
hidden = torch.cat((hidden[0,:, :], hidden[1,:, :]), dim=1)
attn_output = self.attention_net(output, hidden)
logits = self.label(attn_output)
outputs = self.act(logits.view(-1))
return outputs
class C_DNN(nn.Module):
def __init__(self, pre_trained,num_labels):
super(C_DNN, self).__init__()
self.n_class = num_labels
self.embedding, self.embedding_dim = create_emb_layer(pre_trained)
self.conv1D = nn.Conv2d(1, 100, kernel_size=(3,16), padding=(1,0))
self.label = nn.Linear(100, self.n_class)
self.act = nn.Sigmoid()
def forward(self, x):
embeds = self.embedding(x)
embeds = embeds.unsqueeze(1)
conv1d = self.conv1D(embeds)
relu = F.relu(conv1d).squeeze(3)
maxpool = F.max_pool1d(input=relu, kernel_size=relu.size(2)).squeeze(2)
fc = self.label(maxpool)
sig = self.act(fc)
return sig.squeeze(1)
class MetaLearner(nn.Module):
def __init__(self, modelA, modelB, modelC):
super(MetaLearner, self).__init__()
self.modelA = modelA
self.modelB = modelB
self.modelC = modelC
self.fc1 = nn.Linear(3, 2)
self.fc2 = nn.Linear(2, 1)
self.act = nn.Sigmoid()
def forward(self, text, length):
x1=self.modelA(text, length)
x2=self.modelB(text,length)
x3=self.modelC(text)
x4 = torch.cat((x1.detach(),x2.detach(), x3.detach()), dim=0)
x5 = F.relu(self.fc1(x4))
output = self.act(self.fc2(x5))
return output
def load_all_models(n_models):
all_models = []
for i in range(n_models):
filename = "models/model_"+str(i+1)+'.pth'
if filename == "models/model_1.pth":
model_one = StackedLSTMAtteionModel(pre_trained, 1)
model_one.load_state_dict(torch.load(filename))
for param in model_one.parameters():
param.requires_grad = False
all_models.append(model_one)
elif filename == "models/model_2.pth":
model_two = TwoLayerGRUAttModel(pre_trained, HIDDEN, 1)
model_two.load_state_dict(torch.load(filename))
for param in model_two.parameters():
param.requires_grad = False
all_models.append(model_two)
else:
model = C_DNN(pre_trained=pre_trained, num_labels=1)
model.load_state_dict(torch.load(filename))
for param in model.parameters():
param.requires_grad = False
all_models.append(model)
return all_models
models = load_all_models(MEMBERS)
meta_model = MetaLearner(models[0], models[1], models[2])
optimizer = optim.Adam(meta_model.parameters(), lr=LR)
criterion = nn.BCELoss()
def validate_meta(dataloader, model, epoch):
    # despite its name, this routine trains the meta-learner (the stacking
    # combiner) on the held-out data, as in integrated stacking
    #initialize every epoch
    total_epoch_loss = 0
    total_epoch_acc = 0
    #set the model in training phase
    model.train()
for idx, batch in enumerate(dataloader):
text,label,lengths = batch
optimizer.zero_grad()
prediction = model(text, lengths)
loss = criterion(prediction, label)
acc = binary_accuracy(prediction, label)
        #backpropagate the loss and compute the gradients
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 1)
#update the weights
optimizer.step()
        total_epoch_loss += loss.item()
        total_epoch_acc += acc.item()
        if idx % 500 == 0 and idx > 0:
            # report running averages over the batches seen so far in this epoch
            print(f'Epoch: {epoch}, Idx: {idx+1}, '
                  f'Meta Training Loss: {total_epoch_loss/(idx+1):.4f}, '
                  f'Meta Training Accuracy: {100*total_epoch_acc/(idx+1):.2f}%')
for epoch in range(1, EPOCHS + 1):
validate_meta(dtes_load, meta_model, epoch)
filename = "models/model_metaLearner.pth"
torch.save(meta_model.state_dict(), filename)
```
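For completeness, here is a minimal inference sketch showing how the saved meta-learner could be reloaded and scored; it assumes the classes, the pretrained embedding and `dtes_load` defined in the cells above, and is only an illustration rather than part of the original training script.
```
# Reload the saved meta-learner and evaluate its accuracy on the test loader
import torch

base_models = load_all_models(MEMBERS)
meta = MetaLearner(base_models[0], base_models[1], base_models[2])
meta.load_state_dict(torch.load("models/model_metaLearner.pth"))
meta.eval()

correct, total = 0, 0
with torch.no_grad():
    for text, label, lengths in dtes_load:
        pred = meta(text, lengths)
        correct += (torch.round(pred) == label).sum().item()
        total += label.size(0)
print(f"Meta-learner accuracy: {correct/total:.3f}")
```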
x4 = torch.cat((x1.detach(),x2.detach(), x3.detach()), dim=0)
x5 = F.relu(self.fc1(x4))
output = self.act(self.fc2(x5))
return output
def load_all_models(n_models):
all_models = []
for i in range(n_models):
filename = "models/model_"+str(i+1)+'.pth'
if filename == "models/model_1.pth":
model_one = StackedLSTMAtteionModel(pre_trained, 1)
model_one.load_state_dict(torch.load(filename))
for param in model_one.parameters():
param.requires_grad = False
all_models.append(model_one)
elif filename == "models/model_2.pth":
model_two = TwoLayerGRUAttModel(pre_trained, HIDDEN, 1)
model_two.load_state_dict(torch.load(filename))
for param in model_two.parameters():
param.requires_grad = False
all_models.append(model_two)
else:
model = C_DNN(pre_trained=pre_trained, num_labels=1)
model.load_state_dict(torch.load(filename))
for param in model.parameters():
param.requires_grad = False
all_models.append(model)
return all_models
models = load_all_models(MEMBERS)
meta_model = MetaLearner(models[0], models[1], models[2])
optimizer = optim.Adam(meta_model.parameters(), lr=LR)
criterion = nn.BCELoss()
def validate_meta(dataloader, model, epoch):
    # NOTE: despite its name, this step *trains* the meta-learner on the
    # held-out set (level-1 data), as in an integrated stacking ensemble.
    # initialize running totals every epoch
    total_epoch_loss = 0
    total_epoch_acc = 0
    # set the model in training phase
    model.train()
    for idx, batch in enumerate(dataloader):
        text, label, lengths = batch
        optimizer.zero_grad()
        prediction = model(text, lengths)
        loss = criterion(prediction, label)
        acc = binary_accuracy(prediction, label)
        # backpropagate the loss and compute the gradients
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1)
        # update the weights
        optimizer.step()
        total_epoch_loss += loss.item()
        total_epoch_acc += acc.item()
        if idx % 500 == 0 and idx > 0:
            # report running averages for the epoch so far
            print(f'Epoch: {epoch}, Idx: {idx+1}, '
                  f'Meta Training Loss: {total_epoch_loss / (idx + 1):.4f}, '
                  f'Meta Training Accuracy: {total_epoch_acc / (idx + 1) * 100:.2f}%')
for epoch in range(1, EPOCHS + 1):
    validate_meta(dtes_load, meta_model, epoch)
filename = "models/model_metaLearner.pth"
torch.save(meta_model.state_dict(), filename)
| 0.896801 | 0.36869 |
# Use `Lale` `AIF360` scorers to calculate and mitigate bias for credit risk AutoAI model
This notebook contains the steps and code to demonstrate support for AutoAI experiments in the Watson Machine Learning service. It introduces commands for bias detection and mitigation performed with the `lale.lib.aif360` module.
Some familiarity with Python is helpful. This notebook uses Python 3.7.
## Learning goals
The learning goals of this notebook are:
- Work with Watson Machine Learning experiments to train AutoAI models.
- Calculate fairness metrics of trained pipelines.
- Refine the best model and perform mitigation to get less biased model.
- Store trained model with custom software specification.
- Online deployment and score the trained model.
## Contents
This notebook contains the following parts:
1. [Setup](#setup)
2. [Optimizer definition](#definition)
3. [Experiment Run](#run)
4. [Pipeline bias detection and mitigation](#bias)
5. [Deployment and score](#scoring)
6. [Clean up](#cleanup)
7. [Summary and next steps](#summary)
<a id="setup"></a>
## 1. Set up the environment
If you are not familiar with <a href="https://console.ng.bluemix.net/catalog/services/ibm-watson-machine-learning/" target="_blank" rel="noopener no referrer">Watson Machine Learning (WML) Service</a> and AutoAI experiments please read more about it in the sample notebook: <a href="https://github.com/IBM/watson-machine-learning-samples/blob/master/cpd3.5/notebooks/python_sdk/experiments/autoai/Use%20AutoAI%20and%20Lale%20to%20predict%20credit%20risk.ipynb" target="_blank" rel="noopener no referrer">"Use AutoAI and Lale to predict credit risk with `ibm-watson-machine-learning`"</a>
### Connection to WML
Authenticate the Watson Machine Learning service on IBM Cloud Pak for Data. You need to provide the platform `url`, your `username`, and your `password`.
```
username = 'PASTE YOUR USERNAME HERE'
password = 'PASTE YOUR PASSWORD HERE'
url = 'PASTE THE PLATFORM URL HERE'
wml_credentials = {
"username": username,
"password": password,
"url": url,
"instance_id": 'openshift',
"version": '3.5'
}
```
### Install and import the `ibm-watson-machine-learning`, `lale` and `aif360`.
**Note:** `ibm-watson-machine-learning` documentation can be found <a href="http://ibm-wml-api-pyclient.mybluemix.net/" target="_blank" rel="noopener no referrer">here</a>.
```
!pip install -U ibm-watson-machine-learning | tail -n 1
!pip install -U scikit-learn==0.23.1 | tail -n 1
!pip install -U autoai-libs | tail -n 1
!pip install -U lale | tail -n 1
!pip install -U aif360 | tail -n 1
from ibm_watson_machine_learning import APIClient
client = APIClient(wml_credentials)
```
### Working with spaces
First of all, you need to create a space that will be used for your work. If you do not have a space already created, you can use `{PLATFORM_URL}/ml-runtime/spaces?context=icp4data` to create one.
- Click New Deployment Space
- Create an empty space
- Go to space `Settings` tab
- Copy `space_id` and paste it below
**Tip**: You can also use SDK to prepare the space for your work. More information can be found [here](https://github.com/IBM/watson-machine-learning-samples/blob/master/cpd3.5/notebooks/python_sdk/instance-management/Space%20management.ipynb).
**Action**: Assign space ID below
```
space_id = 'PASTE YOUR SPACE ID HERE'
```
You can use the `list` method to print all existing spaces.
```
client.spaces.list(limit=10)
```
To be able to interact with all resources available in Watson Machine Learning, you need to set the **space** which you will be using.
```
client.set.default_space(space_id)
```
<a id="definition"></a>
## 2. Optimizer definition
### Training data connection
Define connection information to training data CSV file. This example uses the [German Credit Risk dataset](https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cloud/data/credit_risk/credit_risk_training_light.csv).
```
filename = 'german_credit_data_biased_training.csv'
```
Download training data from git repository and split for training and test set.
```
import os, wget
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
url = 'https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cpd3.5/data/credit_risk/german_credit_data_biased_training.csv'
if not os.path.isfile(filename): wget.download(url)
credit_risk_df = pd.read_csv(filename)
X = credit_risk_df.drop(['Risk'], axis=1)
y = credit_risk_df['Risk']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1)
credit_risk_df.head()
```
Upload the training dataset as a Cloud Pak for Data data asset and define the connection information to the training data.
```
X_train.join(y_train).to_csv(filename, index=False, mode="w+")
asset_details = client.data_assets.create(filename, filename)
asset_details
from ibm_watson_machine_learning.helpers import DataConnection, AssetLocation
credit_risk_conn = DataConnection(
location=AssetLocation(asset_id=client.data_assets.get_id(asset_details)))
training_data_reference=[credit_risk_conn]
```
### Optimizer configuration
Provide the input information for AutoAI optimizer:
- `name` - experiment name
- `prediction_type` - type of the problem
- `prediction_column` - target column name
- `scoring` - optimization metric
- `daub_include_only_estimators` - estimators which will be included during AutoAI training. More available estimators can be found in `experiment.ClassificationAlgorithms` enum
```
from ibm_watson_machine_learning.experiment import AutoAI
experiment = AutoAI(wml_credentials, space_id=space_id)
pipeline_optimizer = experiment.optimizer(
name='Credit Risk Bias detection in AutoAI',
desc='Sample notebook',
prediction_type=AutoAI.PredictionType.BINARY,
prediction_column='Risk',
scoring=AutoAI.Metrics.ROC_AUC_SCORE,
daub_include_only_estimators=[experiment.ClassificationAlgorithms.XGB]
)
```
<a id="run"></a>
## 3. Experiment run
Call the `fit()` method to trigger the AutoAI experiment. You can either use interactive mode (synchronous job) or background mode (asynchronous job) by specifying `background_mode=True`.
```
run_details = pipeline_optimizer.fit(
training_data_reference=training_data_reference,
background_mode=False)
pipeline_optimizer.get_run_status()
```
You can list trained pipelines and evaluation metrics information in
the form of a Pandas DataFrame by calling the `summary()` method.
```
summary = pipeline_optimizer.summary()
summary
```
### Get selected pipeline model
Download pipeline model object from the AutoAI training job.
```
best_pipeline = pipeline_optimizer.get_pipeline()
```
<a id="bias"></a>
## 4. Bias detection and mitigation
The `fairness_info` dictionary contains some fairness-related metadata. The favorable and unfavorable labels are values of the target column that indicate whether the loan was granted or denied. A protected attribute is a feature that partitions the population into groups whose outcomes should have parity. The credit-risk dataset has two protected attribute columns, sex and age, and each protected attribute has a privileged and an unprivileged group.
Note that to use fairness metrics from `lale` with numpy arrays, `protected_attributes.feature` needs to be passed as the index of the column in the dataset, not as its name.
```
fairness_info = {'favorable_labels': ['No Risk'],
'protected_attributes': [
{'feature': X.columns.get_loc('Sex'),'privileged_groups': ['male']},
{'feature': X.columns.get_loc('Age'), 'privileged_groups': [[26, 40]]}]}
fairness_info
```
### Calculate fairness metrics
We will calculate some model metrics. Accuracy describes how accurate the model is on the dataset.
Disparate impact compares the rate of favorable outcomes between an unprivileged group and a privileged group,
so it needs to check the protected attribute to determine group membership for the sample record at hand.
The third metric combines disparate impact with accuracy; the best value of this combined score is 1.0.
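For intuition, disparate impact is the ratio of favorable-outcome rates, `P(favorable | unprivileged) / P(favorable | privileged)`, with values near 1.0 indicating parity. Below is a minimal standalone sketch of that ratio, just to show what the scorer measures; the helper `disparate_impact_ratio` and the toy `preds`/`sex` values are made up for illustration and are not the `lale.lib.aif360` implementation.
```
import numpy as np

def disparate_impact_ratio(y_pred, protected, favorable='No Risk', privileged='male'):
    # ratio of favorable-outcome rates: unprivileged group vs. privileged group
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected)
    priv = protected == privileged
    rate_priv = np.mean(y_pred[priv] == favorable)
    rate_unpriv = np.mean(y_pred[~priv] == favorable)
    return rate_unpriv / rate_priv

# toy example with five predictions and the corresponding 'Sex' values
preds = ['No Risk', 'Risk', 'No Risk', 'No Risk', 'Risk']
sex = ['male', 'male', 'female', 'female', 'female']
print(disparate_impact_ratio(preds, sex))  # (2/3) / (1/2) ~= 1.33
```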
```
import sklearn.metrics
from lale.lib.aif360 import disparate_impact, accuracy_and_disparate_impact
accuracy_scorer = sklearn.metrics.make_scorer(sklearn.metrics.accuracy_score)
print(f'accuracy {accuracy_scorer(best_pipeline, X_test.values, y_test.values):.1%}')
disparate_impact_scorer = disparate_impact(**fairness_info)
print(f'disparate impact {disparate_impact_scorer(best_pipeline, X_test.values, y_test.values):.2f}')
combined_scorer = accuracy_and_disparate_impact(**fairness_info)
print(f'accuracy and disparate impact metric {combined_scorer(best_pipeline, X_test.values, y_test.values):.2f}')
```
### Mitigation
`Hyperopt` minimizes (`best_score` - `score_returned_by_the_scorer`), where `best_score` is an argument to `Hyperopt` and `score_returned_by_the_scorer` is the value returned by the scorer for each evaluation point. For example, with `best_score=1.0`, a candidate whose combined accuracy-and-disparate-impact score is 0.85 contributes an objective value of 0.15. We will use `Hyperopt` to tune the hyperparameters of the AutoAI pipeline and obtain a new, fairer model.
```
from sklearn.linear_model import LogisticRegression as LR
from sklearn.tree import DecisionTreeClassifier as Tree
from sklearn.neighbors import KNeighborsClassifier as KNN
from lale.lib.lale import Hyperopt
from lale.lib.aif360 import FairStratifiedKFold
from lale import wrap_imported_operators
wrap_imported_operators()
prefix = best_pipeline.remove_last().freeze_trainable()
prefix.visualize()
new_pipeline = prefix >> (LR | Tree | KNN)
new_pipeline.visualize()
fair_cv = FairStratifiedKFold(**fairness_info, n_splits=3)
pipeline_fairer = new_pipeline.auto_configure(
X_train.values, y_train.values, optimizer=Hyperopt, cv=fair_cv,
max_evals=10, scoring=combined_scorer, best_score=1.0)
```
As with any trained model, we can evaluate and visualize the result.
```
print(f'accuracy {accuracy_scorer(pipeline_fairer, X_test.values, y_test.values):.1%}')
print(f'disparate impact {disparate_impact_scorer(pipeline_fairer, X_test.values, y_test.values):.2f}')
print(f'accuracy and disparate impact metric {combined_scorer(pipeline_fairer, X_test.values, y_test.values):.2f}')
pipeline_fairer.visualize()
```
As the result demonstrates, the best model found by AI Automation
has lower accuracy but a much better disparate impact than the one we saw
before. Also, it has tuned the repair level and
has picked and tuned a classifier. These results may vary by dataset and search space.
You can get the source code of the created pipeline. You just need to change the cell type below from `Raw NBConvert` to `code`.
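The hidden cell itself is not shown here; as a rough sketch, one way to print a Lale pipeline's source is its `pretty_print` method (assuming the trained pipeline object exposes it, as Lale operators generally do):
```
# Sketch: print the Python source of the refined pipeline (assumes
# `pipeline_fairer` from the cells above and Lale's pretty_print API).
print(pipeline_fairer.pretty_print())
```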
<a id="scoring"></a>
## 5. Deploy and Score
In this section you will learn how to deploy and score the Lale pipeline model using a WML instance.
#### Custom software_specification
The created model is an AutoAI model refined with Lale. We will create a new software specification based on the default Python 3.7
environment, extended with the `autoai-libs` package.
```
base_sw_spec_uid = client.software_specifications.get_uid_by_name("default_py3.7")
print("Id of default Python 3.7 software specification is: ", base_sw_spec_uid)
url = 'https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cpd3.5/configs/config.yaml'
if not os.path.isfile('config.yaml'): wget.download(url)
!cat config.yaml
```
The `config.yaml` file describes the details of the package extension. Now you need to store the new package extension with `APIClient`.
```
meta_prop_pkg_extn = {
client.package_extensions.ConfigurationMetaNames.NAME: "Scikt with autoai-libs",
client.package_extensions.ConfigurationMetaNames.DESCRIPTION: "Pkg extension for autoai-libs",
client.package_extensions.ConfigurationMetaNames.TYPE: "conda_yml"
}
pkg_extn_details = client.package_extensions.store(meta_props=meta_prop_pkg_extn, file_path="config.yaml")
pkg_extn_uid = client.package_extensions.get_uid(pkg_extn_details)
pkg_extn_url = client.package_extensions.get_href(pkg_extn_details)
```
Create a new software specification and add the created package extension to it.
```
meta_prop_sw_spec = {
client.software_specifications.ConfigurationMetaNames.NAME: "Mitigated AutoAI bases on scikit spec",
client.software_specifications.ConfigurationMetaNames.DESCRIPTION: "Software specification for scikt with autoai-libs",
client.software_specifications.ConfigurationMetaNames.BASE_SOFTWARE_SPECIFICATION: {"guid": base_sw_spec_uid}
}
sw_spec_details = client.software_specifications.store(meta_props=meta_prop_sw_spec)
sw_spec_uid = client.software_specifications.get_uid(sw_spec_details)
status = client.software_specifications.add_package_extension(sw_spec_uid, pkg_extn_uid)
```
You can get details of the created software specification using `client.software_specifications.get_details(sw_spec_uid)`.
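For example (optional, using the call mentioned above):
```
# Inspect the software specification created above.
client.software_specifications.get_details(sw_spec_uid)
```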
### Store the model
```
model_props = {
client.repository.ModelMetaNames.NAME: "Fairer AutoAI model",
client.repository.ModelMetaNames.TYPE: 'scikit-learn_0.23',
client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_uid
}
feature_vector = list(X.columns)
published_model = client.repository.store_model(
model=pipeline_fairer.export_to_sklearn_pipeline(),  # store the mitigated (fairer) pipeline
meta_props=model_props,
training_data=X_train.values,
training_target=y_train.values,
feature_names=feature_vector,
label_column_names=['Risk']
)
published_model_uid = client.repository.get_model_id(published_model)
```
### Deployment creation
```
metadata = {
client.deployments.ConfigurationMetaNames.NAME: "Deployment of fairer model",
client.deployments.ConfigurationMetaNames.ONLINE: {}
}
created_deployment = client.deployments.create(published_model_uid, meta_props=metadata)
deployment_id = client.deployments.get_uid(created_deployment)
```
#### Deployment scoring
You need to pass scoring values as input data to the deployed model. Use the `client.deployments.score()` method to get predictions from the deployed model.
```
values = X_test.values
scoring_payload = {
"input_data": [{
'values': values[:5]
}]
}
predictions = client.deployments.score(deployment_id, scoring_payload)
predictions
```
<a id="cleanup"></a>
## 6. Clean up
If you want to clean up all created assets:
- experiments
- trainings
- pipelines
- model definitions
- models
- functions
- deployments
please follow up this sample [notebook](https://github.com/IBM/watson-machine-learning-samples/blob/master/cpd3.5/notebooks/python_sdk/instance-management/Machine%20Learning%20artifacts%20management.ipynb).
<a id="summary"></a>
## 7. Summary and next steps
You successfully completed this notebook!
Check out the documentation for the packages used:
- `ibm-watson-machine-learning` [Online Documentation](https://www.ibm.com/cloud/watson-studio/autoai)
- `lale`: https://github.com/IBM/lale
- `aif360`: https://aif360.mybluemix.net/
### Authors
**Dorota Dydo-Rożniecka**, Intern in Watson Machine Learning at IBM
Copyright © 2020, 2021 IBM. This notebook and its source code are released under the terms of the MIT License.
|
github_jupyter
|
username = 'PASTE YOUR USERNAME HERE'
password = 'PASTE YOUR PASSWORD HERE'
url = 'PASTE THE PLATFORM URL HERE'
wml_credentials = {
"username": username,
"password": password,
"url": url,
"instance_id": 'openshift',
"version": '3.5'
}
!pip install -U ibm-watson-machine-learning | tail -n 1
!pip install -U scikit-learn==0.23.1 | tail -n 1
!pip install -U autoai-libs | tail -n 1
!pip install -U lale | tail -n 1
!pip install -U aif360 | tail -n 1
from ibm_watson_machine_learning import APIClient
client = APIClient(wml_credentials)
space_id = 'PASTE YOUR SPACE ID HERE'
client.spaces.list(limit=10)
client.set.default_space(space_id)
filename = 'german_credit_data_biased_training.csv'
import os, wget
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
url = 'https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cpd3.5/data/credit_risk/german_credit_data_biased_training.csv'
if not os.path.isfile(filename): wget.download(url)
credit_risk_df = pd.read_csv(filename)
X = credit_risk_df.drop(['Risk'], axis=1)
y = credit_risk_df['Risk']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1)
credit_risk_df.head()
X_train.join(y_train).to_csv(filename, index=False, mode="w+")
asset_details = client.data_assets.create(filename, filename)
asset_details
from ibm_watson_machine_learning.helpers import DataConnection, AssetLocation
credit_risk_conn = DataConnection(
location=AssetLocation(asset_id=client.data_assets.get_id(asset_details)))
training_data_reference=[credit_risk_conn]
from ibm_watson_machine_learning.experiment import AutoAI
experiment = AutoAI(wml_credentials, space_id=space_id)
pipeline_optimizer = experiment.optimizer(
name='Credit Risk Bias detection in AutoAI',
desc='Sample notebook',
prediction_type=AutoAI.PredictionType.BINARY,
prediction_column='Risk',
scoring=AutoAI.Metrics.ROC_AUC_SCORE,
daub_include_only_estimators=[experiment.ClassificationAlgorithms.XGB]
)
run_details = pipeline_optimizer.fit(
training_data_reference=training_data_reference,
background_mode=False)
pipeline_optimizer.get_run_status()
summary = pipeline_optimizer.summary()
summary
best_pipeline = pipeline_optimizer.get_pipeline()
fairness_info = {'favorable_labels': ['No Risk'],
'protected_attributes': [
{'feature': X.columns.get_loc('Sex'),'privileged_groups': ['male']},
{'feature': X.columns.get_loc('Age'), 'privileged_groups': [[26, 40]]}]}
fairness_info
import sklearn.metrics
from lale.lib.aif360 import disparate_impact, accuracy_and_disparate_impact
accuracy_scorer = sklearn.metrics.make_scorer(sklearn.metrics.accuracy_score)
print(f'accuracy {accuracy_scorer(best_pipeline, X_test.values, y_test.values):.1%}')
disparate_impact_scorer = disparate_impact(**fairness_info)
print(f'disparate impact {disparate_impact_scorer(best_pipeline, X_test.values, y_test.values):.2f}')
combined_scorer = accuracy_and_disparate_impact(**fairness_info)
print(f'accuracy and disparate impact metric {combined_scorer(best_pipeline, X_test.values, y_test.values):.2f}')
from sklearn.linear_model import LogisticRegression as LR
from sklearn.tree import DecisionTreeClassifier as Tree
from sklearn.neighbors import KNeighborsClassifier as KNN
from lale.lib.lale import Hyperopt
from lale.lib.aif360 import FairStratifiedKFold
from lale import wrap_imported_operators
wrap_imported_operators()
prefix = best_pipeline.remove_last().freeze_trainable()
prefix.visualize()
new_pipeline = prefix >> (LR | Tree | KNN)
new_pipeline.visualize()
fair_cv = FairStratifiedKFold(**fairness_info, n_splits=3)
pipeline_fairer = new_pipeline.auto_configure(
X_train.values, y_train.values, optimizer=Hyperopt, cv=fair_cv,
max_evals=10, scoring=combined_scorer, best_score=1.0)
print(f'accuracy {accuracy_scorer(pipeline_fairer, X_test.values, y_test.values):.1%}')
print(f'disparate impact {disparate_impact_scorer(pipeline_fairer, X_test.values, y_test.values):.2f}')
print(f'accuracy and disparate impact metric {combined_scorer(pipeline_fairer, X_test.values, y_test.values):.2f}')
pipeline_fairer.visualize()
base_sw_spec_uid = client.software_specifications.get_uid_by_name("default_py3.7")
print("Id of default Python 3.7 software specification is: ", base_sw_spec_uid)
url = 'https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cpd3.5/configs/config.yaml'
if not os.path.isfile('config.yaml'): wget.download(url)
!cat config.yaml
meta_prop_pkg_extn = {
client.package_extensions.ConfigurationMetaNames.NAME: "Scikt with autoai-libs",
client.package_extensions.ConfigurationMetaNames.DESCRIPTION: "Pkg extension for autoai-libs",
client.package_extensions.ConfigurationMetaNames.TYPE: "conda_yml"
}
pkg_extn_details = client.package_extensions.store(meta_props=meta_prop_pkg_extn, file_path="config.yaml")
pkg_extn_uid = client.package_extensions.get_uid(pkg_extn_details)
pkg_extn_url = client.package_extensions.get_href(pkg_extn_details)
meta_prop_sw_spec = {
client.software_specifications.ConfigurationMetaNames.NAME: "Mitigated AutoAI bases on scikit spec",
client.software_specifications.ConfigurationMetaNames.DESCRIPTION: "Software specification for scikt with autoai-libs",
client.software_specifications.ConfigurationMetaNames.BASE_SOFTWARE_SPECIFICATION: {"guid": base_sw_spec_uid}
}
sw_spec_details = client.software_specifications.store(meta_props=meta_prop_sw_spec)
sw_spec_uid = client.software_specifications.get_uid(sw_spec_details)
status = client.software_specifications.add_package_extension(sw_spec_uid, pkg_extn_uid)
model_props = {
client.repository.ModelMetaNames.NAME: "Fairer AutoAI model",
client.repository.ModelMetaNames.TYPE: 'scikit-learn_0.23',
client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_uid
}
feature_vector = list(X.columns)
published_model = client.repository.store_model(
model=pipeline_fairer.export_to_sklearn_pipeline(),  # store the mitigated (fairer) pipeline
meta_props=model_props,
training_data=X_train.values,
training_target=y_train.values,
feature_names=feature_vector,
label_column_names=['Risk']
)
published_model_uid = client.repository.get_model_id(published_model)
metadata = {
client.deployments.ConfigurationMetaNames.NAME: "Deployment of fairer model",
client.deployments.ConfigurationMetaNames.ONLINE: {}
}
created_deployment = client.deployments.create(published_model_uid, meta_props=metadata)
deployment_id = client.deployments.get_uid(created_deployment)
values = X_test.values
scoring_payload = {
"input_data": [{
'values': values[:5]
}]
}
predictions = client.deployments.score(deployment_id, scoring_payload)
predictions
| 0.510741 | 0.971537 |
<a href="https://colab.research.google.com/github/whobbes/fastai/blob/master/lesson1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
#Image classification with Convolutional Neural Networks
*Welcome* to the first week of the second deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.
## Fast AI setup
```
# Check python version
import sys
print(sys.version)
# Get libraries
!pip install fastai==0.7.0
!pip install torchtext==0.2.3
!pip3 install http://download.pytorch.org/whl/cu80/torch-0.3.0.post4-cp36-cp36m-linux_x86_64.whl
!pip3 install torchvision
# Lesson 4
# !pip3 install spacy
# !python -m spacy download en
# memory footprint support libraries/code
!ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi
!pip install gputil
!pip install psutil
!pip install humanize
```
## GPU usage
```
import psutil
import humanize
import os
import GPUtil as GPU
GPUs = GPU.getGPUs()
# XXX: only one GPU on Colab and isn’t guaranteed
gpu = GPUs[0]
def printm():
process = psutil.Process(os.getpid())
print("Gen RAM Free: " + humanize.naturalsize( psutil.virtual_memory().available ), " | Proc size: " + humanize.naturalsize( process.memory_info().rss))
print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))
printm()
```
## Introduction to our first task: 'Dogs vs Cats'
We're going to try to create a model to enter the Dogs vs Cats competition at Kaggle. There are 25,000 labelled dog and cat photos available for training, and 12,500 in the test set that we have to try to label for this competition. According to the Kaggle web-site, when this competition was launched (end of 2013): "State of the art: The current literature suggests machine classifiers can score above 80% accuracy on this task". So if we can beat 80%, then we will be at the cutting edge as of 2013!
```
# Put these at the top of every notebook, to get automatic reloading and inline plotting
# %reload_ext autoreload
# %autoreload 2
%matplotlib inline
```
Here we import the libraries we need. We'll learn about what each does during the course.
```
# This file contains all the main external libs we'll use
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
```
`PATH` is the path to your data - if you use the recommended setup approaches from the lesson, you won't need to change this. `sz` is the size that the images will be resized to in order to ensure that the training runs quickly. We'll be talking about this parameter a lot during the course. Leave it at `224` for now.
```
PATH = "data/dogscats/"
sz=224
```
It's important that you have a working NVidia GPU set up. The programming framework used behind the scenes to work with NVidia GPUs is called CUDA. Therefore, you need to ensure the following line returns `True` before you proceed. If you have problems with this, please check the FAQ and ask for help on [the forums](http://forums.fast.ai).
```
torch.cuda.is_available()
```
In addition, NVidia provides special accelerated functions for deep learning in a package called CuDNN. Although not strictly necessary, it will improve training performance significantly, and is included by default in all supported fastai configurations. Therefore, if the following does not return `True`, you may want to look into why.
```
torch.backends.cudnn.enabled
# Get the file from fast.ai URL, unzip it, and put it into the folder 'data'
# This uses -qq to make the unzipping less verbose.
!wget http://files.fast.ai/data/dogscats.zip && unzip -qq dogscats.zip -d data/
# Check to make sure the data is where you think it is:
!ls
# Check to make sure the folders all unzipped properly:
!ls data/dogscats
```
### Extra steps if NOT using Crestle or Paperspace or our scripts
The dataset is available at http://files.fast.ai/data/dogscats.zip. You can download it directly on your server by running the following line in your terminal. `wget http://files.fast.ai/data/dogscats.zip`. You should put the data in a subdirectory of this notebook's directory, called `data/`. Note that this data is already available in Crestle and the Paperspace fast.ai template.
### Extra steps if using Crestle
Crestle has the datasets required for fast.ai in /datasets, so we'll create symlinks to the data we want for this competition. (NB: we can't write to /datasets, but we need a place to store temporary files, so we create our own writable directory to put the symlinks in, and we also take advantage of Crestle's `/cache/` faster temporary storage space.)
To run these commands (**which you should only do if using Crestle**) remove the `#` characters from the start of each line.
```
# os.makedirs('data/dogscats/models', exist_ok=True)
# !ln -s /datasets/fast.ai/dogscats/train {PATH}
# !ln -s /datasets/fast.ai/dogscats/test {PATH}
# !ln -s /datasets/fast.ai/dogscats/valid {PATH}
# os.makedirs('/cache/tmp', exist_ok=True)
# !ln -fs /cache/tmp {PATH}
# os.makedirs('/cache/tmp', exist_ok=True)
# !ln -fs /cache/tmp {PATH}
```
## First look at cat pictures
Our library will assume that you have *train* and *valid* directories. It also assumes that each dir will have subdirs for each class you wish to recognize (in this case, 'cats' and 'dogs').
```
os.listdir(PATH)
os.listdir(f'{PATH}valid')
files = os.listdir(f'{PATH}valid/cats')[:5]
files
img = plt.imread(f'{PATH}valid/cats/{files[0]}')
plt.imshow(img);
```
Here is how the raw data looks like
```
img.shape
img[:4,:4]
```
## Our first model: quick start
We're going to use a <b>pre-trained</b> model, that is, a model created by someone else to solve a different problem. Instead of building a model from scratch to solve a similar problem, we'll use a model trained on ImageNet (1.2 million images and 1000 classes) as a starting point. The model is a Convolutional Neural Network (CNN), a type of Neural Network that builds state-of-the-art models for computer vision. We'll be learning all about CNNs during this course.
We will be using the <b>resnet34</b> model. resnet34 is a version of the model that won the 2015 ImageNet competition. Here is more info on [resnet models](https://github.com/KaimingHe/deep-residual-networks). We'll be studying them in depth later, but for now we'll focus on using them effectively.
Here's how to train and evaluate a *dogs vs cats* model in 3 lines of code, and under 20 seconds:
```
# Uncomment the below if you need to reset your precomputed activations
# shutil.rmtree(f'{PATH}tmp', ignore_errors=True)
arch=resnet34
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(0.01, 2)
```
How good is this model? Well, as we mentioned, prior to this competition, the state of the art was 80% accuracy. But the competition resulted in a huge jump to 98.9% accuracy, with the author of a popular deep learning library winning the competition. Extraordinarily, less than 4 years later, we can now beat that result in seconds! Even last year in this same course, our initial model had 98.3% accuracy, which is nearly double the error we're getting just a year later, and that took around 10 minutes to compute.
```
```
## Analyzing results: looking at pictures
As well as looking at the overall metrics, it's also a good idea to look at examples of each of:
1. A few correct labels at random
2. A few incorrect labels at random
3. The most correct labels of each class (i.e. those with highest probability that are correct)
4. The most incorrect labels of each class (i.e. those with highest probability that are incorrect)
5. The most uncertain labels (i.e. those with probability closest to 0.5).
```
# This is the label for a val data
data.val_y
# from here we know that 'cats' is label 0 and 'dogs' is label 1.
data.classes
# this gives prediction for validation set. Predictions are in log scale
log_preds = learn.predict()
log_preds.shape
log_preds[:10]
preds = np.argmax(log_preds, axis=1) # from log probabilities to 0 or 1
probs = np.exp(log_preds[:,1]) # pr(dog)
def rand_by_mask(mask): return np.random.choice(np.where(mask)[0], min(len(preds), 4), replace=False)
def rand_by_correct(is_correct): return rand_by_mask((preds == data.val_y)==is_correct)
def plots(ims, figsize=(12,6), rows=1, titles=None):
f = plt.figure(figsize=figsize)
for i in range(len(ims)):
sp = f.add_subplot(rows, len(ims)//rows, i+1)
sp.axis('Off')
if titles is not None: sp.set_title(titles[i], fontsize=16)
plt.imshow(ims[i])
def load_img_id(ds, idx): return np.array(PIL.Image.open(PATH+ds.fnames[idx]))
def plot_val_with_title(idxs, title):
imgs = [load_img_id(data.val_ds,x) for x in idxs]
title_probs = [probs[x] for x in idxs]
print(title)
return plots(imgs, rows=1, titles=title_probs, figsize=(16,8)) if len(imgs)>0 else print('Not Found.')
# 1. A few correct labels at random
plot_val_with_title(rand_by_correct(True), "Correctly classified")
# 2. A few incorrect labels at random
plot_val_with_title(rand_by_correct(False), "Incorrectly classified")
def most_by_mask(mask, mult):
idxs = np.where(mask)[0]
return idxs[np.argsort(mult * probs[idxs])[:4]]
def most_by_correct(y, is_correct):
mult = -1 if (y==1)==is_correct else 1
return most_by_mask(((preds == data.val_y)==is_correct) & (data.val_y == y), mult)
plot_val_with_title(most_by_correct(0, True), "Most correct cats")
plot_val_with_title(most_by_correct(1, True), "Most correct dogs")
plot_val_with_title(most_by_correct(0, False), "Most incorrect cats")
plot_val_with_title(most_by_correct(1, False), "Most incorrect dogs")
most_uncertain = np.argsort(np.abs(probs -0.5))[:4]
plot_val_with_title(most_uncertain, "Most uncertain predictions")
```
## Choosing a learning rate
The *learning rate* determines how quickly or how slowly you want to update the *weights* (or *parameters*). Learning rate is one of the most difficult parameters to set, because it significantly affects model performance.
The method `learn.lr_find()` helps you find an optimal learning rate. It uses the technique developed in the 2015 paper [Cyclical Learning Rates for Training Neural Networks](http://arxiv.org/abs/1506.01186), where we simply keep increasing the learning rate from a very small value, until the loss stops decreasing. We can plot the learning rate across batches to see what this looks like.
We first create a new learner, since we want to know how to set the learning rate for a new (untrained) model.
```
learn = ConvLearner.pretrained(arch, data, precompute=True)
lrf=learn.lr_find()
```
Our `learn` object contains an attribute `sched` that contains our learning rate scheduler, and has some convenient plotting functionality including this one:
```
learn.sched.plot_lr()
```
Note that in the previous plot *iteration* is one iteration (or *minibatch*) of SGD. In one epoch there are
(num_train_samples/batch_size) iterations of SGD; for example, with roughly 23,000 training images and a batch size of 64, that is about 360 iterations per epoch.
We can see the plot of loss versus learning rate to see where our loss stops decreasing:
```
learn.sched.plot()
```
The loss is still clearly improving at lr=1e-2 (0.01), so that's what we use. Note that the optimal learning rate can change as we train the model, so you may want to re-run this function from time to time.
## Improving our model
### Data augmentation
If you try training for more epochs, you'll notice that we start to *overfit*, which means that our model is learning to recognize the specific images in the training set, rather than generalizing such that we also get good results on the validation set. One way to fix this is to effectively create more data, through *data augmentation*. This refers to randomly changing the images in ways that shouldn't impact their interpretation, such as horizontal flipping, zooming, and rotating.
We can do this by passing `aug_tfms` (*augmentation transforms*) to `tfms_from_model`, with a list of functions to apply that randomly change the image however we wish. For photos that are largely taken from the side (e.g. most photos of dogs and cats, as opposed to photos taken from the top down, such as satellite imagery) we can use the pre-defined list of functions `transforms_side_on`. We can also specify random zooming of images up to specified scale by adding the `max_zoom` parameter.
```
tfms = tfms_from_model(resnet34, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
def get_augs():
data = ImageClassifierData.from_paths(PATH, bs=2, tfms=tfms, num_workers=1)
x,_ = next(iter(data.aug_dl))
return data.trn_ds.denorm(x)[1]
ims = np.stack([get_augs() for i in range(6)])
plots(ims, rows=2)
```
Let's create a new `data` object that includes this augmentation in the transforms.
```
data = ImageClassifierData.from_paths(PATH, tfms=tfms)
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(1e-2, 1)
learn.precompute=False
```
By default when we create a learner, it sets all but the last layer to *frozen*. That means that it's still only updating the weights in the last layer when we call `fit`.
```
learn.fit(1e-2, 3, cycle_len=1)
```
What is that `cycle_len` parameter? What we've done here is used a technique called *stochastic gradient descent with restarts (SGDR)*, a variant of *learning rate annealing*, which gradually decreases the learning rate as training progresses. This is helpful because as we get closer to the optimal weights, we want to take smaller steps.
However, we may find ourselves in a part of the weight space that isn't very resilient - that is, small changes to the weights may result in big changes to the loss. We want to encourage our model to find parts of the weight space that are both accurate and stable. Therefore, from time to time we increase the learning rate (this is the 'restarts' in 'SGDR'), which will force the model to jump to a different part of the weight space if the current area is "spikey". Here's a picture of how that might look if we reset the learning rates 3 times (in this paper they call it a "cyclic LR schedule"):
<img src="https://github.com/fastai/fastai/blob/master/courses/dl1/images/sgdr.png?raw=1" width="80%">
(From the paper [Snapshot Ensembles](https://arxiv.org/abs/1704.00109)).
The number of epochs between resetting the learning rate is set by `cycle_len`, and the number of times this happens is referred to as the *number of cycles*, and is what we're actually passing as the 2nd parameter to `fit()`. So here's what our actual learning rates looked like:
```
learn.sched.plot_lr()
```
Our validation loss isn't improving much, so there's probably no point further training the last layer on its own.
Since we've got a pretty good model at this point, we might want to save it so we can load it again later without training it from scratch.
```
learn.save('224_lastlayer')
learn.load('224_lastlayer')
```
### Fine-tuning and differential learning rate annealing
Now that we have a good final layer trained, we can try fine-tuning the other layers. To tell the learner that we want to unfreeze the remaining layers, just call (surprise surprise!) `unfreeze()`.
```
learn.unfreeze()
```
Note that the other layers have *already* been trained to recognize imagenet photos (whereas our final layers were randomly initialized), so we want to be careful not to destroy the carefully tuned weights that are already there.
Generally speaking, the earlier layers (as we've seen) have more general-purpose features. Therefore we would expect them to need less fine-tuning for new datasets. For this reason we will use different learning rates for different layers: the first few layers will be at 1e-4, the middle layers at 1e-3, and our FC layers we'll leave at 1e-2 as before. We refer to this as *differential learning rates*, although there's no standard name for this technique in the literature that we're aware of.
```
lr=np.array([1e-4,1e-3,1e-2])
learn.fit(lr, 3, cycle_len=1, cycle_mult=2)
```
Another trick we've used here is adding the `cycle_mult` parameter. Take a look at the following chart, and see if you can figure out what the parameter is doing:
```
learn.sched.plot_lr()
```
Note that what's being plotted above is the learning rate of the *final layers*. The learning rates of the earlier layers are fixed at the same multiples of the final-layer rates as we initially requested (i.e. the first layers have 100x smaller, and the middle layers 10x smaller, learning rates, since we set `lr=np.array([1e-4,1e-3,1e-2])`).
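If you want to play with how `cycle_len` and `cycle_mult` shape this kind of schedule outside of a training run, here is a small standalone sketch of cosine annealing with warm restarts. The helper `sgdr_schedule` is made up for illustration (plain numpy/matplotlib, not the fastai scheduler), and the 360 iterations per epoch is just an assumed round number.
```
import numpy as np
import matplotlib.pyplot as plt

def sgdr_schedule(lr_max, cycle_len=1, cycle_mult=2, n_cycles=3, iters_per_epoch=360):
    lrs = []
    epochs = cycle_len
    for _ in range(n_cycles):
        n_iters = epochs * iters_per_epoch
        t = np.arange(n_iters) / n_iters
        lrs.append(lr_max / 2 * (1 + np.cos(np.pi * t)))  # anneal from lr_max towards 0
        epochs *= cycle_mult                               # each restart lasts longer
    return np.concatenate(lrs)

plt.plot(sgdr_schedule(1e-2))
plt.xlabel('iteration'); plt.ylabel('learning rate');
```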
```
learn.save('224_all')
learn.load('224_all')
```
There is something else we can do with data augmentation: use it at *inference* time (also known as *test* time). Not surprisingly, this is known as *test time augmentation*, or just *TTA*.
TTA simply makes predictions not just on the images in your validation set, but also makes predictions on a number of randomly augmented versions of them too (by default, it uses the original image along with 4 randomly augmented versions). It then takes the average prediction from these images, and uses that. To use TTA on the validation set, we can use the learner's `TTA()` method.
```
log_preds,y = learn.TTA()
probs = np.mean(np.exp(log_preds),0)
accuracy_np(probs, y)
```
I generally see about a 10-20% reduction in error on this dataset when using TTA at this point, which is an amazing result for such a quick and easy technique!
## Analyzing results
### Confusion matrix
```
preds = np.argmax(probs, axis=1)
probs = probs[:,1]
```
A common way to analyze the result of a classification model is to use a [confusion matrix](http://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/). Scikit-learn has a convenient function we can use for this purpose:
```
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y, preds)
```
We can just print out the confusion matrix, or we can show a graphical view (which is mainly useful for datasets with a larger number of categories).
```
plot_confusion_matrix(cm, data.classes)
```
### Looking at pictures again
```
plot_val_with_title(most_by_correct(0, False), "Most incorrect cats")
plot_val_with_title(most_by_correct(1, False), "Most incorrect dogs")
```
## Review: easy steps to train a world-class image classifier
1. precompute=True
2. Use `lr_find()` to find highest learning rate where loss is still clearly improving
3. Train last layer from precomputed activations for 1-2 epochs
4. Train last layer with data augmentation (i.e. precompute=False) for 2-3 epochs with cycle_len=1
5. Unfreeze all layers
6. Set earlier layers to 3x-10x lower learning rate than next higher layer
7. Use `lr_find()` again
8. Train full network with cycle_mult=2 until over-fitting
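As a compact reference, here is the whole recipe collected into one sketch. It only reuses calls already shown in this notebook (and assumes the fastai imports and `PATH` from the top); the learning rates and epoch counts are simply the ones used above, not universal settings.
```
# Sketch: the full training recipe from this lesson, end to end.
arch = resnet34
sz = 224
tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_paths(PATH, tfms=tfms)

learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.lr_find()                               # steps 1-2: pick the highest LR where loss still improves
learn.fit(1e-2, 2)                            # step 3: train the last layer on precomputed activations
learn.precompute = False
learn.fit(1e-2, 3, cycle_len=1)               # step 4: train the last layer with augmentation
learn.unfreeze()                              # step 5: unfreeze all layers
lr = np.array([1e-4, 1e-3, 1e-2])             # step 6: differential learning rates
learn.lr_find()                               # step 7: optionally re-check the LR
learn.fit(lr, 3, cycle_len=1, cycle_mult=2)   # step 8: train the full network with SGDR
```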
## Understanding the code for our first model
Let's look at the Dogs v Cats code line by line.
**tfms** stands for *transformations*. `tfms_from_model` takes care of resizing, image cropping, initial normalization (creating data with (mean,stdev) of (0,1)), and more.
```
tfms = tfms_from_model(resnet34, sz)
```
We need a <b>path</b> that points to the dataset. In this path we will also store temporary data and final results. `ImageClassifierData.from_paths` reads data from a provided path and creates a dataset ready for training.
```
data = ImageClassifierData.from_paths(PATH, tfms=tfms)
```
`ConvLearner.pretrained` builds a *learner* that contains a pre-trained model. The last layer of the model needs to be replaced with a layer of the right dimensions. The pre-trained model was trained for 1000 classes, therefore the final layer predicts a vector of 1000 probabilities. The model for cats and dogs needs to output a two dimensional vector. The diagram below shows in an example how this was done in one of the earliest successful CNNs. The layer "FC8" here would get replaced with a new layer with 2 outputs.
<img src="https://github.com/fastai/fastai/blob/master/courses/dl1/images/pretrained.png?raw=1" width="500">
[original image](https://image.slidesharecdn.com/practicaldeeplearning-160329181459/95/practical-deep-learning-16-638.jpg)
```
learn = ConvLearner.pretrained(resnet34, data, precompute=True)
```
*Parameters* are learned by fitting a model to the data. *Hyperparameters* are another kind of parameter, that cannot be directly learned from the regular training process. These parameters express “higher-level” properties of the model such as its complexity or how fast it should learn. Two examples of hyperparameters are the *learning rate* and the *number of epochs*.
During iterative training of a neural network, a *batch* or *mini-batch* is a subset of training samples used in one iteration of Stochastic Gradient Descent (SGD). An *epoch* is a single pass through the entire training set which consists of multiple iterations of SGD.
We can now *fit* the model; that is, use *gradient descent* to find the best parameters for the fully connected layer we added, that can separate cat pictures from dog pictures. We need to pass two hyperparameters: the *learning rate* (generally 1e-2 or 1e-3 is a good starting point, we'll look more at this next) and the *number of epochs* (you can pass in a higher number and just stop training when you see it's no longer improving, then re-run it with the number of epochs you found works well.)
```
learn.fit(1e-2, 1)
```
## Analyzing results: loss and accuracy
When we run `learn.fit` we print 3 performance values (see above). Here 0.03 is the value of the **loss** on the training set, 0.0226 is the value of the loss on the validation set and 0.9927 is the validation accuracy. What is the loss? What is accuracy? Why not just show accuracy?
**Accuracy** is the ratio of correct prediction to the total number of predictions.
In machine learning, the **loss** function (or cost function) represents the price paid for inaccuracy of predictions.
The loss associated with one example in binary classification is given by:
`-(y * log(p) + (1-y) * log (1-p))`
where `y` is the true label of `x` and `p` is the probability predicted by our model that the label is 1.
```
def binary_loss(y, p):
return np.mean(-(y * np.log(p) + (1-y)*np.log(1-p)))
acts = np.array([1, 0, 0, 1])
preds = np.array([0.99999, 0.000001, 0.000001, 0.999999])
binary_loss(acts, preds)
```
Note that in our toy example above our accuracy is 100% and, because the predictions are extremely confident, our loss is close to zero. Compare that to the loss of about 0.03 that we are getting while predicting cats and dogs. Exercise: play with `preds` (for example, make the predictions less confident) and watch how the loss changes.
**Example:** Here is an example of how to compute the loss for one example of a binary classification problem. Suppose we have an image x with label 1 and our model gives it a prediction of 0.9. In this case the loss should be small because our model is predicting label 1 with high probability.
`loss = -log(0.9) = 0.10`
Now suppose x has label 0 but our model is predicting 0.9. In this case our loss should be much larger.
loss = -log(1-0.9) = 2.30
- Exercise: look at the other cases and convince yourself that this make sense.
- Exercise: how would you rewrite `binary_loss` using `if` instead of `*` and `+`?
Why not just maximize accuracy? Accuracy is not a smooth function of the model's parameters, so the binary classification loss is an easier function to optimize.
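For the second exercise, one possible rewrite of `binary_loss` using an `if` per example is sketched below; the name `binary_loss_if` is made up, and it is intended to be equivalent to the vectorised version above.
```
# Sketch: binary_loss rewritten with an if per example instead of * and +.
import numpy as np

def binary_loss_if(y, p):
    losses = []
    for yi, pi in zip(y, p):
        if yi == 1:
            losses.append(-np.log(pi))      # true label 1: penalise low p
        else:
            losses.append(-np.log(1 - pi))  # true label 0: penalise high p
    return np.mean(losses)

acts = np.array([1, 0, 0, 1])
preds = np.array([0.9, 0.1, 0.2, 0.8])
print(binary_loss_if(acts, preds))  # ~0.164, same as binary_loss(acts, preds)
```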
```
```
|
github_jupyter
|
# Check python version
import sys
print(sys.version)
# Get libraries
!pip install fastai==0.7.0
!pip install torchtext==0.2.3
!pip3 install http://download.pytorch.org/whl/cu80/torch-0.3.0.post4-cp36-cp36m-linux_x86_64.whl
!pip3 install torchvision
# Lesson 4
# !pip3 install spacy
# !python -m spacy download en
# memory footprint support libraries/code
!ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi
!pip install gputil
!pip install psutil
!pip install humanize
import psutil
import humanize
import os
import GPUtil as GPU
GPUs = GPU.getGPUs()
# XXX: only one GPU on Colab and isn’t guaranteed
gpu = GPUs[0]
def printm():
process = psutil.Process(os.getpid())
print("Gen RAM Free: " + humanize.naturalsize( psutil.virtual_memory().available ), " | Proc size: " + humanize.naturalsize( process.memory_info().rss))
print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))
printm()
# Put these at the top of every notebook, to get automatic reloading and inline plotting
# %reload_ext autoreload
# %autoreload 2
%matplotlib inline
# This file contains all the main external libs we'll use
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
PATH = "data/dogscats/"
sz=224
torch.cuda.is_available()
torch.backends.cudnn.enabled
# Get the file from fast.ai URL, unzip it, and put it into the folder 'data'
# This uses -qq to make the unzipping less verbose.
!wget http://files.fast.ai/data/dogscats.zip && unzip -qq dogscats.zip -d data/
# Check to make sure the data is where you think it is:
!ls
# Check to make sure the folders all unzipped properly:
!ls data/dogscats
# os.makedirs('data/dogscats/models', exist_ok=True)
# !ln -s /datasets/fast.ai/dogscats/train {PATH}
# !ln -s /datasets/fast.ai/dogscats/test {PATH}
# !ln -s /datasets/fast.ai/dogscats/valid {PATH}
# os.makedirs('/cache/tmp', exist_ok=True)
# !ln -fs /cache/tmp {PATH}
# os.makedirs('/cache/tmp', exist_ok=True)
# !ln -fs /cache/tmp {PATH}
os.listdir(PATH)
os.listdir(f'{PATH}valid')
files = os.listdir(f'{PATH}valid/cats')[:5]
files
img = plt.imread(f'{PATH}valid/cats/{files[0]}')
plt.imshow(img);
img.shape
img[:4,:4]
# Uncomment the below if you need to reset your precomputed activations
# shutil.rmtree(f'{PATH}tmp', ignore_errors=True)
arch=resnet34
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(0.01, 2)
```
## Analyzing results: looking at pictures
As well as looking at the overall metrics, it's also a good idea to look at examples of each of:
1. A few correct labels at random
2. A few incorrect labels at random
3. The most correct labels of each class (i.e. those with highest probability that are correct)
4. The most incorrect labels of each class (i.e. those with highest probability that are incorrect)
5. The most uncertain labels (i.e. those with probability closest to 0.5).
## Choosing a learning rate
The *learning rate* determines how quickly or how slowly you want to update the *weights* (or *parameters*). Learning rate is one of the most difficult parameters to set, because it significantly affects model performance.
The method `learn.lr_find()` helps you find an optimal learning rate. It uses the technique developed in the 2015 paper [Cyclical Learning Rates for Training Neural Networks](http://arxiv.org/abs/1506.01186), where we simply keep increasing the learning rate from a very small value, until the loss stops decreasing. We can plot the learning rate across batches to see what this looks like.
We first create a new learner, since we want to know how to set the learning rate for a new (untrained) model.
Our `learn` object contains an attribute `sched` that contains our learning rate scheduler, and has some convenient plotting functionality including this one:
Note that in the previous plot *iteration* is one iteration (or *minibatch*) of SGD. In one epoch there are
(num_train_samples/batch_size) iterations of SGD.
We can see the plot of loss versus learning rate to see where our loss stops decreasing:
The loss is still clearly improving at lr=1e-2 (0.01), so that's what we use. Note that the optimal learning rate can change as we train the model, so you may want to re-run this function from time to time.
## Improving our model
### Data augmentation
If you try training for more epochs, you'll notice that we start to *overfit*, which means that our model is learning to recognize the specific images in the training set, rather than generalizing such that we also get good results on the validation set. One way to fix this is to effectively create more data, through *data augmentation*. This refers to randomly changing the images in ways that shouldn't impact their interpretation, such as horizontal flipping, zooming, and rotating.
We can do this by passing `aug_tfms` (*augmentation transforms*) to `tfms_from_model`, with a list of functions to apply that randomly change the image however we wish. For photos that are largely taken from the side (e.g. most photos of dogs and cats, as opposed to photos taken from the top down, such as satellite imagery) we can use the pre-defined list of functions `transforms_side_on`. We can also specify random zooming of images up to specified scale by adding the `max_zoom` parameter.
Let's create a new `data` object that includes this augmentation in the transforms.
By default when we create a learner, it sets all but the last layer to *frozen*. That means that it's still only updating the weights in the last layer when we call `fit`.
What is that `cycle_len` parameter? What we've done here is used a technique called *stochastic gradient descent with restarts (SGDR)*, a variant of *learning rate annealing*, which gradually decreases the learning rate as training progresses. This is helpful because as we get closer to the optimal weights, we want to take smaller steps.
However, we may find ourselves in a part of the weight space that isn't very resilient - that is, small changes to the weights may result in big changes to the loss. We want to encourage our model to find parts of the weight space that are both accurate and stable. Therefore, from time to time we increase the learning rate (this is the 'restarts' in 'SGDR'), which will force the model to jump to a different part of the weight space if the current area is "spikey". Here's a picture of how that might look if we reset the learning rates 3 times (in this paper they call it a "cyclic LR schedule"):
<img src="https://github.com/fastai/fastai/blob/master/courses/dl1/images/sgdr.png?raw=1" width="80%">
(From the paper [Snapshot Ensembles](https://arxiv.org/abs/1704.00109)).
The number of epochs between resetting the learning rate is set by `cycle_len`, and the number of times this happens is referred to as the *number of cycles*, which is what we're actually passing as the 2nd parameter to `fit()`. So here's what our actual learning rates looked like:
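A minimal sketch of this training step, assuming the same fastai 0.7-era learner as above (the learning rate and number of cycles are just the course's values, not fixed requirements):

```
learn.precompute = False          # augmentation only has an effect without precomputed activations
learn.fit(1e-2, 3, cycle_len=1)   # 3 cycles of 1 epoch each (SGDR)
learn.sched.plot_lr()             # the annealed-and-restarted schedule described above
```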
Our validation loss isn't improving much, so there's probably no point further training the last layer on its own.
Since we've got a pretty good model at this point, we might want to save it so we can load it again later without training it from scratch.
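A hedged sketch, assuming the fastai 0.7 `save`/`load` methods (the weight-file name is just an example):

```
learn.save('224_lastlayer')   # writes the weights under the data path's models/ folder
learn.load('224_lastlayer')   # restore them later without retraining
```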
### Fine-tuning and differential learning rate annealing
Now that we have a good final layer trained, we can try fine-tuning the other layers. To tell the learner that we want to unfreeze the remaining layers, just call (surprise surprise!) `unfreeze()`.
Note that the other layers have *already* been trained to recognize imagenet photos (whereas our final layers were randomly initialized), so we want to be careful not to destroy the carefully tuned weights that are already there.
Generally speaking, the earlier layers (as we've seen) have more general-purpose features. Therefore we would expect them to need less fine-tuning for new datasets. For this reason we will use different learning rates for different layers: the first few layers will be at 1e-4, the middle layers at 1e-3, and our FC layers we'll leave at 1e-2 as before. We refer to this as *differential learning rates*, although there's no standard name for this technique in the literature that we're aware of.
Another trick we've used here is adding the `cycle_mult` parameter. Take a look at the following chart, and see if you can figure out what the parameter is doing:
Note that what's being plotted above is the learning rate of the *final layers*. The learning rates of the earlier layers are fixed at the same multiples of the final-layer rates as we initially requested (i.e. the first layers have 100x smaller, and the middle layers 10x smaller, learning rates, since we set `lr=np.array([1e-4,1e-3,1e-2])`).
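Putting the pieces together, a hedged sketch of the fine-tuning step (same assumed fastai 0.7 API):

```
learn.unfreeze()
lr = np.array([1e-4, 1e-3, 1e-2])            # earlier / middle / final layer groups
learn.fit(lr, 3, cycle_len=1, cycle_mult=2)  # cycles of 1, 2, then 4 epochs
```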
There is something else we can do with data augmentation: use it at *inference* time (also known as *test* time). Not surprisingly, this is known as *test time augmentation*, or just *TTA*.
TTA simply makes predictions not just on the images in your validation set, but also makes predictions on a number of randomly augmented versions of them too (by default, it uses the original image along with 4 randomly augmented versions). It then takes the average prediction from these images, and uses that. To use TTA on the validation set, we can use the learner's `TTA()` method.
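A hedged sketch of that call (depending on the library version, `TTA()` may already return averaged predictions, in which case the mean over the first axis is unnecessary):

```
log_preds, y = learn.TTA()                    # per-augmentation log-probabilities on the validation set
probs = np.mean(np.exp(log_preds), 0)         # average over the augmented copies
acc = (np.argmax(probs, axis=1) == y).mean()  # validation accuracy with TTA
```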
I generally see about a 10-20% reduction in error on this dataset when using TTA at this point, which is an amazing result for such a quick and easy technique!
## Analyzing results
### Confusion matrix
A common way to analyze the result of a classification model is to use a [confusion matrix](http://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/). Scikit-learn has a convenient function we can use for this purpose:
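A minimal sketch using scikit-learn directly; `y_true` and `y_pred` are hypothetical names for the validation labels and predicted classes:

```
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_true, y_pred)  # rows = actual class, columns = predicted class
print(cm)
# the fastai course library also ships a plotting helper, e.g. plot_confusion_matrix(cm, data.classes)
```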
We can just print out the confusion matrix, or we can show a graphical view (which is mainly useful for dependent variables with a larger number of categories).
### Looking at pictures again
## Review: easy steps to train a world-class image classifier
1. precompute=True
1. Use `lr_find()` to find highest learning rate where loss is still clearly improving
1. Train last layer from precomputed activations for 1-2 epochs
1. Train last layer with data augmentation (i.e. precompute=False) for 2-3 epochs with cycle_len=1
1. Unfreeze all layers
1. Set earlier layers to 3x-10x lower learning rate than next higher layer
1. Use `lr_find()` again
1. Train full network with cycle_mult=2 until over-fitting
## Understanding the code for our first model
Let's look at the Dogs v Cats code line by line.
**tfms** stands for *transformations*. `tfms_from_model` takes care of resizing, image cropping, initial normalization (creating data with (mean,stdev) of (0,1)), and more.
We need a **path** that points to the dataset. In this path we will also store temporary data and final results. `ImageClassifierData.from_paths` reads data from a provided path and creates a dataset ready for training.
`ConvLearner.pretrained` builds a *learner* that contains a pre-trained model. The last layer of the model needs to be replaced with a layer of the right dimensions. The pre-trained model was trained for 1000 classes, therefore the final layer predicts a vector of 1000 probabilities. The model for cats and dogs needs to output a two dimensional vector. The diagram below shows an example of how this was done in one of the earliest successful CNNs. The layer "FC8" here would get replaced with a new layer with 2 outputs.
<img src="https://github.com/fastai/fastai/blob/master/courses/dl1/images/pretrained.png?raw=1" width="500">
[original image](https://image.slidesharecdn.com/practicaldeeplearning-160329181459/95/practical-deep-learning-16-638.jpg)
*Parameters* are learned by fitting a model to the data. *Hyperparameters* are another kind of parameter, that cannot be directly learned from the regular training process. These parameters express “higher-level” properties of the model such as its complexity or how fast it should learn. Two examples of hyperparameters are the *learning rate* and the *number of epochs*.
During iterative training of a neural network, a *batch* or *mini-batch* is a subset of training samples used in one iteration of Stochastic Gradient Descent (SGD). An *epoch* is a single pass through the entire training set which consists of multiple iterations of SGD.
We can now *fit* the model; that is, use *gradient descent* to find the best parameters for the fully connected layer we added, that can separate cat pictures from dog pictures. We need to pass two hyperparameters: the *learning rate* (generally 1e-2 or 1e-3 is a good starting point, we'll look more at this next) and the *number of epochs* (you can pass in a higher number and just stop training when you see it's no longer improving, then re-run it with the number of epochs you found works well.)
## Analyzing results: loss and accuracy
When we run `learn.fit` we print 3 performance values (see above). Here 0.03 is the value of the **loss** on the training set, 0.0226 is the loss on the validation set and 0.9927 is the validation accuracy. What is the loss? What is accuracy? Why not just show accuracy?
**Accuracy** is the ratio of correct predictions to the total number of predictions.
In machine learning the **loss** (or cost) function represents the price paid for inaccuracy of predictions.
The loss associated with one example in binary classification is given by:
`-(y * log(p) + (1-y) * log (1-p))`
where `y` is the true label of `x` and `p` is the probability predicted by our model that the label is 1.
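A minimal numpy sketch of this loss (the `binary_loss` referred to in the exercises below), with toy values chosen to match the accuracy/loss figures quoted in the text:

```
import numpy as np

def binary_loss(y, p):
    # average binary cross-entropy over a batch of predictions
    return np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p)))

acts  = np.array([1, 0, 0, 1])          # true labels (assumed toy values)
preds = np.array([0.9, 0.1, 0.2, 0.8])  # predicted probabilities of the label being 1
binary_loss(acts, preds)                # ~0.16, while the thresholded accuracy is 100%
```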
Note that in our toy example above our accuracy is 100% and our loss is 0.16. Compare that to a loss of 0.03 that we are getting while predicting cats and dogs. Exercise: play with `preds` to get a lower loss for this example.
**Example:** Here is how to compute the loss for a single example in a binary classification problem. Suppose an image x has label 1 and our model gives it a prediction of 0.9. In this case the loss should be small, because our model is predicting label $1$ with high probability.
`loss = -log(0.9) = 0.10`
Now suppose x has label 0 but our model is predicting 0.9. In this case our loss should be much larger.
`loss = -log(1-0.9) = 2.30`
- Exercise: look at the other cases and convince yourself that this makes sense.
- Exercise: how would you rewrite `binary_loss` using `if` instead of `*` and `+`?
Why not just maximize accuracy directly? Accuracy is a step function of the model's parameters, so it provides no useful gradient signal; the binary classification loss is smooth and therefore a much easier function to optimize.
Kaushal Jani
**Practical 7**
```
import sklearn
from sklearn import datasets, metrics
from sklearn.naive_bayes import GaussianNB
import numpy as np

X, Y = datasets.load_iris(return_X_y=True)
xtrain = X[range(0, 150, 2), :]   # even-indexed samples for training
ytrain = Y[range(0, 150, 2)]
xtest = X[range(1, 150, 2), :]    # odd-indexed samples for testing
ytest = Y[range(1, 150, 2)]

gclf = GaussianNB()
gclf.fit(xtrain, ytrain)
print("examples of each class ", gclf.class_count_)
print("mean of each feature given the class\n ", gclf.theta_)
print("variance of each feature given the class\n", gclf.sigma_)  # sigma_ holds per-class feature variances (named var_ in newer scikit-learn)
pred = gclf.predict(xtest)
print("\naccuracy ", metrics.accuracy_score(ytest, pred))

import math

def gauss(mean, std, val):
    # Gaussian probability density of `val` for the given mean and standard deviation
    k = 1 / (std * math.sqrt(2 * math.pi))
    f = math.exp(-0.5 * ((val - mean) / std) ** 2)
    return k * f

def custom_fit(xtrain, ytrain, unique_v):
    mean = np.zeros((len(unique_v), xtrain.shape[1]))
    std = np.zeros((len(unique_v), xtrain.shape[1]))
    prior = []
    for i in unique_v:  # for each class, mask the samples of that class, then estimate per-feature mean and std
        mask = ytrain[:] == i
        prior.append(mask.sum() / ytrain.shape[0])  # class prior = fraction of training samples in class i
        for j in range(0, xtrain.shape[1]):
            mean[i][j] = np.mean(xtrain[mask, j])
            std[i][j] = np.std(xtrain[mask, j])
    return prior, mean, std

prior, mean, std = custom_fit(xtrain, ytrain, [0, 1, 2])
print("prior probabilities \n", prior)
print("mean of each feature given the class\n", mean)
print("standard deviation of each feature given the class\n", std)

def custom_predict(xtest):
    prior, mean, std = custom_fit(xtrain, ytrain, [0, 1, 2])
    pre = []
    for i in xtest:  # each row of the test set
        post = []
        for k in range(0, len(prior)):  # for every class, multiply the likelihoods of all features (3 x 4 matrix of means/stds)
            x = 1
            for j in range(0, len(i)):
                x = x * gauss(mean[k][j], std[k][j], i[j])
            post.append(x)  # post[k] is the likelihood of this row under class k
        result = []
        for c in range(0, len(prior)):
            result.append(prior[c] * post[c])  # posterior (up to a constant) = prior * likelihood
        pre.append(result.index(max(result)))  # predicted class = index of the largest posterior
    return pre

pre = custom_predict(xtest)
print("predictions are \n", pre)
print("accuracy ", metrics.accuracy_score(ytest, pre))
```
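As a quick sanity check (a small sketch reusing the variables defined above), the hand-rolled classifier can be compared directly against scikit-learn's predictions:

```
# `pred` comes from sklearn's GaussianNB above, `pre` from custom_predict
agreement = np.mean(np.array(pre) == pred)
print("fraction of test points where the custom classifier agrees with sklearn:", agreement)
```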
Multinomial Classifier
```
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.naive_bayes import MultinomialNB,BernoulliNB
"""Here 24 datastring in data 0to 19 for trianing dataset in which 0 to 9 for cricket 10 to 19 for footbal and 20 to 23 for testing """
data=["Cricket is a bat-and-ball game played between two teams of eleven players on a field at the centre of which is a 22-yard pitch with a wicket at each end, each comprising two bails balanced on three stumps.",
"Cricket was invented in the vast fields of England, supposedly by shepherds who herded their flock. Later on this game was shown benevolence by aristocrats, and now has the stature of being England's national game. After a century now, cricket stands in the international arena, with a place of its own.",
"Bowlers can come in many different varieties and this video explains all of the different types of bowling style. Swing bowlers, seamers and spinners tactics and techniques are all shown off, as well as how they get their crucial wickets and dot balls.",
"Cricket is a game played with a bat and ball on a large field, known as a ground, between two teams of 11 players each.",
"The batting team must score as many ‘runs’ as possible, by hitting the ball and running to the other end of the pitch. If the batsman can reach the other end of the pitch successfully, he scores 1 ‘run’. If he can reach the other end of the pitch and return, he scores 2 runs etc.If he hits the ball to the edge of the field, he scores 4 runs. If he can hit the ball to the edge of the field without bouncing, he scores 6 runs.",
"The earliest reference to cricket is in South East England in the mid-16th century. It spread globally with the expansion of the British Empire, with the first international matches in the second half of the 19th century. The game's governing body is the International Cricket Council",
"There are two batsman up at a time, and the batsman being bowled to (the striker) tries to hit the ball away from the wicket. ",
"A hit may be defensive or offensive. A defensive hit may protect the wicket but leave the batsmen no time to run to the opposite wicket. In that case the batsmen need not run, and play will resume with another bowl.",
"A ball hit to or beyond the boundary scores four points if it hits the ground and then reaches the boundary, six points if it reaches the boundary from the air (a fly ball).",
"The earliest reference to an 11-a-side match, played in Sussex for a stake of 50 guineas, dates from 1697. In 1709 Kent met Surrey in the first recorded intercounty match at Dartford, and it is probable that about this time a code of laws (rules) existed for the conduct of the game, although the earliest known version of such rules is dated 1744. ",
"Football is a family of team sports that involve, to varying degrees, kicking a ball to score a goal. Unqualified, the word football normally means the form of football that is the most popular where the word is used.",
"Football, also called association football or soccer, game in which two teams of 11 players, using any part of their bodies except their hands and arms, try to maneuver the ball into the opposing team’s goal. Only the goalkeeper is permitted to handle the ball and may do so only within the penalty area surrounding the goal. ",
"Football is the world’s most popular ball game in numbers of participants and spectators. Simple in its principal rules and essential equipment, the sport can be played almost anywhere, from official football playing fields (pitches) to gymnasiums, streets, school playgrounds, parks, or beaches.",
"Modern football originated in Britain in the 19th century. Since before medieval times, “folk football” games had been played in towns and villages according to local customs and with a minimum of rules. Industrialization and urbanization, which reduced the amount of leisure time and space available to the working class, combined with a history of legal prohibitions against particularly violent and destructive forms of folk football to undermine the game’s status from the early 19th century onward. ",
"In 1863 a series of meetings involving clubs from metropolitan London and surrounding counties produced the printed rules of football, which prohibited the carrying of the ball. Thus, the “handling” game of rugby remained outside the newly formed Football Association (FA). Indeed, by 1870 all handling of the ball except by the goalkeeper was prohibited by the FA.",
"The game of US football evolved in the 19th century as a combination of rugby and soccer. The first intercollegiate match was played in 1869 between Princeton University and Rutgers College. In 1873, the first collegiate rules were standardized and the Ivy League was formed. Collegiate football grew into one of the most popular American sports. Professional football began in the 1890s, but did not become a major sport until after World War II. The National Football League (NFL) was formed (from an earlier association) in 1922; in 1966 it subsumed the rival American Football League (created in 1959). The NFL is now divided into an American and a National conference; the conference winners compete for the Super Bowl championship. A Football Hall of Fame is located in Canton, Ohio.",
"A match consists of two 45 minute periods known as the first and second half. In some instances, extra time can be played, two 15 minute periods. If side remain level after extra time a replay or penalty kicks can occur. Each team will take 5 penalties each, if scores remain level after both teams take their allotted 5 penalties then sudden death occurs. The first team to miss their penalty will lose if the opponents have scored their sudden death penalty kick. Extra time and penalties tend to only occur in tournaments and cup competitions. League games will result in a draw if both teams score the same amount of goals during the 90 minutes.",
"Each player will wear football boots, studs, blades or moulded this is depended on the pitch conditions.Shin pads must be worn by all players, to protect their shins. Some players ware ankle pads, however these are not compulsory.Goal keepers are allowed to wear gloves.",
"Yellow card – a yellow card is awarded by the referee for a breach of the rules that he does not consider serious or dangerous. A player who receives a yellow card is known as getting “booked” as the referee records the players name in his book.",
"Substitution – 11 players are allowed on the pitch unless a player has been sent off or a team has made the maximum of 3 substitutions. 3 substitutions are allowed out of 5 out field players and a sub goalkeeper. You are not allowed to sub a player who is in the course of being sent off or already dismissed.",
"Umpires have a key role in the game as they monitor the proceedings. They decide whether the batsman is out, decide on no-ball, wide, and ensure both teams are playing according to the rules.",
"If the scores are level after 90 minutes then the game will end as a draw apart from in cup games where the game can go to extra time and even a penalty shootout to decide the winner.",
"In some instances, on-field umpires find it tough to give few decisions like boundaries, out, no-ball, etc. Therefore, they seek help of another umpire, called third-umpire.",
"The game of football takes its form. The most admitted story tells that the game was developed in England in the 12th century. "]
testdata=[]
count=0
for i in data:
count=count+len(i)
print("total letters in data",count)
data_vector=CountVectorizer()
vdata=data_vector.fit_transform(data) # converts the data into vector representation
traindata=vdata[:-4] # first 20 for training
traindata=traindata.toarray()
print("no of feature(key words) of given dataset",len(data_vector.get_feature_names()))
testdata=vdata[-4:].toarray() # last 4 for testing
print(traindata)
answer=["Cricket","Cricket","Cricket","Cricket","Cricket","Cricket","Cricket","Cricket","Cricket","Cricket","Football","Football","Football","Football","Football","Football","Football","Football","Football","Football"]
testans=["Cricket","Football","Cricket","Football"]
clf=MultinomialNB()
clf.fit(traindata,answer)
print(clf.predict(testdata))
print("accuracy ",metrics.accuracy_score(testans,clf.predict(testdata)))
```
Multivariate Bernoulli Classifier
```
data_vector=CountVectorizer(binary=True)
vdata=data_vector.fit_transform(data)
traindata=vdata[:-4] # first 20 for training
traindata=traindata.toarray()
print("Number of feature(key words ) of given dataset are ",len(data_vector.get_feature_names()))
testdata=vdata[-4:].toarray() # last 4 for testing
print(traindata)
answer=["Cricket","Cricket","Cricket","Cricket","Cricket","Cricket","Cricket","Cricket","Cricket","Cricket","Football","Football","Football","Football","Football","Football","Football","Football","Football","Football"]
testans=["Cricket","Football","Cricket","Football"]
clf=BernoulliNB()
clf.fit(traindata,answer)
print(clf.predict(testdata))
print("\naccuracy ",metrics.accuracy_score(testans,clf.predict(testdata)))
```
<!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png">
*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).*
*The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!*
# In Depth: Principal Component Analysis
Up until now, we have been looking in depth at supervised learning estimators: those estimators that predict labels based on labeled training data.
Here we begin looking at several unsupervised estimators, which can highlight interesting aspects of the data without reference to any known labels.
In this section, we explore what is perhaps one of the most broadly used of unsupervised algorithms, principal component analysis (PCA).
PCA is fundamentally a dimensionality reduction algorithm, but it can also be useful as a tool for visualization, for noise filtering, for feature extraction and engineering, and much more.
After a brief conceptual discussion of the PCA algorithm, we will see a couple examples of these further applications.
We begin with the standard imports:
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
```
## Introducing Principal Component Analysis
Principal component analysis is a fast and flexible unsupervised method for dimensionality reduction in data, which we saw briefly in [Introducing Scikit-Learn](05.02-Introducing-Scikit-Learn.ipynb).
Its behavior is easiest to visualize by looking at a two-dimensional dataset.
Consider the following 200 points:
```
rng = np.random.RandomState(1)
X = np.dot(rng.rand(2, 2), rng.randn(2, 200)).T
plt.scatter(X[:, 0], X[:, 1])
plt.axis('equal');
```
By eye, it is clear that there is a nearly linear relationship between the x and y variables.
This is reminiscent of the linear regression data we explored in [In Depth: Linear Regression](05.06-Linear-Regression.ipynb), but the problem setting here is slightly different: rather than attempting to *predict* the y values from the x values, the unsupervised learning problem attempts to learn about the *relationship* between the x and y values.
In principal component analysis, this relationship is quantified by finding a list of the *principal axes* in the data, and using those axes to describe the dataset.
Using Scikit-Learn's ``PCA`` estimator, we can compute this as follows:
```
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X)
```
The fit learns some quantities from the data, most importantly the "components" and "explained variance":
```
print(pca.components_)
print(pca.explained_variance_)
```
To see what these numbers mean, let's visualize them as vectors over the input data, using the "components" to define the direction of the vector, and the "explained variance" to define the squared-length of the vector:
```
def draw_vector(v0, v1, ax=None):
ax = ax or plt.gca()
arrowprops=dict(arrowstyle='->',
linewidth=2,
shrinkA=0, shrinkB=0)
ax.annotate('', v1, v0, arrowprops=arrowprops)
# plot data
plt.scatter(X[:, 0], X[:, 1], alpha=0.2)
for length, vector in zip(pca.explained_variance_, pca.components_):
v = vector * 3 * np.sqrt(length)
draw_vector(pca.mean_, pca.mean_ + v)
plt.axis('equal');
```
These vectors represent the *principal axes* of the data, and the length of the vector is an indication of how "important" that axis is in describing the distribution of the data—more precisely, it is a measure of the variance of the data when projected onto that axis.
The projections of each data point onto the principal axes are the "principal components" of the data.
If we plot these principal components beside the original data, we see the plots shown here:

[figure source in Appendix](06.00-Figure-Code.ipynb#Principal-Components-Rotation)
This transformation from data axes to principal axes is an *affine transformation*, which basically means it is composed of a translation, rotation, and uniform scaling.
While this algorithm to find principal components may seem like just a mathematical curiosity, it turns out to have very far-reaching applications in the world of machine learning and data exploration.
### PCA as dimensionality reduction
Using PCA for dimensionality reduction involves zeroing out one or more of the smallest principal components, resulting in a lower-dimensional projection of the data that preserves the maximal data variance.
Here is an example of using PCA as a dimensionality reduction transform:
```
pca = PCA(n_components=1)
pca.fit(X)
X_pca = pca.transform(X)
print("original shape: ", X.shape)
print("transformed shape:", X_pca.shape)
```
The transformed data has been reduced to a single dimension.
To understand the effect of this dimensionality reduction, we can perform the inverse transform of this reduced data and plot it along with the original data:
```
X_new = pca.inverse_transform(X_pca)
plt.scatter(X[:, 0], X[:, 1], alpha=0.2)
plt.scatter(X_new[:, 0], X_new[:, 1], alpha=0.8)
plt.axis('equal');
```
The light points are the original data, while the dark points are the projected version.
This makes clear what a PCA dimensionality reduction means: the information along the least important principal axis or axes is removed, leaving only the component(s) of the data with the highest variance.
The fraction of variance that is cut out (proportional to the spread of points about the line formed in this figure) is roughly a measure of how much "information" is discarded in this reduction of dimensionality.
This reduced-dimension dataset is in some senses "good enough" to encode the most important relationships between the points: despite reducing the dimension of the data by 50%, the overall relationship between the data points is mostly preserved.
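For the one-component projection fitted above, the retained fraction of variance can be read directly from the estimator (a one-line sketch):

```
print(pca.explained_variance_ratio_)  # fraction of the total variance kept by the single component
```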
### PCA for visualization: Hand-written digits
The usefulness of the dimensionality reduction may not be entirely apparent in only two dimensions, but becomes much more clear when looking at high-dimensional data.
To see this, let's take a quick look at the application of PCA to the digits data we saw in [In-Depth: Decision Trees and Random Forests](05.08-Random-Forests.ipynb).
We start by loading the data:
```
from sklearn.datasets import load_digits
digits = load_digits()
digits.data.shape
```
Recall that the data consists of 8×8 pixel images, meaning that they are 64-dimensional.
To gain some intuition into the relationships between these points, we can use PCA to project them to a more manageable number of dimensions, say two:
```
pca = PCA(2) # project from 64 to 2 dimensions
projected = pca.fit_transform(digits.data)
print(digits.data.shape)
print(projected.shape)
```
We can now plot the first two principal components of each point to learn about the data:
```
plt.scatter(projected[:, 0], projected[:, 1],
            c=digits.target, edgecolor='none', alpha=0.5,
            cmap=plt.cm.get_cmap('Spectral', 10))  # 'spectral' was removed from newer matplotlib; 'Spectral' works
plt.xlabel('component 1')
plt.ylabel('component 2')
plt.colorbar();
```
Recall what these components mean: the full data is a 64-dimensional point cloud, and these points are the projection of each data point along the directions with the largest variance.
Essentially, we have found the optimal stretch and rotation in 64-dimensional space that allows us to see the layout of the digits in two dimensions, and have done this in an unsupervised manner—that is, without reference to the labels.
### What do the components mean?
We can go a bit further here, and begin to ask what the reduced dimensions *mean*.
This meaning can be understood in terms of combinations of basis vectors.
For example, each image in the training set is defined by a collection of 64 pixel values, which we will call the vector $x$:
$$
x = [x_1, x_2, x_3 \cdots x_{64}]
$$
One way we can think about this is in terms of a pixel basis.
That is, to construct the image, we multiply each element of the vector by the pixel it describes, and then add the results together to build the image:
$$
{\rm image}(x) = x_1 \cdot{\rm (pixel~1)} + x_2 \cdot{\rm (pixel~2)} + x_3 \cdot{\rm (pixel~3)} \cdots x_{64} \cdot{\rm (pixel~64)}
$$
One way we might imagine reducing the dimension of this data is to zero out all but a few of these basis vectors.
For example, if we use only the first eight pixels, we get an eight-dimensional projection of the data, but it is not very reflective of the whole image: we've thrown out nearly 90% of the pixels!

[figure source in Appendix](06.00-Figure-Code.ipynb#Digits-Pixel-Components)
The upper row of panels shows the individual pixels, and the lower row shows the cumulative contribution of these pixels to the construction of the image.
Using only eight of the pixel-basis components, we can only construct a small portion of the 64-pixel image.
Were we to continue this sequence and use all 64 pixels, we would recover the original image.
But the pixel-wise representation is not the only choice of basis. We can also use other basis functions, which each contain some pre-defined contribution from each pixel, and write something like
$$
image(x) = {\rm mean} + x_1 \cdot{\rm (basis~1)} + x_2 \cdot{\rm (basis~2)} + x_3 \cdot{\rm (basis~3)} \cdots
$$
PCA can be thought of as a process of choosing optimal basis functions, such that adding together just the first few of them is enough to suitably reconstruct the bulk of the elements in the dataset.
The principal components, which act as the low-dimensional representation of our data, are simply the coefficients that multiply each of the elements in this series.
This figure shows a similar depiction of reconstructing this digit using the mean plus the first eight PCA basis functions:

[figure source in Appendix](06.00-Figure-Code.ipynb#Digits-PCA-Components)
Unlike the pixel basis, the PCA basis allows us to recover the salient features of the input image with just a mean plus eight components!
The amount of each pixel in each component is the corollary of the orientation of the vector in our two-dimensional example.
This is the sense in which PCA provides a low-dimensional representation of the data: it discovers a set of basis functions that are more efficient than the native pixel-basis of the input data.
### Choosing the number of components
A vital part of using PCA in practice is the ability to estimate how many components are needed to describe the data.
This can be determined by looking at the cumulative *explained variance ratio* as a function of the number of components:
```
pca = PCA().fit(digits.data)
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
```
This curve quantifies how much of the total, 64-dimensional variance is contained within the first $N$ components.
For example, we see that with the digits the first 10 components contain approximately 75% of the variance, while you need around 50 components to describe close to 100% of the variance.
Here we see that our two-dimensional projection loses a lot of information (as measured by the explained variance) and that we'd need about 20 components to retain 90% of the variance. Looking at this plot for a high-dimensional dataset can help you understand the level of redundancy present in multiple observations.
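Rather than reading the curve by eye, scikit-learn can choose the number of components for a target variance directly; a quick sketch:

```
pca90 = PCA(0.90).fit(digits.data)  # keep enough components to explain 90% of the variance
print(pca90.n_components_)          # roughly 20 for the digits data, matching the curve above
```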
## PCA as Noise Filtering
PCA can also be used as a filtering approach for noisy data.
The idea is this: any components with variance much larger than the effect of the noise should be relatively unaffected by the noise.
So if you reconstruct the data using just the largest subset of principal components, you should be preferentially keeping the signal and throwing out the noise.
Let's see how this looks with the digits data.
First we will plot several of the input noise-free data:
```
def plot_digits(data):
fig, axes = plt.subplots(4, 10, figsize=(10, 4),
subplot_kw={'xticks':[], 'yticks':[]},
gridspec_kw=dict(hspace=0.1, wspace=0.1))
for i, ax in enumerate(axes.flat):
ax.imshow(data[i].reshape(8, 8),
cmap='binary', interpolation='nearest',
clim=(0, 16))
plot_digits(digits.data)
```
Now let's add some random noise to create a noisy dataset, and re-plot it:
```
np.random.seed(42)
noisy = np.random.normal(digits.data, 4)
plot_digits(noisy)
```
It's clear by eye that the images are noisy, and contain spurious pixels.
Let's train a PCA on the noisy data, requesting that the projection preserve 50% of the variance:
```
pca = PCA(0.50).fit(noisy)
pca.n_components_
```
Here 50% of the variance amounts to 12 principal components.
Now we compute these components, and then use the inverse of the transform to reconstruct the filtered digits:
```
components = pca.transform(noisy)
filtered = pca.inverse_transform(components)
plot_digits(filtered)
```
This signal preserving/noise filtering property makes PCA a very useful feature selection routine—for example, rather than training a classifier on very high-dimensional data, you might instead train the classifier on the lower-dimensional representation, which will automatically serve to filter out random noise in the inputs.
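As a hedged illustration of that idea (the particular classifier here is an arbitrary choice, not something from the text), PCA can be chained with a classifier in a pipeline and trained on the noisy digits:

```
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# train on the 12-dimensional PCA representation instead of all 64 noisy pixels
model = make_pipeline(PCA(n_components=12), LogisticRegression(max_iter=1000))
print(cross_val_score(model, noisy, digits.target, cv=5).mean())
```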
## Example: Eigenfaces
Earlier we explored an example of using a PCA projection as a feature selector for facial recognition with a support vector machine (see [In-Depth: Support Vector Machines](05.07-Support-Vector-Machines.ipynb)).
Here we will take a look back and explore a bit more of what went into that.
Recall that we were using the Labeled Faces in the Wild dataset made available through Scikit-Learn:
```
from sklearn.datasets import fetch_lfw_people
faces = fetch_lfw_people(min_faces_per_person=60)
print(faces.target_names)
print(faces.images.shape)
```
Let's take a look at the principal axes that span this dataset.
Because this is a large dataset, we will use ``RandomizedPCA``—it contains a randomized method to approximate the first $N$ principal components much more quickly than the standard ``PCA`` estimator, and thus is very useful for high-dimensional data (here, a dimensionality of nearly 3,000).
We will take a look at the first 150 components:
```
from sklearn.decomposition import RandomizedPCA
pca = RandomizedPCA(150)
pca.fit(faces.data)
```
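In more recent releases of scikit-learn, `RandomizedPCA` has been removed and its functionality folded into `PCA`; a roughly equivalent call (assuming a recent version) would be:

```
from sklearn.decomposition import PCA
pca = PCA(n_components=150, svd_solver='randomized', random_state=42)
pca.fit(faces.data)
```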
In this case, it can be interesting to visualize the images associated with the first several principal components (these components are technically known as "eigenvectors,"
so these types of images are often called "eigenfaces").
As you can see in this figure, they are as creepy as they sound:
```
fig, axes = plt.subplots(3, 8, figsize=(9, 4),
subplot_kw={'xticks':[], 'yticks':[]},
gridspec_kw=dict(hspace=0.1, wspace=0.1))
for i, ax in enumerate(axes.flat):
ax.imshow(pca.components_[i].reshape(62, 47), cmap='bone')
```
The results are very interesting, and give us insight into how the images vary: for example, the first few eigenfaces (from the top left) seem to be associated with the angle of lighting on the face, and later principal vectors seem to be picking out certain features, such as eyes, noses, and lips.
Let's take a look at the cumulative variance of these components to see how much of the data information the projection is preserving:
```
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
```
We see that these 150 components account for just over 90% of the variance.
That would lead us to believe that using these 150 components, we would recover most of the essential characteristics of the data.
To make this more concrete, we can compare the input images with the images reconstructed from these 150 components:
```
# Compute the components and projected faces
pca = RandomizedPCA(150).fit(faces.data)
components = pca.transform(faces.data)
projected = pca.inverse_transform(components)
# Plot the results
fig, ax = plt.subplots(2, 10, figsize=(10, 2.5),
subplot_kw={'xticks':[], 'yticks':[]},
gridspec_kw=dict(hspace=0.1, wspace=0.1))
for i in range(10):
ax[0, i].imshow(faces.data[i].reshape(62, 47), cmap='binary_r')
ax[1, i].imshow(projected[i].reshape(62, 47), cmap='binary_r')
ax[0, 0].set_ylabel('full-dim\ninput')
ax[1, 0].set_ylabel('150-dim\nreconstruction');
```
The top row here shows the input images, while the bottom row shows the reconstruction of the images from just 150 of the ~3,000 initial features.
This visualization makes clear why the PCA feature selection used in [In-Depth: Support Vector Machines](05.07-Support-Vector-Machines.ipynb) was so successful: although it reduces the dimensionality of the data by nearly a factor of 20, the projected images contain enough information that we might, by eye, recognize the individuals in the image.
What this means is that our classification algorithm needs to be trained on 150-dimensional data rather than 3,000-dimensional data, which depending on the particular algorithm we choose, can lead to a much more efficient classification.
## Principal Component Analysis Summary
In this section we have discussed the use of principal component analysis for dimensionality reduction, for visualization of high-dimensional data, for noise filtering, and for feature selection within high-dimensional data.
Because of the versatility and interpretability of PCA, it has been shown to be effective in a wide variety of contexts and disciplines.
Given any high-dimensional dataset, I tend to start with PCA in order to visualize the relationship between points (as we did with the digits), to understand the main variance in the data (as we did with the eigenfaces), and to understand the intrinsic dimensionality (by plotting the explained variance ratio).
Certainly PCA is not useful for every high-dimensional dataset, but it offers a straightforward and efficient path to gaining insight into high-dimensional data.
PCA's main weakness is that it tends to be highly affected by outliers in the data.
For this reason, many robust variants of PCA have been developed, many of which act to iteratively discard data points that are poorly described by the initial components.
Scikit-Learn contains a couple interesting variants on PCA, including ``RandomizedPCA`` and ``SparsePCA``, both also in the ``sklearn.decomposition`` submodule.
``RandomizedPCA``, which we saw earlier, uses a non-deterministic method to quickly approximate the first few principal components in very high-dimensional data, while ``SparsePCA`` introduces a regularization term (see [In Depth: Linear Regression](05.06-Linear-Regression.ipynb)) that serves to enforce sparsity of the components.
In the following sections, we will look at other unsupervised learning methods that build on some of the ideas of PCA.
# Document classifier
## Data
- First we need data to train our model
```
from textblob.classifiers import NaiveBayesClassifier
train = [
('I love this sandwich.', 'pos'),
('This is an amazing place!', 'pos'),
('I feel very good about these beers.', 'pos'),
('This is my best work.', 'pos'),
("What an awesome view", 'pos'),
('I do not like this restaurant', 'neg'),
('I am tired of this stuff.', 'neg'),
("I can't deal with this", 'neg'),
('He is my sworn enemy!', 'neg'),
('My boss is horrible.', 'neg')
]
test = [
('The beer was good.', 'pos'),
('I do not enjoy my job', 'neg'),
("I ain't feeling dandy today.", 'neg'),
("I feel amazing!", 'pos'),
('Gary is a friend of mine.', 'pos'),
("I can't believe I'm doing this.", 'neg')
]
```
## Training
```
cl = NaiveBayesClassifier(train)
```
## Test
- How well does our model perform on data it has never seen?
```
cl.accuracy(test)
```
- 80% correct, good enough for me :)
## Features
- Which words contribute most to a text being classified as positive or negative?
```
cl.show_informative_features(5)
```
The classifier thinks that if "this" appears the text is more likely positive, which is of course nonsense, but that is simply what it learned from these examples - which is why you need good training data.
## Classification
```
cl.classify("Their burgers are amazing") # "pos"
cl.classify("I don't like their pizza.") # "neg"
cl.classify("I hate sunshine.")
cl.classify("It's good to be here.")
cl.classify("I like Zurich.")
cl.classify("I love Paris.")
```
### Classification sentence by sentence
```
from textblob import TextBlob
blob = TextBlob("The beer was amazing. "
"But the hangover was horrible. My boss was not happy.",
classifier=cl)
for sentence in blob.sentences:
print(("%s (%s)") % (sentence,sentence.classify()))
```
## Classifying comments with Swiss song lyrics
```
import os,glob
from nltk.tokenize import sent_tokenize
from nltk.tokenize import word_tokenize
from io import open
train = []
countries = ["schweiz", "deutschland"]
for country in countries:
out = []
folder_path = 'songtexte/%s' % country
for filename in glob.glob(os.path.join(folder_path, '*.txt')):
with open(filename, 'r') as f:
text = f.read()
words = word_tokenize(text)
words=[word.lower() for word in words if word.isalpha()]
for word in words:
out.append(word)
out = set(out)
for word in out:
train.append((word,country))
#print (filename)
#print (len(text))
train
from textblob.classifiers import NaiveBayesClassifier
c2 = NaiveBayesClassifier(train)
c2.classify("Ich gehe durch den Wald") # "deutsch"
c2.classify("Häsch es guet") # "deutsch"
c2.classify("Schtärneföifi")
c2.show_informative_features(5)
c2.accuracy(test) # note: test still holds the English pos/neg examples from above, so this score is not meaningful here
```
## Hardcore example with movie-review data using NLTK
- https://www.nltk.org/book/ch06.html
- We use only the most frequent words in the texts (the 2,000 most common, as in the code below) and check whether they occur in positive or negative reviews
```
import random
import nltk
review = (" ").join(train[0][0])
print(review)
from nltk.corpus import movie_reviews
documents = [(list(movie_reviews.words(fileid)), category)
for category in movie_reviews.categories()
for fileid in movie_reviews.fileids(category)]
random.shuffle(documents)
(" ").join(documents[0][0])
all_words = nltk.FreqDist(w.lower() for w in movie_reviews.words())
word_features = list(all_words)[:2000]
def document_features(document):
document_words = set(document)
features = {}
for word in word_features:
features['contains({})'.format(word)] = (word in document_words)
return features
print(document_features(movie_reviews.words('pos/cv957_8737.txt')))
featuresets = [(document_features(d), c) for (d,c) in documents]
train_set, test_set = featuresets[100:], featuresets[:100]
classifier = nltk.NaiveBayesClassifier.train(train_set)
classifier.classify(document_features("a movie with bad actors".split(" ")))
classifier.classify(document_features("an uplifting movie with russel crowe".split(" ")))
classifier.show_most_informative_features(10)
```
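The cell above builds a `test_set` but never scores against it; a quick check of the held-out accuracy could look like this:
```
# Evaluate the NLTK classifier on the 100 held-out documents
print(nltk.classify.accuracy(classifier, test_set))
```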
# Tree Methods Consulting Project
You've been hired by a dog food company to try to predict why some batches of their dog food are spoiling much quicker than intended! Unfortunately this Dog Food company hasn't upgraded to the latest machinery, meaning that the amounts of the five preservative chemicals they are using can vary a lot, but which is the chemical that has the strongest effect? The dog food company first mixes up a batch of preservative that contains 4 different preservative chemicals (A, B, C, D), which is then completed with a "filler" chemical. The food scientists believe one of the A, B, C, or D preservatives is causing the problem, but need your help to figure out which one!
Use Machine Learning with RF to find out which parameter has the most predictive power, thus finding out which chemical causes the early spoiling! So create a model and then find out how you can decide which chemical is the problem!
* Pres_A : Percentage of preservative A in the mix
* Pres_B : Percentage of preservative B in the mix
* Pres_C : Percentage of preservative C in the mix
* Pres_D : Percentage of preservative D in the mix
* Spoiled: Label indicating whether or not the dog food batch was spoiled.
___
**Think carefully about what this problem is really asking you to solve. While we will use Machine Learning to solve this, it won't be with your typical train/test split workflow. If this confuses you, skip ahead to the solution code along walk-through!**
____
# Start
First thing is starting a new spark session. Let's call it dogfood:
```
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('dogfood').getOrCreate()
```
Next is reading the data, which is in a csv file:
```
df = spark.read.csv('input data/dog_food.csv', header=True, inferSchema=True)
df.printSchema()
```
One can look at some of the values of these 4 columns and label in each row:
```
for s in df.head(3):
print(s)
print('-------')
print('\n')
```
Let's print the columns again to use them in the VectorAssembler:
```
df.columns
```
The model that will be used requires our data to be in a given format so let's use the VectorAssembler in order to do that:
```
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.linalg import Vectors
assembler = VectorAssembler(inputCols=['A', 'B', 'C', 'D'],
outputCol='features')
output = assembler.transform(df)
output.printSchema()
```
Now that there is a features vector and a Spoiled label the model can be created.
A RandomForestClassifier will be used:
```
from pyspark.ml.classification import (RandomForestClassifier,
DecisionTreeClassifier)
rfc = RandomForestClassifier(labelCol='Spoiled', featuresCol='features')
```
After selecting the features vector and the label column the model can be trained using the fit method:
```
data = output.select('features', 'Spoiled')
rfc_model = rfc.fit(data)
```
As was asked, the chemical with the most influence can be obtained using featureImportances, which returns a SparseVector:
```
rfc_model.featureImportances
```
Looking at the cell above, it is easy to conclude that feature index 2 (which corresponds to chemical C) is by far the most important feature.
Nevertheless, a plot of these feature importances is shown below:
```
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
sns.barplot(x=rfc_model.featureImportances.indices,
y=rfc_model.featureImportances.values,
palette='viridis')
plt.tight_layout()
```
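To make the link between the SparseVector indices and the chemical names explicit, the importances can be paired with the column names passed to the VectorAssembler; a minimal sketch, assuming the same A-D ordering used above:
```
# Pair each preservative with its importance score (index 0 -> A, 1 -> B, 2 -> C, 3 -> D)
feature_names = ['A', 'B', 'C', 'D']
importances = dict(zip(rfc_model.featureImportances.indices.tolist(),
                       rfc_model.featureImportances.values.tolist()))
for i, name in enumerate(feature_names):
    print(name, round(importances.get(i, 0.0), 4))
```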
Machine learning models can be used in multiple ways, and this was just an alternative approach, as the goal was to check which feature drives whether or not the dog food is spoiled.
Thank you!
## MNIST Training, Compilation and Deployment with MXNet Module and Sagemaker Neo
The **SageMaker Python SDK** makes it easy to train, compile and deploy MXNet models. In this example, we train a simple neural network using the Apache MXNet [Module API](https://mxnet.apache.org/versions/1.5.0/api/python/module/module.html) and the MNIST dataset. The MNIST dataset is widely used for handwritten digit classification, and consists of 70,000 labeled 28x28 pixel grayscale images of hand-written digits. The dataset is split into 60,000 training images and 10,000 test images. There are 10 classes (one for each of the 10 digits). The task at hand is to train a model using the 60,000 training images, compile the trained model using SageMaker Neo and subsequently test its classification accuracy on the 10,000 test images.
### Setup
To get started, we need to first upgrade the [SageMaker SDK for Python](https://sagemaker.readthedocs.io/en/stable/v2.html) to v2.33.0 or greater & restart the kernel. Then we create a session and define a few variables that will be needed later in the example.
```
!~/anaconda3/envs/mxnet_p36/bin/pip install --upgrade sagemaker
import sagemaker
from sagemaker import get_execution_role
from sagemaker.session import Session
# S3 bucket and folder for saving code and model artifacts.
# Feel free to specify a different bucket/folder here if you wish.
bucket = Session().default_bucket()
folder = "DEMO-MXNet-MNIST"
# Location to save your custom code in tar.gz format.
custom_code_upload_location = "s3://{}/{}/custom-code".format(bucket, folder)
# Location where results of model training are saved.
s3_training_output_location = "s3://{}/{}/training-output".format(bucket, folder)
# Location where results of model compilation are saved.
s3_compilation_output_location = "s3://{}/{}/compilation-output".format(bucket, folder)
# IAM execution role that gives SageMaker access to resources in your AWS account.
# We can use the SageMaker Python SDK to get the role from our notebook environment.
role = get_execution_role()
```
### Entry Point Script
The ``mnist.py`` script provides all the code we need for training and hosting a SageMaker model. The script we will use is adapted from Apache MXNet [MNIST tutorial](https://mxnet.incubator.apache.org/versions/1.5.0/tutorials/python/mnist.html).
```
!pygmentize mnist.py
```
In the training script ``mnist.py``, there are two additional functions, to be used with Neo:
* `model_fn()`: Loads the compiled model and runs a warm-up inference on a valid empty data
* `transform_fn()`: Converts the incoming payload into a NumPy array, performs the prediction and converts the prediction output into the response payload (a hedged sketch is shown after this list)
* Alternatively, instead of `transform_fn()`, these three can be defined: `input_fn()`, `predict_fn()` and `output_fn()`
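As a rough illustration only (the actual handler lives in `mnist.py` and may well differ), a `transform_fn` for this setup could decode the `application/x-npy` payload, run a forward pass, and return JSON. The sketch below assumes `model_fn` returns a plain MXNet `Module`:
```
import io
import json
import numpy as np
import mxnet as mx

def transform_fn(model, request_body, content_type, accept):
    """Hedged sketch of an inference handler; the real mnist.py may differ."""
    # NumpySerializer sends the request as raw .npy bytes (application/x-npy)
    data = np.load(io.BytesIO(request_body))
    # Wrap the array in a DataBatch and run a forward pass through the Module
    batch = mx.io.DataBatch([mx.nd.array(data.reshape(1, 28, 28))])
    model.forward(batch, is_train=False)
    probabilities = model.get_outputs()[0].asnumpy().tolist()[0]
    # Return the class probabilities as a JSON string
    return json.dumps(probabilities), accept
```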
### Creating SageMaker's MXNet estimator
The SageMaker ```MXNet``` estimator allows us to run single machine or distributed training in SageMaker, using CPU or GPU-based instances.
When we create the estimator, we pass in the filename of our training script as the entry_point, the name of our IAM execution role, and the S3 locations we defined in the setup section. We also provide ``instance_count`` and ``instance_type``, which allow us to specify the number and type of SageMaker instances that will be used for the training job. The ``hyperparameters`` parameter is a ``dict`` of values that will be passed to your training script -- you can see how to access these values in the ``mnist.py`` script above.
For this example, we will choose one ``ml.c5.4xlarge`` instance.
```
from sagemaker.mxnet import MXNet
mnist_estimator = MXNet(
entry_point="mnist.py",
role=role,
output_path=s3_training_output_location,
code_location=custom_code_upload_location,
instance_count=1,
instance_type="ml.c5.4xlarge",
framework_version="1.8.0",
py_version="py37",
distribution={"parameter_server": {"enabled": True}},
hyperparameters={"learning-rate": 0.1},
)
```
### Running the Training Job
After we've constructed our MXNet object, we can fit it using data stored in S3. Below we run SageMaker training on two input channels: **train** and **test**. During training, SageMaker makes this data stored in S3 available in the local filesystem where the ```mnist.py``` script is running. The script loads the train and test data from disk.
```
%%time
import boto3
region = boto3.Session().region_name
train_data_location = "s3://sagemaker-sample-data-{}/mxnet/mnist/train".format(region)
test_data_location = "s3://sagemaker-sample-data-{}/mxnet/mnist/test".format(region)
mnist_estimator.fit({"train": train_data_location, "test": test_data_location})
```
### Optimizing the trained model with SageMaker Neo
The Neo API allows us to optimize the model for a specific hardware type. When calling the `compile_model()` function, we specify the target instance family, the correct input shapes for the model, the name of our IAM execution role, the S3 bucket to which the compiled model will be stored, and we set `MMS_DEFAULT_RESPONSE_TIMEOUT` to 500. For this example, we will choose ``ml_c5`` as the target instance family.
**Important: If the following command result in a permission error, scroll up and locate the value of execution role returned by `get_execution_role()`. The role must have access to the S3 bucket specified in ``output_path``.**
```
compiled_model = mnist_estimator.compile_model(
target_instance_family="ml_c5",
input_shape={"data": [1, 28, 28]},
role=role,
output_path=s3_compilation_output_location,
framework="mxnet",
framework_version="1.8",
env={"MMS_DEFAULT_RESPONSE_TIMEOUT": "500"},
)
```
### Creating an inference Endpoint
We can deploy this compiled model using the ``deploy()`` function, for which we need to use an ``instance_type`` belonging to the ``target_instance_family`` we used for compilation. For this example, we will choose an ``ml.c5.4xlarge`` instance as we compiled for ``ml_c5``. The function also allows us to set ``initial_instance_count``, the number of instances that will be used for the Endpoint. We also pass ``NumpySerializer()``, whose ``CONTENT_TYPE`` is ``application/x-npy``, which ensures that the endpoint will receive a NumPy array as the payload during inference. The ``deploy()`` function creates a SageMaker endpoint that we can use to perform inference.
**Note:** If you compiled the model for a GPU ``target_instance_family`` then please make sure to deploy to one of the same target ``instance_type`` below and also make necessary changes in `mnist.py`
```
from sagemaker.serializers import NumpySerializer
serializer = NumpySerializer()
predictor = compiled_model.deploy(
initial_instance_count=1, instance_type="ml.c5.4xlarge", serializer=serializer
)
```
### Making an inference request
Now that our Endpoint is deployed and we have a ``predictor`` object, we can use it to classify handwritten digits.
To see inference in action, we load the `input.npy` file, which was generated using the provided `get_input.py` script and contains the data equivalent of a hand-drawn digit `0`. If you would like to draw a different digit and generate a new `input.npy` file, you can do so by running the `get_input.py` script. A GUI-enabled device is required to run the script, which generates the `input.npy` file once a digit is drawn.
```
import numpy as np
numpy_ndarray = np.load("input.npy")
```
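If a GUI-enabled device is not available to regenerate `input.npy`, a synthetic array with the compiled input shape can be used just to exercise the endpoint (it will not resemble a real digit, so the predicted class is meaningless):
```
# Hypothetical stand-in input: an all-zero 1x28x28 "image" matching the compiled input shape
numpy_ndarray = np.zeros((1, 28, 28), dtype=np.float32)
```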
Now we can use the ``predictor`` object to classify the handwritten digit.
```
response = predictor.predict(data=numpy_ndarray)
print("Raw prediction result:")
print(response)
labeled_predictions = list(zip(range(10), response))
print("Labeled predictions: ")
print(labeled_predictions)
labeled_predictions.sort(key=lambda label_and_prob: 1.0 - label_and_prob[1])
print("Most likely answer: {}".format(labeled_predictions[0]))
```
### (Optional) Delete the Endpoint
After you have finished with this example, remember to delete the prediction endpoint to release the instance(s) associated with it.
```
print("Endpoint name: " + predictor.endpoint_name)
predictor.delete_endpoint()
```
```
# Dependencies and Setup I
%matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
```
# Reflect Tables into SQLAlchemy ORM
```
# Python SQL Toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
# create engine to hawaii.sqlite
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# Reflect an Existing Database Into a New Model
Base = automap_base()
# Reflect the Tables
Base.prepare(engine, reflect=True)
# View All of the Classes that Automap Found
Base.classes.keys()
# Save References to Each Table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create a session (link) from Python to the database
session=Session(engine)
```
# Exploratory Precipitation Analysis
```
# Find the most recent date in the data set.
last_date = session.query(Measurement.date).order_by(Measurement.date.desc()).first()
last_date
# Calculate the date one year from the last date in data set.
one_year_ago = dt.date(2017,8,23) - dt.timedelta(days=365)
one_year_ago
# Design a Query to Retrieve the Last 12 Months of Precipitation Data
prcp_data = session.query(Measurement.date, Measurement.prcp).\
filter(Measurement.date >= one_year_ago).\
order_by(Measurement.date).all()
# Perform a Query to Retrieve the Data and Precipitation Scores
all_scores = session.query(Measurement.date, Measurement.prcp).order_by(Measurement.date.desc()).all()
# Save the Query Results as a Pandas DataFrame and Set the Index to the Date Column & Sort the Dataframe Values by `date`
prcp_df = pd.DataFrame(prcp_data, columns=["Date","Precipitation"])
prcp_df.set_index("Date", inplace=True,)
prcp_df.head(10)
prcp_df.sort_values('Date')
# Use Pandas Plotting with Matplotlib to `plot` the Data
prcp_df.plot(title="Precipitation Analysis", figsize=(10,5))
plt.legend(loc='upper center')
plt.savefig("Images/precipitation.png")
plt.show()
# Use Pandas to Calculate the Summary Statistics for the Precipitation Data
prcp_df.describe()
```
# Exploratory Station Analysis
```
# Design a query to calculate the total number of stations in the dataset
station_count = session.query(Measurement.station).distinct().count()
station_count
# Design a query to find the most active stations (i.e. what stations have the most rows?)
# List the stations and the counts in descending order.
most_active_stations = session.query(Measurement.station, func.count(Measurement.station)).\
group_by(Measurement.station).\
order_by(func.count(Measurement.station).desc()).all()
most_active_stations
# Using the most active station id from the previous query, calculate the lowest, highest, and average temperature.
sel = [func.min(Measurement.tobs),
func.max(Measurement.tobs),
func.avg(Measurement.tobs)]
min_max_avg_temp = session.query(*sel).\
filter(Measurement.station == "USC00519281").all()
min_max_avg_temp
# Using the most active station id
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
tobs_data = session.query(Measurement.tobs).\
filter(Measurement.date >= one_year_ago).\
filter(Measurement.station == "USC00519281").\
order_by(Measurement.date).all()
# Save the Query Results as a Pandas DataFrame
tobs_data_df = pd.DataFrame(tobs_data, columns=["TOBS"])
# Plot the Results as a Histogram with `bins=12`
tobs_data_df.plot.hist(bins=12, title="Temperature vs. Frequency Histogram", figsize=(10,5))
plt.xlabel("Temperature")
plt.legend(loc="upper right")
plt.tight_layout()
plt.savefig("Images/temperature_vs_frequency.png")
plt.show()
```
# Close session
```
# Close Session
session.close()
```
# Bonus: Temperature Analysis I
```
# "tobs" is "temperature observations"
df = pd.read_csv('hawaii_measurements.csv')
df.head()
hm_df=pd.read_csv('hawaii_measurements.csv')
hm_df
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, Float
class HawaiiPrcpTobs(Base):
__tablename__ = 'prcptobs'
id = Column(Integer, primary_key = True)
station = Column(String)
date = Column(String)
prcp = Column(Float)
tobs = Column(Float)
engine=create_engine('sqlite:///hawaii_measurements.sqlite')
hm_df.to_sql('prcptobs', engine, if_exists='append', index=False)
Base.metadata.create_all(engine)
session=Session(bind=engine)
hm_df=engine.execute('SELECT * FROM prcptobs')
hm_df.fetchall()
print(hm_df.keys())
hm_df=engine.execute('SELECT station FROM prcptobs ORDER BY station')
hm_df.fetchall()
session.query(HawaiiPrcpTobs.station).group_by(HawaiiPrcpTobs.station).all()
session.query(HawaiiPrcpTobs.station,func.max(HawaiiPrcpTobs.tobs)).group_by(HawaiiPrcpTobs.station).all()
from scipy import stats
from scipy import mean
# Filter data for desired months
# Identify the average temperature for June
avg_temp_j=(session.query(func.avg(HawaiiPrcpTobs.tobs))
.filter(func.strftime('%m',HawaiiPrcpTobs.date) == '06')
.all())
avg_temp_j
# Identify the average temperature for December
avg_temp_d=(session.query(func.avg(HawaiiPrcpTobs.tobs))
.filter(func.strftime('%m',HawaiiPrcpTobs.date) == '12')
.all())
avg_temp_d
# Create collections of temperature data
june_temp=(session.query(HawaiiPrcpTobs.date,HawaiiPrcpTobs.tobs)
.filter(func.strftime('%m',HawaiiPrcpTobs.date) == '06')
.all())
june_temp
december_temp=(session.query(HawaiiPrcpTobs.date,HawaiiPrcpTobs.tobs)
.filter(func.strftime('%m',HawaiiPrcpTobs.date) == '12')
.all())
december_temp
# Filtering Out Null Values From June and December TOBS Lists
j_temp_list = []
for temp in june_temp:
if type(temp.tobs) == int:
j_temp_list.append(temp.tobs)
d_temp_list = []
for temp in december_temp:
if type(temp.tobs) == int:
d_temp_list.append(temp.tobs)
# Run paired t-test
stats.ttest_rel(j_temp_list[0:200],d_temp_list[0:200])
```
A paired t-test is used to compare the average June and December temperatures in Honolulu over the period 2010 to 2017.
The null hypothesis in this case is that there is no statistically significant difference between the mean June temperature and the mean December temperature in Honolulu, Hawaii.
# Analysis
The t-statistic is 21.813.
The p-value is 1.1468e-54, which is far below the standard thresholds of 0.05 or 0.01, so the null hypothesis is rejected.
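The same conclusion can be drawn programmatically by unpacking the test result and comparing the p-value against a significance level; a small sketch reusing the lists built above:
```
# Unpack the paired t-test result and compare the p-value against a significance level
t_stat, p_value = stats.ttest_rel(j_temp_list[0:200], d_temp_list[0:200])
alpha = 0.05
print(f"t = {t_stat:.3f}, p = {p_value:.3e}, reject H0: {p_value < alpha}")
```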
```
# create engine to hawaii.sqlite
engine = create_engine("sqlite:///hawaii.sqlite")
```
## Set-up
```
%run functions.ipynb
%matplotlib inline
import tweepy
import configparser
import os
import json
import GetOldTweets3 as got
import datetime
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
import string
import random
from nltk.sentiment.vader import SentimentIntensityAnalyzer
import re
import csv
import math
from collections import Counter
jan_tweets = load_tweets('data/1/tweets_2020-01-01_to_2020-02-01.json')
feb_tweets = load_tweets('data/2/tweets_2020-02-01_to_2020-03-01.json')
mar_tweets = load_tweets('data/3/tweets_2020-03-01_to_2020-04-01.json')
apr_tweets = load_tweets('data/4/tweets_2020-04-01_to_2020-05-01.json')
all_time = load_tweets('data/all_time/tweets_2020-01-01_to_2020-05-01.json')
trump_tweets = load_tweets('data/all_time/realdonaldtrump_2020-01-01_to_2020-05-01.json')
pompeo_tweets = load_tweets('data/all_time/secpompeo_2020-01-01_to_2020-05-01.json')
racist_tweets = load_tweets('data/all_time/racist_tweets_2020-01-01_to_2020-05-01.json')
len(jan_tweets), len(feb_tweets), len(mar_tweets), len(apr_tweets)
len(all_time)
len(trump_tweets),len(pompeo_tweets)
len(racist_tweets)
corpus1 = json.load(open('data/corpus_index1.json'))
corpus2 = json.load(open('data/corpus_index2.json'))
corpus3 = json.load(open('data/corpus_index3.json'))
corpus4 = json.load(open('data/corpus_index4.json'))
corp_all = json.load(open('data/corpus_index_all.json'))
len(corpus1), len(corpus2),len(corpus3),len(corpus4)
len(corp_all)
```
## How Trump has fueled the discussion on COVID-19 and Asian-American racism
```
d = Counter(tweet['date'][:10] for tweet in all_time)
dftweets_raw = pd.DataFrame.from_dict(d, orient='index').reset_index()
dftweets_cleaned = dftweets_raw.rename(columns = {"index": "date", 0: "count"})
dftweets = dftweets_cleaned.sort_values(by='date')
dftweets.head()
d = Counter(tweet['date'][:10] for tweet in trump_tweets)
dftrump_raw = pd.DataFrame.from_dict(d, orient='index').reset_index()
dftrump_cleaned = dftrump_raw.rename(columns = {"index": "date", 0: "count"})
dftrump = dftrump_cleaned.sort_values(by='date')
dftrump
d = Counter(tweet['date'][:10] for tweet in pompeo_tweets)
dfpompeo_raw = pd.DataFrame.from_dict(d, orient='index').reset_index()
dfpompeo_cleaned = dfpompeo_raw.rename(columns = {"index": "date", 0: "count"})
dfpompeo = dfpompeo_cleaned.sort_values(by='date')
dfpompeo
d = Counter(article['Date'] for article in corp_all)
dflexis_raw = pd.DataFrame.from_dict(d, orient='index').reset_index()
dflexis_cleaned = dflexis_raw.rename(columns = {"index": "date", 0: "count"})
dflexis = dflexis_cleaned.sort_values(by='date')
dflexis.head()
d = Counter(tweet['date'][:10] for tweet in racist_tweets)
dfracist_tweets_raw = pd.DataFrame.from_dict(d, orient='index').reset_index()
dfracist_tweets_cleaned = dfracist_tweets_raw.rename(columns = {"index": "date", 0: "count"})
dfracist_tweets = dfracist_tweets_cleaned.sort_values(by='date')
dfracist_tweets.head()
fig = plt.figure(figsize = (10,5))
ax2 = fig.add_subplot(1, 1, 1)
ax2.set_title("Tweets with Racist Words/Phrases Over Time")
ax2.set_xlabel("Date")
ax2.set_ylabel("Number of Tweets")
plt.plot('date', 'count', data=dfracist_tweets, label='Racist Tweets')
plt.plot('date', 'count', data=dftrump, label=' Trump\'s Tweets ')
plt.plot('date', 'count', data=dfpompeo, label=' Pompeo\'s Tweets ')
plt.legend()
plt.xticks(dflexis["date"][::5], rotation = 90)
fig = plt.figure(figsize = (10,5))
ax2 = fig.add_subplot(1, 1, 1)
ax2.set_title("Tweets Over Time")
ax2.set_xlabel("Date")
ax2.set_ylabel("Number of Tweets")
plt.plot('date', 'count', data=dfracist_tweets, label='Racist Tweets')
plt.plot('date', 'count', data=dftrump, label=' Trump\'s Tweets ')
plt.plot('date', 'count', data=dfpompeo, label=' Pompeo\'s Tweets ')
plt.legend()
plt.xticks(dflexis["date"][::5], rotation = 90)
fig = plt.figure(figsize = (15,5))
ax1 = fig.add_subplot(1, 2, 1)
ax1.set_title("Tweets Over Time")
ax1.set_xlabel("Date")
ax1.set_ylabel("Number of Tweets")
plt.plot('date', 'count', data=dftweets, label='Overall Tweets')
plt.plot('date', 'count', data=dftrump, label=' Trump\'s Tweets ')
plt.plot('date', 'count', data=dfpompeo, label=' Pompeo\'s Tweets ')
plt.legend()
plt.xticks(dftweets["date"][::5], rotation = 90)
ax2 = fig.add_subplot(1, 2, 2)
ax2.set_title("News Coverage Over Time")
ax2.set_xlabel("Date")
ax2.set_ylabel("Number of News Articles")
plt.plot('date', 'count', data=dflexis, label='Overall Articles')
plt.plot('date', 'count', data=dftrump, label=' Trump\'s Tweets ')
plt.plot('date', 'count', data=dfpompeo, label=' Pompeo\'s Tweets ')
plt.legend()
plt.xticks(dflexis["date"][::5], rotation = 90)
```
## Word Frequency and Bigram/Trigram Distribution
<b>All-time</b>
```
all_t_word_dist=Counter()
all_t_bigram_dist=Counter()
all_t_trigram_dist=Counter()
all_t_tokens = []
for tweet in all_time:
text = tweet['text'].replace('&', '&').replace('”', '').replace('\'', '').replace('’', '').replace('“', '')
toks = tokenize(text, lowercase=True, strip_chars=string.punctuation)
all_t_tokens.extend(toks)
all_t_bigrams=get_ngram_tokens(all_t_tokens,2)
all_t_trigrams=get_ngram_tokens(all_t_tokens,3)
all_t_word_dist.update(all_t_tokens)
all_t_bigram_dist.update(all_t_bigrams)
all_t_trigram_dist.update(all_t_trigrams)
```
## Examining tweets with racist words/phrases
**Word, Bigram, Trigram Distributions**
```
racist_word_dist=Counter()
racist_bigram_dist=Counter()
racist_trigram_dist=Counter()
racist_tokens = []
for tweet in racist_tweets:
text = tweet['text'].replace('&', '&').replace('”', '').replace('\'', '').replace('’', '').replace('“', '')
toks = tokenize(text, lowercase=True, strip_chars=string.punctuation)
racist_tokens.extend(toks)
racist_bigrams=get_ngram_tokens(racist_tokens,2)
racist_trigrams=get_ngram_tokens(racist_tokens,3)
racist_word_dist.update(racist_tokens)
racist_bigram_dist.update(racist_bigrams)
racist_trigram_dist.update(racist_trigrams)
racist_queries = ["ching chong",'ching','chong', 'chink', 'chingchong', "kung flu",'kung','fu', "kung fu flu", "ching chong virus",'coronavirus', 'corona virus', 'covid19', 'covid 19']
s_racist_tweets_tokens = racist_tokens
words_to_remove= stopwords.words('english')+racist_queries
for tweet in list(s_racist_tweets_tokens):
if tweet in words_to_remove:
s_racist_tweets_tokens.remove(tweet)
s_racist_tweets_tokens = [x for x in s_racist_tweets_tokens if not x.startswith('https')]
racist_tweets_wfreq = Counter(s_racist_tweets_tokens)
s_racist_bigrams = get_ngram_tokens(s_racist_tweets_tokens,2)
s_racist_bigrams_dist = Counter(s_racist_bigrams)
s_racist_trigrams = get_ngram_tokens(s_racist_tweets_tokens,3)
s_racist_trigrams_dist = Counter(s_racist_trigrams)
racist_bigram_dist.most_common(100)
s_racist_bigrams_dist.most_common(30)
racist_tweets_wfreq.most_common()
s_racist_trigrams_dist.most_common(10)
```
**Collocation**
```
tweet_colls = Counter()
tweet_colls.update(collocates(racist_tokens, 'funny',win=[5,5]))
plot_collocates('funny', tweet_colls, num=15, threshold=2,
title='Tweet collocates of funny (win=5)')
```
**KWIC Concordances**
```
racist_kwic = make_kwic('funny', racist_tokens)
print_kwic(sort_kwic(racist_kwic,['R1']))
racist_kwic = make_kwic('trump', racist_tokens)
print_kwic(sort_kwic(racist_kwic,['R1']))
```
## Tweets over time
### Bigrams and Trigrams
<b>January</b>
```
jan_t_word_dist=Counter()
jan_t_bigram_dist=Counter()
jan_t_trigram_dist=Counter()
jan_t_tokens = []
for tweet in jan_tweets:
text = tweet['text'].replace('&', '&').replace('”', '').replace('\'', '').replace('’', '').replace('“', '')
toks = tokenize(text, lowercase=True, strip_chars=string.punctuation)
jan_t_tokens.extend(toks)
jan_t_bigrams=get_ngram_tokens(jan_t_tokens,2)
jan_t_trigrams=get_ngram_tokens(jan_t_tokens,3)
jan_t_word_dist.update(jan_t_tokens)
jan_t_bigram_dist.update(jan_t_bigrams)
jan_t_trigram_dist.update(jan_t_trigrams)
top_20_bigrams = jan_t_bigram_dist.most_common(20)
top_20_trigrams = jan_t_trigram_dist.most_common(20)
bigram_df = pd.DataFrame(top_20_bigrams, columns = ['Bigram','Freq'])
bigram_list = list(bigram_df['Bigram'])
trigram_df = pd.DataFrame(top_20_trigrams, columns = ['Trigram','Freq'])
trigram_list = list(trigram_df['Trigram'])
rank = list(range(1, 21))
jan_bitrigram = pd.DataFrame(rank, columns = ['Rank'])
jan_bitrigram['Bigram']=bigram_list
jan_bitrigram['Trigram']=trigram_list
jan_bitrigram.set_index('Rank', inplace=True)
jan_bitrigram
```
<b>February</b>
```
feb_t_word_dist=Counter()
feb_t_bigram_dist=Counter()
feb_t_trigram_dist=Counter()
feb_t_tokens = []
for tweet in feb_tweets:
text = tweet['text'].replace('&', '&').replace('”', '').replace('\'', '').replace('’', '').replace('“', '')
toks = tokenize(text, lowercase=True, strip_chars=string.punctuation)
feb_t_tokens.extend(toks)
feb_t_bigrams=get_ngram_tokens(feb_t_tokens,2)
feb_t_trigrams=get_ngram_tokens(feb_t_tokens,3)
feb_t_word_dist.update(feb_t_tokens)
feb_t_bigram_dist.update(feb_t_bigrams)
feb_t_trigram_dist.update(feb_t_trigrams)
feb_top_20_bigrams = feb_t_bigram_dist.most_common(20)
feb_top_20_trigrams = feb_t_trigram_dist.most_common(20)
feb_bigram_df = pd.DataFrame(feb_top_20_bigrams, columns = ['Bigram','Freq'])
feb_bigram_list = list(feb_bigram_df['Bigram'])
feb_trigram_df = pd.DataFrame(feb_top_20_trigrams, columns = ['Trigram','Freq'])
feb_trigram_list = list(feb_trigram_df['Trigram'])
rank = list(range(1, 21))
feb_bitrigram = pd.DataFrame(rank, columns = ['Rank'])
feb_bitrigram['Bigram']=feb_bigram_list
feb_bitrigram['Trigram']=feb_trigram_list
feb_bitrigram.set_index('Rank', inplace=True)
feb_bitrigram
```
<b>March</b>
```
mar_t_word_dist=Counter()
mar_t_bigram_dist=Counter()
mar_t_trigram_dist=Counter()
mar_t_tokens = []
for tweet in mar_tweets:
text = tweet['text'].replace('&', '&').replace('”', '').replace('\'', '').replace('’', '').replace('“', '')
toks = tokenize(text, lowercase=True, strip_chars=string.punctuation)
mar_t_tokens.extend(toks)
mar_t_bigrams=get_ngram_tokens(mar_t_tokens,2)
mar_t_trigrams=get_ngram_tokens(mar_t_tokens,3)
mar_t_word_dist.update(mar_t_tokens)
mar_t_bigram_dist.update(mar_t_bigrams)
mar_t_trigram_dist.update(mar_t_trigrams)
mar_top_20_bigrams = mar_t_bigram_dist.most_common(20)
mar_top_20_trigrams = mar_t_trigram_dist.most_common(20)
mar_bigram_df = pd.DataFrame(mar_top_20_bigrams, columns = ['Bigram','Freq'])
mar_bigram_list = list(mar_bigram_df['Bigram'])
mar_trigram_df = pd.DataFrame(mar_top_20_trigrams, columns = ['Trigram','Freq'])
mar_trigram_list = list(mar_trigram_df['Trigram'])
rank = list(range(1, 21))
mar_bitrigram = pd.DataFrame(rank, columns = ['Rank'])
mar_bitrigram['Bigram']=mar_bigram_list
mar_bitrigram['Trigram']=mar_trigram_list
mar_bitrigram.set_index('Rank', inplace=True)
mar_bitrigram
```
<b>April</b>
```
apr_t_word_dist=Counter()
apr_t_bigram_dist=Counter()
apr_t_trigram_dist=Counter()
apr_t_tokens = []
for tweet in apr_tweets:
text = tweet['text'].replace('&', '&').replace('”', '').replace('\'', '').replace('’', '').replace('“', '')
toks = tokenize(text, lowercase=True, strip_chars=string.punctuation)
apr_t_tokens.extend(toks)
apr_t_bigrams=get_ngram_tokens(apr_t_tokens,2)
apr_t_trigrams=get_ngram_tokens(apr_t_tokens,3)
apr_t_word_dist.update(apr_t_tokens)
apr_t_bigram_dist.update(apr_t_bigrams)
apr_t_trigram_dist.update(apr_t_trigrams)
apr_top_20_bigrams = apr_t_bigram_dist.most_common(20)
apr_top_20_trigrams = apr_t_trigram_dist.most_common(20)
apr_bigram_df = pd.DataFrame(apr_top_20_bigrams, columns = ['Bigram','Freq'])
apr_bigram_list = list(apr_bigram_df['Bigram'])
apr_trigram_df = pd.DataFrame(apr_top_20_trigrams, columns = ['Trigram','Freq'])
apr_trigram_list = list(apr_trigram_df['Trigram'])
rank = list(range(1, 21))
apr_bitrigram = pd.DataFrame(rank, columns = ['Rank'])
apr_bitrigram['Bigram']=apr_bigram_list
apr_bitrigram['Trigram']=apr_trigram_list
apr_bitrigram.set_index('Rank', inplace=True)
apr_bitrigram
```
### Comparing key words across time
```
comparison_data = compare_items(jan_t_word_dist, feb_t_word_dist,mar_t_word_dist,apr_t_word_dist,['chinese', 'community', 'family', 'antiasian', 'trump', 'hate', 'fears', 'texas', 'stab', 'grocery', 'attacks'])
print('Tweet Keyword Frequency by Month')
comparison_plot(comparison_data, label1= "January Tweets", label2= "February Tweets", label3= "March Tweets", label4= "April Tweets")
```
## Data cleaning
### Find most common words other than the words in the query search
```
queries = ['asianamerican', 'asian', 'american', \
'racism', 'racist', 'xenophobia', 'racism', 'racist', 'xenophobia', \
'coronavirus', 'corona virus', 'covid19', 'covid 19', 'pandemic', 'virus', "chinese virus", "china virus", \
'coronavirus', 'covid19', 'pandemic', 'chinavirus', 'chinesevirus']
stripped_tweets_tokens = all_t_tokens
words_to_remove= stopwords.words('english')+queries
for tweet in list(stripped_tweets_tokens):
if tweet in words_to_remove:
stripped_tweets_tokens.remove(tweet)
stripped_tweets_tokens = [x for x in stripped_tweets_tokens if not x.startswith('https')]
stripped_tweets_wfreq = Counter(stripped_tweets_tokens)
stripped_tweets_wfreq.most_common(30)
```
## How distributions change over time
<b> We'll first work with old Tweets.</b>
- Combine January and February Tweets
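`DictListUpdate` comes from `functions.ipynb`; it is assumed here to simply concatenate the two lists of tweet dicts (the real helper may deduplicate). A minimal sketch of that assumed behaviour, in case the helper is not available:
```
def DictListUpdate(list_a, list_b):
    # Assumed behaviour of the functions.ipynb helper: concatenate two lists of tweet dicts
    return list(list_a) + list(list_b)
```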
```
old_tweets = DictListUpdate(jan_tweets,feb_tweets)
old_tweets_tokens = []
for tweet in old_tweets:
text = tweet['text'].replace('&', '&').replace('”', '').replace('\'', '').replace('’', '').replace('“', '')
toks = tokenize(text, lowercase=True, strip_chars=string.punctuation)
old_tweets_tokens.extend(toks)
stripped_oldtweets_tokens = old_tweets_tokens
words_to_remove= stopwords.words('english')+queries
for tweet in list(stripped_oldtweets_tokens):
if tweet in words_to_remove:
stripped_oldtweets_tokens.remove(tweet)
stripped_oldtweets_tokens = [x for x in stripped_oldtweets_tokens if not x.startswith('https')]
stripped_oldtweets_wfreq = Counter(stripped_oldtweets_tokens)
stripped_oldtweets_wfreq.most_common(30)
```
<b> Now, let's look at recent Tweets. </b>
- Combine March and April Tweets
```
recent_tweets = DictListUpdate(mar_tweets,apr_tweets)
recent_tweets_tokens = []
for tweet in recent_tweets:
text = tweet['text'].replace('&', '&').replace('”', '').replace('\'', '').replace('’', '').replace('“', '')
toks = tokenize(text, lowercase=True, strip_chars=string.punctuation)
recent_tweets_tokens.extend(toks)
stripped_recenttweets_tokens = recent_tweets_tokens
words_to_remove= stopwords.words('english')+queries
for tweet in list(stripped_recenttweets_tokens):
if tweet in words_to_remove:
stripped_recenttweets_tokens.remove(tweet)
stripped_recenttweets_tokens = [x for x in stripped_recenttweets_tokens if not x.startswith('https')]
stripped_recenttweets_wfreq = Counter(stripped_recenttweets_tokens)
stripped_recenttweets_wfreq.most_common(30)
```
### Comparing with a keyness analysis
```
old_size = len(stripped_oldtweets_tokens)
recent_size = len(stripped_recenttweets_tokens)
top_old = stripped_oldtweets_wfreq.most_common(30)
top_recent = stripped_recenttweets_wfreq.most_common(30)
print("{: <20}{: <8}{:}\t\t{: <10}{:}\t{:}".format('word', 'old', 'norm_old', 'recent', 'norm_recent', 'LL'))
print("="*80)
row_template = "{: <20}{: <8}{:0.2f}\t\t{: <10}{:0.2f}\t{: 0.2f}"
for word, freq in top_old:
old = freq
recent = stripped_recenttweets_wfreq.get(word,0)
norm_old = old/old_size * 1000
norm_recent = recent/recent_size * 1000
LL = 0 if recent==0 else log_likelihood(old, old_size, recent, recent_size)
print(row_template.format(word, old, norm_old, recent, norm_recent, LL))
print("{: <20}{: <8}{:}\t\t{: <10}{:}\t{:}".format('word', 'old', 'norm_old', 'recent', 'norm_recent', 'LL'))
print("="*80)
row_template = "{: <20}{: <8}{:0.2f}\t\t{: <10}{:0.2f}\t{: 0.2f}"
for word, freq in top_recent:
recent = freq
old = stripped_oldtweets_wfreq.get(word,0)
norm_old = old/old_size * 1000
norm_recent = recent/recent_size * 1000
LL = 0 if old==0 else log_likelihood(recent, recent_size, old, old_size)
print(row_template.format(word, recent, norm_recent, old, norm_old, LL))
```
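The `log_likelihood` helper also comes from `functions.ipynb`; for reference, the usual Dunning log-likelihood (keyness) statistic for a word that occurs `freq_a` times in a corpus of `size_a` tokens and `freq_b` times in a corpus of `size_b` tokens can be sketched as follows (a minimal sketch, not necessarily identical to the helper used here):
```
import math

def log_likelihood_sketch(freq_a, size_a, freq_b, size_b):
    # Expected counts under the null hypothesis that both corpora share one relative frequency
    expected_a = size_a * (freq_a + freq_b) / (size_a + size_b)
    expected_b = size_b * (freq_a + freq_b) / (size_a + size_b)
    ll = 0.0
    if freq_a > 0:
        ll += freq_a * math.log(freq_a / expected_a)
    if freq_b > 0:
        ll += freq_b * math.log(freq_b / expected_b)
    return 2 * ll
```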
## What else fuels discussion?
```
silent_tweets = []
for tweet in all_time:
if tweet['replies']==0:
silent_tweets.append(tweet)
silent_tokens = []
for tweet in silent_tweets:
text = tweet['text'].replace('&', '&').replace('”', '').replace('\'', '').replace('’', '').replace('“', '')
toks = tokenize(text, lowercase=True, strip_chars=string.punctuation)
silent_tokens.extend(toks)
stripped_silent_tokens = silent_tokens
words_to_remove= stopwords.words('english')+queries
for tweet in list(stripped_silent_tokens):
if tweet in words_to_remove:
stripped_silent_tokens.remove(tweet)
stripped_silent_tokens = [x for x in stripped_silent_tokens if not x.startswith('https')]
stripped_silent_wfreq = Counter(stripped_silent_tokens)
discussion_creators = []
for tweet in all_time:
if tweet['replies']>0:
discussion_creators.append(tweet)
discussion_tokens = []
for tweet in discussion_creators:
text = tweet['text'].replace('&', '&').replace('”', '').replace('\'', '').replace('’', '').replace('“', '')
toks = tokenize(text, lowercase=True, strip_chars=string.punctuation)
discussion_tokens.extend(toks)
stripped_discussion_tokens = discussion_tokens
words_to_remove= stopwords.words('english')+queries
for tweet in list(stripped_discussion_tokens):
if tweet in words_to_remove:
stripped_discussion_tokens.remove(tweet)
stripped_discussion_tokens = [x for x in stripped_discussion_tokens if not x.startswith('https')]
stripped_discussion_wfreq = Counter(stripped_discussion_tokens)
silent_kwic = make_kwic('trump', stripped_silent_tokens)
silent_kwic_sample = random.sample(silent_kwic,30)
print_kwic(sort_kwic(silent_kwic_sample,['R1']))
discussion_kwic = make_kwic('trump', stripped_discussion_tokens)
discussion_kwic_sample = random.sample(discussion_kwic,30)
print_kwic(sort_kwic(discussion_kwic_sample,['R1']))
```
|
github_jupyter
|
%run functions.ipynb
%matplotlib inline
import tweepy
import configparser
import os
import json
import GetOldTweets3 as got
import datetime
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
import string
import random
from nltk.sentiment.vader import SentimentIntensityAnalyzer
import re
import csv
import math
from collections import Counter
jan_tweets = load_tweets('data/1/tweets_2020-01-01_to_2020-02-01.json')
feb_tweets = load_tweets('data/2/tweets_2020-02-01_to_2020-03-01.json')
mar_tweets = load_tweets('data/3/tweets_2020-03-01_to_2020-04-01.json')
apr_tweets = load_tweets('data/4/tweets_2020-04-01_to_2020-05-01.json')
all_time = load_tweets('data/all_time/tweets_2020-01-01_to_2020-05-01.json')
trump_tweets = load_tweets('data/all_time/realdonaldtrump_2020-01-01_to_2020-05-01.json')
pompeo_tweets = load_tweets('data/all_time/secpompeo_2020-01-01_to_2020-05-01.json')
racist_tweets = load_tweets('data/all_time/racist_tweets_2020-01-01_to_2020-05-01.json')
len(jan_tweets), len(feb_tweets), len(mar_tweets), len(apr_tweets)
len(all_time)
len(trump_tweets),len(pompeo_tweets)
len(racist_tweets)
corpus1 = json.load(open('data/corpus_index1.json'))
corpus2 = json.load(open('data/corpus_index2.json'))
corpus3 = json.load(open('data/corpus_index3.json'))
corpus4 = json.load(open('data/corpus_index4.json'))
corp_all = json.load(open('data/corpus_index_all.json'))
len(corpus1), len(corpus2),len(corpus3),len(corpus4)
len(corp_all)
d = Counter(tweet['date'][:10] for tweet in all_time)
dftweets_raw = pd.DataFrame.from_dict(d, orient='index').reset_index()
dftweets_cleaned = dftweets_raw.rename(columns = {"index": "date", 0: "count"})
dftweets = dftweets_cleaned.sort_values(by='date')
dftweets.head()
d = Counter(tweet['date'][:10] for tweet in trump_tweets)
dftrump_raw = pd.DataFrame.from_dict(d, orient='index').reset_index()
dftrump_cleaned = dftrump_raw.rename(columns = {"index": "date", 0: "count"})
dftrump = dftrump_cleaned.sort_values(by='date')
dftrump
d = Counter(tweet['date'][:10] for tweet in pompeo_tweets)
dfpompeo_raw = pd.DataFrame.from_dict(d, orient='index').reset_index()
dfpompeo_cleaned = dfpompeo_raw.rename(columns = {"index": "date", 0: "count"})
dfpompeo = dfpompeo_cleaned.sort_values(by='date')
dfpompeo
d = Counter(article['Date'] for article in corp_all)
dflexis_raw = pd.DataFrame.from_dict(d, orient='index').reset_index()
dflexis_cleaned = dflexis_raw.rename(columns = {"index": "date", 0: "count"})
dflexis = dflexis_cleaned.sort_values(by='date')
dflexis.head()
d = Counter(tweet['date'][:10] for tweet in racist_tweets)
dfracist_tweets_raw = pd.DataFrame.from_dict(d, orient='index').reset_index()
dfracist_tweets_cleaned = dfracist_tweets_raw.rename(columns = {"index": "date", 0: "count"})
dfracist_tweets = dfracist_tweets_cleaned.sort_values(by='date')
dfracist_tweets.head()
fig = plt.figure(figsize = (10,5))
ax2 = fig.add_subplot(1, 1, 1)
ax2.set_title("Tweets with Racist Words/Phrases Over Time")
ax2.set_xlabel("Date")
ax2.set_ylabel("Number of Tweets")
plt.plot('date', 'count', data=dfracist_tweets, label='Racist Tweets')
plt.plot('date', 'count', data=dftrump, label=' Trump\'s Tweets ')
plt.plot('date', 'count', data=dfpompeo, label=' Pompeo\'s Tweets ')
plt.legend()
plt.xticks(dflexis["date"][::5], rotation = 90)
fig = plt.figure(figsize = (10,5))
ax2 = fig.add_subplot(1, 1, 1)
ax2.set_title("Tweets Over Time")
ax2.set_xlabel("Date")
ax2.set_ylabel("Number of Tweets")
plt.plot('date', 'count', data=dfracist_tweets, label='Racist Tweets')
plt.plot('date', 'count', data=dftrump, label=' Trump\'s Tweets ')
plt.plot('date', 'count', data=dfpompeo, label=' Pompeo\'s Tweets ')
plt.legend()
plt.xticks(dflexis["date"][::5], rotation = 90)
fig = plt.figure(figsize = (15,5))
ax1 = fig.add_subplot(1, 2, 1)
ax1.set_title("Tweets Over Time")
ax1.set_xlabel("Date")
ax1.set_ylabel("Number of Tweets")
plt.plot('date', 'count', data=dftweets, label='Overall Tweets')
plt.plot('date', 'count', data=dftrump, label=' Trump\'s Tweets ')
plt.plot('date', 'count', data=dfpompeo, label=' Pompeo\'s Tweets ')
plt.legend()
plt.xticks(dftweets["date"][::5], rotation = 90)
ax2 = fig.add_subplot(1, 2, 2)
ax2.set_title("News Coverage Over Time")
ax2.set_xlabel("Date")
ax2.set_ylabel("Number of News Articles")
plt.plot('date', 'count', data=dflexis, label='Overall Articles')
plt.plot('date', 'count', data=dftrump, label=' Trump\'s Tweets ')
plt.plot('date', 'count', data=dfpompeo, label=' Pompeo\'s Tweets ')
plt.legend()
plt.xticks(dflexis["date"][::5], rotation = 90)
all_t_word_dist=Counter()
all_t_bigram_dist=Counter()
all_t_trigram_dist=Counter()
all_t_tokens = []
for tweet in all_time:
text = tweet['text'].replace('&', '&').replace('”', '').replace('\'', '').replace('’', '').replace('“', '')
toks = tokenize(text, lowercase=True, strip_chars=string.punctuation)
all_t_tokens.extend(toks)
all_t_bigrams=get_ngram_tokens(all_t_tokens,2)
all_t_trigrams=get_ngram_tokens(all_t_tokens,3)
all_t_word_dist.update(all_t_tokens)
all_t_bigram_dist.update(all_t_bigrams)
all_t_trigram_dist.update(all_t_trigrams)
racist_word_dist=Counter()
racist_bigram_dist=Counter()
racist_trigram_dist=Counter()
racist_tokens = []
for tweet in racist_tweets:
text = tweet['text'].replace('&', '&').replace('”', '').replace('\'', '').replace('’', '').replace('“', '')
toks = tokenize(text, lowercase=True, strip_chars=string.punctuation)
racist_tokens.extend(toks)
racist_bigrams=get_ngram_tokens(racist_tokens,2)
racist_trigrams=get_ngram_tokens(racist_tokens,3)
racist_word_dist.update(racist_tokens)
racist_bigram_dist.update(racist_bigrams)
racist_trigram_dist.update(racist_trigrams)
racist_queries = ["ching chong",'ching','chong', 'chink', 'chingchong', "kung flu",'kung','fu', "kung fu flu", "ching chong virus",'coronavirus', 'corona virus', 'covid19', 'covid 19']
s_racist_tweets_tokens = racist_tokens
words_to_remove= stopwords.words('english')+racist_queries
for tweet in list(s_racist_tweets_tokens):
if tweet in words_to_remove:
s_racist_tweets_tokens.remove(tweet)
s_racist_tweets_tokens = [x for x in s_racist_tweets_tokens if not x.startswith('https')]
racist_tweets_wfreq = Counter(s_racist_tweets_tokens)
s_racist_bigrams = get_ngram_tokens(s_racist_tweets_tokens,2)
s_racist_bigrams_dist = Counter(s_racist_bigrams)
s_racist_trigrams = get_ngram_tokens(s_racist_tweets_tokens,3)
s_racist_trigrams_dist = Counter(s_racist_trigrams)
racist_bigram_dist.most_common(100)
s_racist_bigrams_dist.most_common(30)
racist_tweets_wfreq.most_common()
s_racist_trigrams_dist.most_common(10)
tweet_colls = Counter()
tweet_colls.update(collocates(racist_tokens, 'funny',win=[5,5]))
plot_collocates('funny', tweet_colls, num=15, threshold=2,
title='Tweet collocates of funny (win=5)')
racist_kwic = make_kwic('funny', racist_tokens)
print_kwic(sort_kwic(racist_kwic,['R1']))
racist_kwic = make_kwic('trump', racist_tokens)
print_kwic(sort_kwic(racist_kwic,['R1']))
jan_t_word_dist=Counter()
jan_t_bigram_dist=Counter()
jan_t_trigram_dist=Counter()
jan_t_tokens = []
for tweet in jan_tweets:
text = tweet['text'].replace('&', '&').replace('”', '').replace('\'', '').replace('’', '').replace('“', '')
toks = tokenize(text, lowercase=True, strip_chars=string.punctuation)
jan_t_tokens.extend(toks)
jan_t_bigrams=get_ngram_tokens(jan_t_tokens,2)
jan_t_trigrams=get_ngram_tokens(jan_t_tokens,3)
jan_t_word_dist.update(jan_t_tokens)
jan_t_bigram_dist.update(jan_t_bigrams)
jan_t_trigram_dist.update(jan_t_trigrams)
top_20_bigrams = jan_t_bigram_dist.most_common(20)
top_20_trigrams = jan_t_trigram_dist.most_common(20)
bigram_df = pd.DataFrame(top_20_bigrams, columns = ['Bigram','Freq'])
bigram_list = list(bigram_df['Bigram'])
trigram_df = pd.DataFrame(top_20_trigrams, columns = ['Trigram','Freq'])
trigram_list = list(trigram_df['Trigram'])
rank = list(range(1, 21))
jan_bitrigram = pd.DataFrame(rank, columns = ['Rank'])
jan_bitrigram['Bigram']=bigram_list
jan_bitrigram['Trigram']=trigram_list
jan_bitrigram.set_index('Rank', inplace=True)
jan_bitrigram
feb_t_word_dist=Counter()
feb_t_bigram_dist=Counter()
feb_t_trigram_dist=Counter()
feb_t_tokens = []
for tweet in feb_tweets:
    text = tweet['text'].replace('&amp;', '&').replace('”', '').replace('\'', '').replace('’', '').replace('“', '')
toks = tokenize(text, lowercase=True, strip_chars=string.punctuation)
feb_t_tokens.extend(toks)
feb_t_bigrams=get_ngram_tokens(feb_t_tokens,2)
feb_t_trigrams=get_ngram_tokens(feb_t_tokens,3)
feb_t_word_dist.update(feb_t_tokens)
feb_t_bigram_dist.update(feb_t_bigrams)
feb_t_trigram_dist.update(feb_t_trigrams)
feb_top_20_bigrams = feb_t_bigram_dist.most_common(20)
feb_top_20_trigrams = feb_t_trigram_dist.most_common(20)
feb_bigram_df = pd.DataFrame(feb_top_20_bigrams, columns = ['Bigram','Freq'])
feb_bigram_list = list(feb_bigram_df['Bigram'])
feb_trigram_df = pd.DataFrame(feb_top_20_trigrams, columns = ['Trigram','Freq'])
feb_trigram_list = list(feb_trigram_df['Trigram'])
rank = list(range(1, 21))
feb_bitrigram = pd.DataFrame(rank, columns = ['Rank'])
feb_bitrigram['Bigram']=feb_bigram_list
feb_bitrigram['Trigram']=feb_trigram_list
feb_bitrigram.set_index('Rank', inplace=True)
feb_bitrigram
mar_t_word_dist=Counter()
mar_t_bigram_dist=Counter()
mar_t_trigram_dist=Counter()
mar_t_tokens = []
for tweet in mar_tweets:
    text = tweet['text'].replace('&amp;', '&').replace('”', '').replace('\'', '').replace('’', '').replace('“', '')
toks = tokenize(text, lowercase=True, strip_chars=string.punctuation)
mar_t_tokens.extend(toks)
mar_t_bigrams=get_ngram_tokens(mar_t_tokens,2)
mar_t_trigrams=get_ngram_tokens(mar_t_tokens,3)
mar_t_word_dist.update(mar_t_tokens)
mar_t_bigram_dist.update(mar_t_bigrams)
mar_t_trigram_dist.update(mar_t_trigrams)
mar_top_20_bigrams = mar_t_bigram_dist.most_common(20)
mar_top_20_trigrams = mar_t_trigram_dist.most_common(20)
mar_bigram_df = pd.DataFrame(mar_top_20_bigrams, columns = ['Bigram','Freq'])
mar_bigram_list = list(mar_bigram_df['Bigram'])
mar_trigram_df = pd.DataFrame(mar_top_20_trigrams, columns = ['Trigram','Freq'])
mar_trigram_list = list(mar_trigram_df['Trigram'])
rank = list(range(1, 21))
mar_bitrigram = pd.DataFrame(rank, columns = ['Rank'])
mar_bitrigram['Bigram']=mar_bigram_list
mar_bitrigram['Trigram']=mar_trigram_list
mar_bitrigram.set_index('Rank', inplace=True)
mar_bitrigram
apr_t_word_dist=Counter()
apr_t_bigram_dist=Counter()
apr_t_trigram_dist=Counter()
apr_t_tokens = []
for tweet in apr_tweets:
    text = tweet['text'].replace('&amp;', '&').replace('”', '').replace('\'', '').replace('’', '').replace('“', '')
toks = tokenize(text, lowercase=True, strip_chars=string.punctuation)
apr_t_tokens.extend(toks)
apr_t_bigrams=get_ngram_tokens(apr_t_tokens,2)
apr_t_trigrams=get_ngram_tokens(apr_t_tokens,3)
apr_t_word_dist.update(apr_t_tokens)
apr_t_bigram_dist.update(apr_t_bigrams)
apr_t_trigram_dist.update(apr_t_trigrams)
apr_top_20_bigrams = apr_t_bigram_dist.most_common(20)
apr_top_20_trigrams = apr_t_trigram_dist.most_common(20)
apr_bigram_df = pd.DataFrame(apr_top_20_bigrams, columns = ['Bigram','Freq'])
apr_bigram_list = list(apr_bigram_df['Bigram'])
apr_trigram_df = pd.DataFrame(apr_top_20_trigrams, columns = ['Trigram','Freq'])
apr_trigram_list = list(apr_trigram_df['Trigram'])
rank = list(range(1, 21))
apr_bitrigram = pd.DataFrame(rank, columns = ['Rank'])
apr_bitrigram['Bigram']=apr_bigram_list
apr_bitrigram['Trigram']=apr_trigram_list
apr_bitrigram.set_index('Rank', inplace=True)
apr_bitrigram
comparison_data = compare_items(jan_t_word_dist, feb_t_word_dist,mar_t_word_dist,apr_t_word_dist,['chinese', 'community', 'family', 'antiasian', 'trump', 'hate', 'fears', 'texas', 'stab', 'grocery', 'attacks'])
print('Tweet Keyword Frequency by Month')
comparison_plot(comparison_data, label1= "January Tweets", label2= "February Tweets", label3= "March Tweets", label4= "April Tweets")
queries = ['asianamerican', 'asian', 'american', \
'racism', 'racist', 'xenophobia', 'racism', 'racist', 'xenophobia', \
'coronavirus', 'corona virus', 'covid19', 'covid 19', 'pandemic', 'virus', "chinese virus", "china virus", \
'coronavirus', 'covid19', 'pandemic', 'chinavirus', 'chinesevirus']
stripped_tweets_tokens = all_t_tokens
words_to_remove= stopwords.words('english')+queries
for tweet in list(stripped_tweets_tokens):
if tweet in words_to_remove:
stripped_tweets_tokens.remove(tweet)
stripped_tweets_tokens = [x for x in stripped_tweets_tokens if not x.startswith('https')]
stripped_tweets_wfreq = Counter(stripped_tweets_tokens)
stripped_tweets_wfreq.most_common(30)
old_tweets = DictListUpdate(jan_tweets,feb_tweets)
old_tweets_tokens = []
for tweet in old_tweets:
    text = tweet['text'].replace('&amp;', '&').replace('”', '').replace('\'', '').replace('’', '').replace('“', '')
toks = tokenize(text, lowercase=True, strip_chars=string.punctuation)
old_tweets_tokens.extend(toks)
stripped_oldtweets_tokens = old_tweets_tokens
words_to_remove= stopwords.words('english')+queries
for tweet in list(stripped_oldtweets_tokens):
if tweet in words_to_remove:
stripped_oldtweets_tokens.remove(tweet)
stripped_oldtweets_tokens = [x for x in stripped_oldtweets_tokens if not x.startswith('https')]
stripped_oldtweets_wfreq = Counter(stripped_oldtweets_tokens)
stripped_oldtweets_wfreq.most_common(30)
recent_tweets = DictListUpdate(mar_tweets,apr_tweets)
recent_tweets_tokens = []
for tweet in recent_tweets:
    text = tweet['text'].replace('&amp;', '&').replace('”', '').replace('\'', '').replace('’', '').replace('“', '')
toks = tokenize(text, lowercase=True, strip_chars=string.punctuation)
recent_tweets_tokens.extend(toks)
stripped_recenttweets_tokens = recent_tweets_tokens
words_to_remove= stopwords.words('english')+queries
for tweet in list(stripped_recenttweets_tokens):
if tweet in words_to_remove:
stripped_recenttweets_tokens.remove(tweet)
stripped_recenttweets_tokens = [x for x in stripped_recenttweets_tokens if not x.startswith('https')]
stripped_recenttweets_wfreq = Counter(stripped_recenttweets_tokens)
stripped_recenttweets_wfreq.most_common(30)
old_size = len(stripped_oldtweets_tokens)
recent_size = len(stripped_recenttweets_tokens)
top_old = stripped_oldtweets_wfreq.most_common(30)
top_recent = stripped_recenttweets_wfreq.most_common(30)
print("{: <20}{: <8}{:}\t\t{: <10}{:}\t{:}".format('word', 'old', 'norm_old', 'recent', 'norm_recent', 'LL'))
print("="*80)
row_template = "{: <20}{: <8}{:0.2f}\t\t{: <10}{:0.2f}\t{: 0.2f}"
for word, freq in top_old:
old = freq
recent = stripped_recenttweets_wfreq.get(word,0)
norm_old = old/old_size * 1000
norm_recent = recent/recent_size * 1000
LL = 0 if recent==0 else log_likelihood(old, old_size, recent, recent_size)
print(row_template.format(word, old, norm_old, recent, norm_recent, LL))
print("{: <20}{: <8}{:}\t\t{: <10}{:}\t{:}".format('word', 'old', 'norm_old', 'recent', 'norm_recent', 'LL'))
print("="*80)
row_template = "{: <20}{: <8}{:0.2f}\t\t{: <10}{:0.2f}\t{: 0.2f}"
for word, freq in top_recent:
recent = freq
old = stripped_oldtweets_wfreq.get(word,0)
norm_old = old/old_size * 1000
norm_recent = recent/recent_size * 1000
LL = 0 if old==0 else log_likelihood(recent, recent_size, old, old_size)
print(row_template.format(word, recent, norm_recent, old, norm_old, LL))
silent_tweets = []
for tweet in all_time:
if tweet['replies']==0:
silent_tweets.append(tweet)
silent_tokens = []
for tweet in silent_tweets:
    text = tweet['text'].replace('&amp;', '&').replace('”', '').replace('\'', '').replace('’', '').replace('“', '')
toks = tokenize(text, lowercase=True, strip_chars=string.punctuation)
silent_tokens.extend(toks)
stripped_silent_tokens = silent_tokens
words_to_remove= stopwords.words('english')+queries
for tweet in list(stripped_silent_tokens):
if tweet in words_to_remove:
stripped_silent_tokens.remove(tweet)
stripped_silent_tokens = [x for x in stripped_silent_tokens if not x.startswith('https')]
stripped_silent_wfreq = Counter(stripped_silent_tokens)
discussion_creators = []
for tweet in all_time:
if tweet['replies']>0:
discussion_creators.append(tweet)
discussion_tokens = []
for tweet in discussion_creators:
    text = tweet['text'].replace('&amp;', '&').replace('”', '').replace('\'', '').replace('’', '').replace('“', '')
toks = tokenize(text, lowercase=True, strip_chars=string.punctuation)
discussion_tokens.extend(toks)
stripped_discussion_tokens = discussion_tokens
words_to_remove= stopwords.words('english')+queries
for tweet in list(stripped_discussion_tokens):
if tweet in words_to_remove:
stripped_discussion_tokens.remove(tweet)
stripped_discussion_tokens = [x for x in stripped_discussion_tokens if not x.startswith('https')]
stripped_discussion_wfreq = Counter(stripped_discussion_tokens)
silent_kwic = make_kwic('trump', stripped_silent_tokens)
silent_kwic_sample = random.sample(silent_kwic,30)
print_kwic(sort_kwic(silent_kwic_sample,['R1']))
discussion_kwic = make_kwic('trump', stripped_discussion_tokens)
discussion_kwic_sample = random.sample(discussion_kwic,30)
print_kwic(sort_kwic(discussion_kwic_sample,['R1']))
```
import numpy as np
import pandas as pd
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import GridSearchCV
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from sklearn.metrics import mean_absolute_error
train = pd.read_csv("./Data/train_2017.csv")
test = pd.read_csv("./Data/test_2017.csv")
print(train.shape)
train.head()
print(test.shape)
test.head()
X_train = train[['bathroomcnt', 'bedroomcnt', 'calculatedfinishedsquarefeet',
'yearbuilt']].values
y_train = train['taxvaluedollarcnt'].values
X_test = test[['bathroomcnt', 'bedroomcnt', 'calculatedfinishedsquarefeet',
'yearbuilt']].values
y_test = test['taxvaluedollarcnt'].values
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
#X = scaler.fit_transform(X)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
nnmodel = Sequential()
# Input => Hidden
nnmodel.add(Dense(32, input_dim=4, activation='relu'))
# Hidden
nnmodel.add(Dense(32, activation='relu'))
#nnmodel.add(Dense(32, activation='relu'))
#nnmodel.add(Dense(32, activation='relu'))
# Output Layer
nnmodel.add(Dense(1, activation='linear'))
# Compile
nnmodel.compile(loss='mean_absolute_error',
optimizer='adam',
metrics=['mean_absolute_error'])
nnmodel.summary()
nnmodel.fit(X_train, y_train, epochs=10, batch_size=32, validation_split=.3, verbose=1)
scores = nnmodel.evaluate(X_test, y_test)
print(f'{nnmodel.metrics_names[1]}: {scores[1]}')
# Function to create model
def create_model():
# create model
nnmodel = Sequential()
# Input => Hidden
nnmodel.add(Dense(32, input_dim=4, activation='relu'))
# Hidden
nnmodel.add(Dense(32, activation='relu'))
nnmodel.add(Dense(32, activation='relu'))
nnmodel.add(Dense(32, activation='relu'))
# Output Layer
nnmodel.add(Dense(1, activation='linear'))
# Compile
nnmodel.compile(loss='mean_absolute_error',
optimizer='adam',
metrics=['mean_absolute_error'])
    return nnmodel
model = KerasRegressor(build_fn=create_model, verbose=1)
# define the grid search parameters
param_grid = {'batch_size': [10, 20, 40, 60, 80, 100],
'epochs': [10,20,40,60,80]}
# Create Grid Search
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=5)
grid_result = grid.fit(X_train, y_train)
# Report Results
print(f"Best: {grid_result.best_score_} using {grid_result.best_params_}")
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print(f"Means: {mean}, Stdev: {stdev} with: {param}")
def baseline_model(optimizer='adam'):
# create model
model = Sequential()
model.add(Dense(32, activation='relu',
kernel_regularizer = 'l2',
kernel_initializer = 'normal',
input_shape=(4,)))
# model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(32, activation='relu',
kernel_regularizer = 'l2',
kernel_initializer = 'normal'))
#model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(1, activation='linear',
kernel_regularizer = 'l2',
kernel_initializer='normal'))
model.compile(loss='mean_absolute_error', optimizer=optimizer, metrics=['mean_absolute_error'])
return model
def gridSearch_neural_network(X_train, y_train):
print("Train Data:", X_train.shape)
print("Train label:", y_train.shape)
# evaluate model with standardized dataset
estimator = KerasRegressor(build_fn=baseline_model, nb_epoch=10, batch_size=5, verbose=1)
# grid search epochs, batch size and optimizer
optimizers = ['rmsprop', 'adam']
#dropout_rate = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
init = ['glorot_uniform', 'normal', 'uniform']
epochs = [50, 100, 150]
batches = [5, 10, 20]
weight_constraint = [1, 2, 3, 4, 5]
param_grid = dict(optimizer=optimizers,
#dropout_rate=dropout_rate,
epochs=epochs,
batch_size=batches,
#weight_constraint=weight_constraint,
#init=init
)
grid = GridSearchCV(estimator=estimator, param_grid=param_grid)
grid_result = grid.fit(X_train, y_train)
# summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print("%f (%f) with: %r" % (mean, stdev, param))
gridSearch_neural_network(X_train, y_train)
import pickle
pickle.dump(nnmodel, open('zillow_nn_model.pkl','wb'))
```
# Mittens simulations (section 2.3)
```
__author__ = 'Nick Dingwall and Christopher Potts'
%matplotlib inline
import random
from collections import defaultdict
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy.spatial.distance import euclidean
from mittens import Mittens
import utils
plt.style.use('mittens.mplstyle')
```
# Utilities
```
def get_random_count_matrix(n_words):
"""Returns a symmetric matrix where the entries are drawn from an
exponential distribution. The goal is to provide some structure
for GloVe to learn even with small vocabularies.
"""
base = np.random.exponential(3.0, size=(n_words, n_words)) / 2
return np.floor(base + base.T)
def get_random_embedding_lookup(embedding_dim, vocab, percentage_embedded=0.5):
"""Returns a dict from `percentage_embedded` of the words in
`vocab` to random embeddings with dimension `embedding_dim`.
We seek to make these representations look as much as possible
like the ones we create when initializing GloVe parameters.
"""
n_words = len(vocab)
val = np.sqrt(6.0 / (n_words + embedding_dim)) * 2.0
embed_size = int(n_words * percentage_embedded)
return {w: np.random.uniform(-val, val, size=embedding_dim)
for w in random.sample(vocab, embed_size)}
def distance_test(mittens, G, embedding_dict, verbose=False):
dists = defaultdict(list)
warm_start = mittens.G_start
warm_orig = mittens.sess.run(mittens.original_embedding)
for i in range(G.shape[0]):
if "w_{}".format(i) in embedding_dict:
init = warm_orig[i]
key = 'warm'
else:
init = warm_start[i]
key = 'no warm'
dist = euclidean(init, G[i])
dists[key].append(dist)
    warm_mean = np.mean(dists['warm'])
    no_warm_mean = np.mean(dists['no warm'])
    if verbose:
        # report the mean distance from initialization for warm-started vs randomly initialized vectors
        print("warm mean: {:.4f}, no warm mean: {:.4f}".format(warm_mean, no_warm_mean))
    return dists
```
# Simulation test for the paper
```
def simulations(n_trials=5, n_words=500, embedding_dim=50, max_iter=1000,
mus=[0.001, 0.1, 0.5, 0, 1, 5, 10]):
"""Runs the simulations described in the paper. For `n_trials`, we
* Generate a random count matrix
* Generate initial embeddings for half the vocabulary.
* For each of the specified `mus`:
* Run Mittens at `mu` for `max_iter` times.
* Assess the expected GloVe correlation between counts and
representation dot products.
* Get the mean distance from each vector to its initial
embedding, with the expectation that Mittens will keep
the learned embeddings closer on average, as governed
by `mu`.
The return value is a `pd.DataFrame` containing all the values
we need for the plots.
"""
data = []
vocab = ['w_{}'.format(i) for i in range(n_words)]
for trial in range(1, n_trials+1):
X = get_random_count_matrix(n_words)
embedding_dict = get_random_embedding_lookup(embedding_dim, vocab)
for mu in mus:
mittens = Mittens(n=embedding_dim, max_iter=max_iter, mittens=mu)
G = mittens.fit(X, vocab=vocab, initial_embedding_dict=embedding_dict)
correlations = utils.correlation_test(X, G)
dists = distance_test(mittens, G, embedding_dict)
d = {
'trial': trial,
'mu': mu,
'corr_log_cooccur': correlations['log_cooccur'],
'corr_prob': correlations['prob'],
'corr_pmi': correlations['pmi'],
'warm_distance_mean': np.mean(dists['warm']),
'no_warm_distance_mean': np.mean(dists['no warm'])
}
data.append(d)
return pd.DataFrame(data)
data_df = simulations()
```
# Correlation plot (figure 1a)
```
def get_corr_stats(vals, correlation_value='corr_prob'):
"""Helper function for `correlation_plot`: returns the mean
and lower confidence interval bound in the format that
pandas expects.
"""
mu = vals[correlation_value].mean()
lower, upper = utils.get_ci(vals[correlation_value])
return pd.DataFrame([{'mean': mu, 'err': mu-lower}])
def correlation_plot(data_df, correlation_value='corr_prob'):
"""Produces Figure 1a."""
corr_df = data_df.groupby('mu').apply(lambda x: get_corr_stats(x, correlation_value))
corr_df = corr_df.reset_index().sort_values("mu", ascending=False)
ax = corr_df.plot.barh(
x='mu', y='mean', xerr='err',
legend=False, color=['gray'],
lw=1, edgecolor='black')
ax.set_xlabel(r'Mean Pearson $\rho$')
ax.set_ylabel(r'$\mu$')
plt.savefig("correlations-{}.pdf".format(correlation_value), layout='tight')
correlation_plot(data_df, correlation_value='corr_log_cooccur')
correlation_plot(data_df, correlation_value='corr_prob')
correlation_plot(data_df, correlation_value='corr_pmi')
```
# Distances plot (figure 1b)
```
def get_dist_stats(x):
"""Helper function for `distance_plot`: returns the means
and lower confidence interval bounds in the format that
pandas expects.
"""
warm_mu = x['warm_distance_mean'].mean()
warm_err = warm_mu-utils.get_ci(x['warm_distance_mean'])[0]
no_warm_mu = x['no_warm_distance_mean'].mean()
no_warm_err = no_warm_mu-utils.get_ci(x['no_warm_distance_mean'])[0]
return pd.DataFrame([{
'pretrained initialization': warm_mu,
'pretrained initialization_ci': warm_err,
'random initialization': no_warm_mu,
'random initialization_ci': no_warm_err}])
def distance_plot(data_df):
"""Produces Figure 1b."""
cols = ['pretrained initialization', 'random initialization']
dist_df = data_df.groupby('mu').apply(get_dist_stats)
dist_df = dist_df.reset_index(level=1).sort_index(ascending=False)
err_df = dist_df[['pretrained initialization_ci', 'random initialization_ci']]
err_df.columns = cols
data_df = dist_df[['pretrained initialization', 'random initialization']]
ax = data_df.plot.barh(
color=['#0499CC', '#FFFFFF'],
xerr=err_df, lw=1, edgecolor='black')
ax.set_xlabel('Mean distance from initialization')
ax.set_ylabel(r'$\mu$')
legend = plt.legend(loc='center left', bbox_to_anchor=(0.4, 1.15))
plt.savefig("distances.pdf",
bbox_extra_artists=(legend,),
bbox_inches='tight')
distance_plot(data_df)
```
```
# load generated questions
import os
import json
import numpy as np
from collections import Counter, defaultdict
topics = defaultdict(list)
qas = defaultdict(list)
non_unique_questions = defaultdict(list)
squash_path = '/home/svakule/squash-generation/squash/temp/trec_cast19_paragraphs'
generated_questions = {}
for p_id in os.listdir(squash_path):
try:
with open(squash_path+'/%s/generated_questions.json' % p_id) as f:
paragraphs = json.load(f)['data'][0]['paragraphs']
generated_questions[p_id] = []
for i, p in enumerate(paragraphs):
for qa in p['qas']:
q = qa['question']#.lower()
generated_questions[p_id].append(q)
    except Exception:
        # skip passages with a missing or malformed generated_questions.json
        continue
print(len(generated_questions), 'passages with generated questions')
# filter only questions that have enough support ie relevant passages
# same as in TREC
min_passages_per_question = 3
doc_id = 0
documents = []
all_qs = []
for p_id, qs in generated_questions.items():
# index questions with sufficient support (non-unique within topic)
if len(qs) < min_passages_per_question:
continue
for q in qs:
# skip duplicate questions
if q in all_qs:
continue
documents.append({'id': doc_id,
'contents': q})
all_qs.append(q)
doc_id += 1
print(len(documents), 'unique generated questions')
```
# Index questions
```
import os
import json
anserini_folder = "/home/svakule/markers_bert/Anserini/"
json_files_path = "../results/questions_index/collection"
path_index = "/home/svakule/markers_bert/Anserini/indexes/squash_cast19"
if not os.path.isdir(json_files_path):
os.makedirs(json_files_path)
for i, doc in enumerate(documents):
with open(json_files_path+'/docs{:02d}.json'.format(i), 'w', encoding='utf-8', ) as f:
f.write(json.dumps(doc) + '\n')
# Run index java command
os.system("sh {}target/appassembler/bin/IndexCollection -collection JsonCollection" \
" -generator DefaultLuceneDocumentGenerator -threads 9 -input {}" \
" -index {} -storePositions -storeDocvectors -storeRaw". \
format(anserini_folder, json_files_path, path_index))
# test search
import json
from pyserini.search import SimpleSearcher
searcher = SimpleSearcher(path_index)
query_str = 'hubble space telescope'
num_candidates_samples = 3
hits = searcher.search(query_str, k=num_candidates_samples)
# sampled_initial = [ hit.raw for hit in searcher.search(query_str, k=num_candidates_samples)]
# print(sampled_initial)
# Print the first 3 hits:
for i in range(0, num_candidates_samples):
print(f'{hits[i].score:.5f}', json.loads(hits[i].raw)['contents'])
```
## Machine Learning - Breast Cancer Dataset Prediction
```
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
print("cancer.keys(): \n{}".format(cancer.keys()))
print(cancer.data)
cancer.data[0]
X = cancer.data
y = cancer.target
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X, y)
X_new = [[2.057e+01, 1.777e+01, 1.329e+02, 1.326e+03, 8.474e-02, 7.864e-02,
8.690e-02, 7.017e-02, 1.812e-01, 5.667e-02, 5.435e-01, 7.339e-01,
3.398e+00, 7.408e+01, 5.225e-03, 1.308e-02, 1.860e-02, 1.340e-02,
1.389e-02, 3.532e-03, 2.499e+01, 2.341e+01, 1.588e+02, 1.956e+03,
1.238e-01, 1.866e-01, 2.416e-01, 1.860e-01, 2.750e-01, 8.902e-02], [1.799e+01,1.228e+02, 1.038e+01, 1.001e+03, 1.184e-01, 2.776e-01,
3.001e-01, 2.419e-01, 1.471e-01, 7.871e-02, 1.095e+00, 9.053e-01,
8.589e+00, 1.534e+02, 6.399e-03, 4.904e-02, 5.373e-02, 1.587e-02,
3.003e-02, 6.193e-03, 2.538e+01, 1.733e+01, 1.846e+02, 2.019e+03,
1.622e-01, 6.656e-01, 7.119e-01 ,2.654e-01, 4.601e-01, 1.189e-01]]
knn.predict(X_new)
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X, y)
knn.predict(X_new)
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
logreg.fit(X, y)
logreg.predict(X_new)
y_pred = logreg.predict(X)
from sklearn import metrics
print(metrics.accuracy_score(y, y_pred))
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X, y)
y_pred = knn.predict(X)
print(metrics.accuracy_score(y, y_pred))
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X, y)
y_pred = knn.predict(X)
print(metrics.accuracy_score(y, y_pred))
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=4)
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
y_pred = logreg.predict(X_test)
print(metrics.accuracy_score(y_test, y_pred))
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print(metrics.accuracy_score(y_test, y_pred))
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print(metrics.accuracy_score(y_test, y_pred))
k_range = list(range(1, 26))
scores = []
for k in k_range:
knn = KNeighborsClassifier(n_neighbors=k)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
scores.append(metrics.accuracy_score(y_test, y_pred))
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(k_range, scores)
plt.xlabel('Value of K for KNN')
plt.ylabel('Testing Accuracy')
knn = KNeighborsClassifier(n_neighbors=11)
knn.fit(X, y)
knn.predict([[1.799e+01, 1.038e+01, 1.228e+02, 1.001e+03, 1.184e-01, 2.776e-01,
3.001e-01, 1.471e-01, 2.419e-01, 7.871e-02, 1.095e+00, 9.053e-01,
8.589e+00, 1.534e+02, 6.399e-03, 4.904e-02, 5.373e-02, 1.587e-02,
3.003e-02, 6.193e-03, 2.538e+01, 1.733e+01, 1.846e+02, 2.019e+03,
1.622e-01, 6.656e-01, 7.119e-01, 2.654e-01, 4.601e-01, 1.189e-01]])
```
___
<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
___
# Latent Dirichlet Allocation
## Data
We will be using articles from NPR (National Public Radio), obtained from their website [www.npr.org](http://www.npr.org)
```
import pandas as pd
npr = pd.read_csv('npr.csv')
npr.head()
```
Notice how we don't have the topic of the articles! Let's use LDA to attempt to figure out clusters of the articles.
## Preprocessing
```
from sklearn.feature_extraction.text import CountVectorizer
```
**`max_df`**` : float in range [0.0, 1.0] or int, default=1.0`<br>
When building the vocabulary ignore terms that have a document frequency strictly higher than the given threshold (corpus-specific stop words). If float, the parameter represents a proportion of documents, integer absolute counts. This parameter is ignored if vocabulary is not None.
**`min_df`**` : float in range [0.0, 1.0] or int, default=1`<br>
When building the vocabulary ignore terms that have a document frequency strictly lower than the given threshold. This value is also called cut-off in the literature. If float, the parameter represents a proportion of documents, integer absolute counts. This parameter is ignored if vocabulary is not None.
```
cv = CountVectorizer(max_df=0.95, min_df=2, stop_words='english')
dtm = cv.fit_transform(npr['Article'])
dtm
```
## LDA
```
from sklearn.decomposition import LatentDirichletAllocation
LDA = LatentDirichletAllocation(n_components=7,random_state=42)
# This can take awhile, we're dealing with a large amount of documents!
LDA.fit(dtm)
```
## Showing Stored Words
```
len(cv.get_feature_names())
import random
for i in range(10):
random_word_id = random.randint(0,54776)
print(cv.get_feature_names()[random_word_id])
for i in range(10):
random_word_id = random.randint(0,54776)
print(cv.get_feature_names()[random_word_id])
```
### Showing Top Words Per Topic
```
len(LDA.components_)
LDA.components_
len(LDA.components_[0])
single_topic = LDA.components_[0]
# Returns the indices that would sort this array.
single_topic.argsort()
# Word least representative of this topic
single_topic[18302]
# Word most representative of this topic
single_topic[42993]
# Top 10 words for this topic:
single_topic.argsort()[-10:]
top_word_indices = single_topic.argsort()[-10:]
for index in top_word_indices:
print(cv.get_feature_names()[index])
```
These look like business articles perhaps... Let's confirm by using .transform() on our vectorized articles to attach a label number. But first, let's view all 7 topics found.
```
for index,topic in enumerate(LDA.components_):
print(f'THE TOP 15 WORDS FOR TOPIC #{index}')
print([cv.get_feature_names()[i] for i in topic.argsort()[-15:]])
print('\n')
```
### Attaching Discovered Topic Labels to Original Articles
```
dtm
dtm.shape
len(npr)
topic_results = LDA.transform(dtm)
topic_results.shape
topic_results[0]
topic_results[0].round(2)
topic_results[0].argmax()
```
This means that our model thinks that the first article belongs to topic #1.
### Combining with Original Data
```
npr.head()
topic_results.argmax(axis=1)
npr['Topic'] = topic_results.argmax(axis=1)
npr.head(10)
```
## Great work!
# Task 1
```
import numpy as np
TRAINING_DATA_PATH = './data/toy-data-training.csv'
TESTING_DATA_PATH = './data/toy-data-testing.csv'
DATA_TYPE = [('id', 'i8'), ('v1', 'f8'), ('v2', 'f8'), ('v3', 'f8'), ('v4', 'f8'), ('y', 'S10')]
def weigh(node1, node2):
node1_value = np.array(list(node1)[1:-1])
node2_value = np.array(list(node2)[1:-1])
return 1 / (1 + np.linalg.norm(node1_value - node2_value))
train_data = np.genfromtxt(TRAINING_DATA_PATH, names=True, dtype=DATA_TYPE, delimiter=',')
test_data = np.genfromtxt(TESTING_DATA_PATH, names=True, dtype=DATA_TYPE, delimiter=',')
all_data = np.concatenate((train_data, test_data))
weight13 = []
for x in range(19):
if x == 12:
continue
weight13.append((x + 1, weigh(all_data[12], all_data[x])))
print('w({}, {})'.format(13, x), weigh(all_data[12], all_data[x]))
print(sorted(weight13, key=lambda x: x[1])[-3:])
```
So we can see that the three nodes with the strongest connections to node 13 are nodes 15, 8, and 3. Nodes 8 and 3 are already labeled, so we examine node 15 to see which class it belongs to:
```
weight15 = []
for x in range(19):
if x == 14:
continue
weight15.append((x + 1, weigh(all_data[14], all_data[x])))
print('w({}, {})'.format(15, x), weigh(all_data[14], all_data[x]))
print(sorted(weight15, key=lambda x: x[1])[-3:])
```
From this we can see that node 15 is most strongly connected to nodes 4, 1, and 2. Since nodes 1 and 2 are blue, we can conclude that node 15 is blue.
Returning to node 13: since nodes 3 and 15 are blue, node 13 is blue as well. A short sketch that automates this neighbor-vote reasoning follows below.
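The same neighbor-vote reasoning can be automated. The sketch below is only an illustration, not part of the assignment solution: it assumes the `all_data` structured array, the `weigh` function and the `y` label field defined above, looks at each unlabeled node's three highest-weight neighbors among all other nodes, and assigns the majority label once those neighbors are themselves labeled.
```
def propagate_labels(all_data, n_train, k=3):
    """Label test nodes the way the manual analysis above does: look at the k
    highest-weight neighbors among all other nodes and assign the majority
    label once all of those neighbors have labels themselves.
    Indices are 0-based (e.g. node 13 is index 12)."""
    labels = {i: all_data[i]['y'] for i in range(n_train)}       # known labels
    unlabeled = set(range(n_train, len(all_data)))
    changed = True
    while unlabeled and changed:
        changed = False
        for i in list(unlabeled):
            others = [j for j in range(len(all_data)) if j != i]
            top_k = sorted(others, key=lambda j: weigh(all_data[i], all_data[j]))[-k:]
            if all(j in labels for j in top_k):                  # neighbors resolved?
                votes = [labels[j] for j in top_k]
                labels[i] = max(set(votes), key=votes.count)     # majority vote (ties arbitrary)
                unlabeled.discard(i)
                changed = True
    return labels

# Example: propagate_labels(all_data, n_train=len(train_data))
```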
# Task 2
## Uncertainty sampling
In uncertainty sampling, we try to select the most informative data points, i.e. those whose predicted label distribution has the highest entropy and therefore the highest uncertainty.
For an SVM, according to the geometric interpretation of the support vectors and SVM theory, such points are the ones close to the separating hyperplane and the two margin boundaries [1].
## Expected error reduction
In the expected error reduction strategy, instead of maximizing the uncertainty of the queried instance as in uncertainty sampling, we try to minimize the expected uncertainty (error) over the remaining instances. In other words, the two approaches are different but share the same motivation and have a similar effect: the instances left unqueried are the ones furthest from the SVM hyperplane, and the instances selected for querying are the ones closest to the hyperplane, i.e. candidate support vectors. A minimal sketch of this selection criterion follows below.
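As a concrete illustration of this selection criterion, here is a minimal, hypothetical sketch of pool-based uncertainty sampling with a linear SVM in scikit-learn: the unlabeled point with the smallest absolute decision-function value (the one closest to the separating hyperplane) is the one queried next. The function and variable names are illustrative only and are not taken from the assignment.
```
import numpy as np
from sklearn.svm import SVC

def query_most_uncertain(X_labeled, y_labeled, X_pool):
    """Return the index of the pool point closest to the SVM hyperplane."""
    clf = SVC(kernel='linear')
    clf.fit(X_labeled, y_labeled)
    # |decision_function| grows with the distance from the hyperplane, so the
    # smallest absolute value corresponds to the most uncertain prediction.
    margins = np.abs(clf.decision_function(X_pool))
    return int(np.argmin(margins))

# Example usage (random data; assumes both classes appear in y_labeled):
# rng = np.random.RandomState(0)
# X_lab, y_lab = rng.randn(20, 2), rng.randint(0, 2, 20)
# idx = query_most_uncertain(X_lab, y_lab, rng.randn(100, 2))
```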
# Task 5
- I would choose a transductive SVM. The kernel trick can be used to project the data into a higher-dimensional space so that it becomes easier to separate (slide ADA part I, page 42/63); see the sketch after this list.
- The EM algorithm is not a good fit for concentric-ring data.
- A graph-based approach to semi-supervised classification might work, but if the two rings are too close to each other it can break down.
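To illustrate the first point, below is a minimal sketch of my own (not part of the assignment) using scikit-learn's synthetic concentric-ring generator. Scikit-learn does not ship a transductive SVM, so the sketch only demonstrates the kernel-trick half of the argument with ordinary supervised SVMs: an RBF kernel separates the rings where a linear one cannot.
```
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two concentric rings, one per class
X, y = make_circles(n_samples=500, factor=0.4, noise=0.05, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear_svm = SVC(kernel='linear').fit(X_tr, y_tr)
rbf_svm = SVC(kernel='rbf').fit(X_tr, y_tr)   # kernel trick: implicit higher-dimensional mapping

print('linear kernel accuracy:', linear_svm.score(X_te, y_te))   # typically near chance level
print('rbf kernel accuracy:   ', rbf_svm.score(X_te, y_te))      # typically close to 1.0
```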
# Task 6
- Linear SVMs are stable classifiers, which makes them unsuitable for bagging [2]; poor predictors can even be transformed into worse ones by bagging [3].
- Kernel SVMs perform better with bagging [2] (a brief sketch follows this list).
- Boosting and the random subspace method (RSM) can also be advantageous for linear classifiers [4], so they should work well with linear SVMs.
- Boosting also works well with kernel SVMs [5].
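As a rough illustration of the claims about kernel SVMs and bagging, the sketch below (again my own, with an arbitrary synthetic dataset and placeholder hyperparameters, not taken from the cited papers) compares a single RBF SVM with a bagged ensemble of RBF SVMs using scikit-learn's generic `BaggingClassifier`; whether the ensemble actually helps depends on the data and settings.
```
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, n_informative=10, random_state=0)

single_svm = SVC(kernel='rbf')
bagged_svm = BaggingClassifier(SVC(kernel='rbf'), n_estimators=25,
                               max_samples=0.7, random_state=0)

print('single RBF SVM :', cross_val_score(single_svm, X, y, cv=5).mean())
print('bagged RBF SVMs:', cross_val_score(bagged_svm, X, y, cv=5).mean())
```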
[1] [An Uncertainty Sampling-based Active Learning Approach for Support Vector Machines](https://ieeexplore-ieee-org.ezproxy.uef.fi:2443/stamp/stamp.jsp?tp=&arnumber=5376609)
[2] [Demonstrating the Stability of Support Vector Machines for Classification](http://ikee.lib.auth.gr/record/115086/files/Buciu_i06a.pdf)
[3] [Bagging, Boosting, and C4.5](http://home.eng.iastate.edu/~julied/classes/ee547/Handouts/q.aaai96.pdf)
[4] [Bagging, Boosting and the Random Subspace Method for Linear Classifiers](http://rduin.nl/papers/paa_02_bagging.pdf)
[5] [From Kernel Machines to Ensemble Learning](https://arxiv.org/pdf/1401.0767.pdf)
### DATA SCIENCE: ANALYSIS, MINING AND VISUALIZATION
## UNIT 5 PRACTICAL CASE 2
### DATA
A very common type of data in data science is the time series. One of the special characteristics of time series is that a single variable, time, can give rise to many dimensions, such as the day of the week, the hour of the day, and so on.
In this practical case we analyze a series of traffic data for the city of Madrid, available in the file provided. It is a somewhat larger dataset than in other practical cases (eleven million records), which is not enough to cause significant performance problems, but it is enough to make it necessary to apply data summarization techniques systematically.
As in previous cases, the student is encouraged to continue the analysis of the data beyond what is requested here, since this is a very rich dataset that easily lends itself to making readily interpretable discoveries.
## ASSIGNMENT
Starting from the following dataset:
1. Import the traffic data provided and load it into a Pandas dataframe, with the appropriate types and performing basic data cleaning.
2. For the data corresponding to the M30 (the Madrid ring road), compare the average speed and traffic intensity values between different measurement points.
3. Using typical time-series comparison metrics, analyze the similarities and differences in daily traffic congestion patterns between the different days of the week (see the sketch after this list for one possible approach).
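As a preview of one way point 3 could be approached (a sketch only: the function names are mine, and it assumes the M30 subset `series` built later in this notebook, with a datetime column `fecha` and an `intensidad` column), two typical comparisons are the Euclidean distance and the Pearson correlation between the average daily profiles of each weekday.
```
import numpy as np
import pandas as pd

def daily_profiles(df):
    """Mean intensity per (weekday, time of day): one 96-point profile per weekday."""
    out = df.groupby([df['fecha'].dt.dayofweek, df['fecha'].dt.time])['intensidad'].mean()
    return out.unstack(level=0)              # rows: time of day, columns: weekday 0-6

def compare_weekdays(profiles):
    """Pairwise Euclidean distance and Pearson correlation between weekday profiles."""
    profiles = profiles.dropna()             # drop time slots missing for some weekday
    days = profiles.columns
    dist = pd.DataFrame(index=days, columns=days, dtype=float)
    for a in days:
        for b in days:
            dist.loc[a, b] = np.linalg.norm(profiles[a] - profiles[b])
    return dist, profiles.corr()             # distance matrix, correlation matrix

# dist, corr = compare_weekdays(daily_profiles(series))
```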
## DATA WRANGLING
Eleven million records make the file awkward to view (at least on a normal computer screen). Let us start by reading the file into a Pandas dataframe and try to work out how best to manipulate the data.
```
# Read the file
import pandas as pd
data = pd.read_csv('11-2019.csv')
data.head()
```
The format is the typical Spanish one, where `;` is used as the separator instead of `,`. No accented characters appear, so we do not need to worry about the encoding. Let us read the file again, this time with the other separator.
```
# Read the file with the Spanish-style separator
data = pd.read_csv('11-2019.csv', sep = ';')
data.head()
data.info()
```
The first thing that stands out is that the file has a variable called `fecha` that Python reads as an `object` primitive. We will have to cast it as a date.
```
# The following code courtesy of Stack Overflow
# https://stackoverflow.com/questions/38333954/converting-object-to-datetime-format-in-python
# by https://stackoverflow.com/users/509824/alberto-garcia-raboso
data['fecha'] = pd.to_datetime(data['fecha'])
data.info()
```
Let us look at the range of the first variable, `id`.
```
data['id'].describe()
```
Honestly, `id` looks like some kind of identifier, but with eleven million records it is hard to see what it refers to. Let us find and count the unique values to see whether that helps.
```
data['id'].value_counts()
```
Again, the usual solution does not tell us much. Let us use a solution proposed on Stack Overflow.
```
# Solution proposed on Stack Overflow
# https://stackoverflow.com/questions/34178751/extract-unique-
# values-and-number-of-occurrences-of-each-value-from-dataframe-col
# by https://stackoverflow.com/users/925592/rufusvs
from collections import Counter
Counter(data['id'])
```
At first glance we see that, with some exceptions, there are on average 2,880 measurements for each `id`. The interesting thing is to see how many days the time series spans.
```
data['fecha'].describe()
```
This time the describe command gives us a more complete description. There are 2,880 unique timestamps spanning from 1 November 2019 to 30 November 2019, i.e. one measurement every fifteen minutes, for 96 measurements per day (24 hours x 4 measurements per hour = 96, and 96 x 30 days = 2,880).
We continue with the `tipo_elem` variable, which seems to contain the names of streets or roads.
```
print(data['tipo_elem'].unique())
```
Interestingly, there are only two descriptions: 'M30' and 'URB'. Apparently 'M30' is the Autopista de Circunvalación M-30 (the M-30 ring road) and 'URB' stands for 'URBANO' (urban). These data come from the Portal de datos abiertos del Ayuntamiento de Madrid (Madrid City Council open data portal), where we can also download a leaflet that explains each field.
| Name | Type | Description |
|---------------------|---------|-------------|
| idelem | integer | Unique identifier of the measurement point in the traffic control systems of the Madrid City Council |
| fecha | date | Official Madrid date and time in the format yyyy-mm-dd hh:mi:ss |
| identif | text | Identifier of the measurement point in the traffic systems (provided for backwards compatibility). |
| tipo_elem | text | Name of the measurement point type: Urban or M30. |
| intensidad | integer | Traffic intensity at the measurement point over the 15-minute period (vehicles/hour). A negative value implies missing data. |
| ocupacion | integer | Occupancy time of the measurement point over the 15-minute period (%). A negative value implies missing data. |
| carga | integer | Vehicle load over the 15-minute period. A parameter that takes into account intensity, occupancy and road capacity, and expresses the degree of use of the road from 0 to 100. A negative value implies missing data. |
| vmed | integer | Average vehicle speed over the 15-minute period (km/h). Only for M30 interurban measurement points. A negative value implies missing data. |
| error | text | Indicates whether at least one sample was erroneous or substituted in the 15-minute period. N: there were no errors or substitutions. E: the quality parameters of some of the integrated samples are not optimal. S: one of the received samples was completely erroneous and was not integrated. |
| periodo_integracion | integer | Number of samples received and considered for the integration period. |
Note that the `identif` field no longer exists in our dataset. We continue with the analysis of the `intensidad` variable.
```
data['intensidad'].describe()
```
This function does not show us much. Perhaps a visualization will give us a better picture of the range these measurements cover.
```
import matplotlib.pyplot as plt
series = data[data.tipo_elem == 'M30']['intensidad'].head(100)
series.plot(style = 'k.')
```
Let us also analyze the `ocupacion` variable, which _"... defines the occupancy time of the measurement point over the 15-minute period (%). A negative value implies missing data..."_
```
data.ocupacion.describe()
series = data[data.tipo_elem == 'M30']['ocupacion'].head(100)
series.plot(style = 'k-')
```
One problem that becomes evident in these plots is the lack of dates on the x-axis. We should transform the dataset into a proper time series.
```
# Lectura del archivo español como serie de tiempos
ts = pd.read_csv('11-2019.csv', sep = ';', index_col = 1)
ts.head()
series = ts[ts.tipo_elem == 'M30']['ocupacion'].head(100)
series.plot(style = 'k-')
plt.xticks(rotation=45)
plt.show()
# La misma visualización pero en Seaborn usando la data original
import seaborn as sns
sns.set(style="darkgrid")
sns.lineplot(x = 'fecha', y = 'ocupacion', data = data[data.tipo_elem == 'M30'].head(100))
plt.xticks(rotation=45)
plt.show()
```
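As a side note, instead of re-reading the CSV we could also promote the existing `fecha` column to the index; a minimal sketch (not what the notebook does above) that keeps the parsed datetimes:
```
# Alternative sketch: index the existing dataframe by fecha instead of re-reading the file
ts_alt = data.set_index('fecha').sort_index()
ts_alt[ts_alt.tipo_elem == 'M30']['ocupacion'].head(100).plot(style='k-')
plt.xticks(rotation=45)
plt.show()
```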
## M30, AVERAGE SPEED AND TRAFFIC INTENSITY
The second task of the case study is, for the data corresponding to the M30 (the ring road around the city of Madrid), to compare average speed and traffic intensity across measurement points.
The first thing to do is to restrict the dataset to the records that correspond to the M30. Let us also delete the `data` and `ts` dataframes, which take up valuable memory.
```
# keep an independent copy so we can add columns later without a SettingWithCopyWarning
series = data[data.tipo_elem == 'M30'].copy()
del data
del ts
import gc
gc.collect()
```
We have kept the version of the data where the date is a regular column rather than the index. Can we group any variable by date, for example by day of the week? Ted Petrou gives a good example in https://stackoverflow.com/questions/11391969/how-to-group-pandas-dataframe-entries-by-date-in-a-non-unique-column
```
# Solución por Stack Overflow
# https://stackoverflow.com/questions/11391969/how-to-group-pandas-dataframe-entries-by-date-in-a-non-unique-column
series.groupby(series['fecha'].dt.dayofweek)['intensidad'].agg(['min', 'mean', 'max'])
```
To interpret the result correctly, we can check the documentation of the `dayofweek` method.
> Return the day of the week. It is assumed the week starts on Monday, which is denoted by 0 and ends on Sunday which is denoted by 6. This method is available on both Series with datetime values (using the dt accessor) or DatetimeIndex.
Traffic intensity stays fairly stable from Monday to Thursday, drops slightly on Friday, and falls by roughly 40% on Saturday and Sunday. This is not the most intuitive way to read the data, however. Let us try to build a stronger visualization that conveys all of this.
```
bar = series.groupby(series['fecha'].dt.dayofweek)['intensidad'].agg(['min', 'mean', 'max'])
intensidad = bar['mean'].tolist()
foo = series.groupby(series['fecha'].dt.dayofweek)['vmed'].agg(['min', 'mean', 'max'])
velocidad = foo['mean'].tolist()
velocidad_max = foo['max'].tolist()
data = {'Dia' : ['LUN', 'MAR', 'MIE', 'JUE', 'VIE', 'SAB', 'DOM'],
'Intensidad Promedio' : intensidad,
'Velocidad Promedio' : velocidad,
'Velocidad Máxima' : velocidad_max}
chart_intensidad = pd.DataFrame(data)
print(chart_intensidad)
```
As an information table it is not bad, but we can improve it by adding the spread of each variable. Let us also round to a single decimal place.
```
bar = series.groupby(series['fecha'].dt.dayofweek)['intensidad'].agg(['std', 'min', 'mean', 'max'])
foo = series.groupby(series['fecha'].dt.dayofweek)['vmed'].agg(['std', 'min', 'mean', 'max'])
intensidad_promedio = round(bar['mean'],1)
intensidad_maxima = round(bar['max'], 1)
intensidad_std = round(bar['std'], 1)
velocidad_promedio = round(foo['mean'],1)
velocidad_maxima = round(foo['max'], 1)
velocidad_std = round(foo['std'], 1)
data = {'Dia' : ['LUN', 'MAR', 'MIE', 'JUE', 'VIE', 'SAB', 'DOM'],
'Intensidad Promedio' : intensidad_promedio,
'Intensidad Máxima' : intensidad_maxima,
'Intensidad STD' : intensidad_std,
'Velocidad Promedio' : velocidad_promedio,
'Velocidad Máxima' : velocidad_maxima,
'Velocidad STD' : velocidad_std}
chart_intensidad = pd.DataFrame(data)
# Omitir índice del dataframe
# https://stackoverflow.com/questions/24644656/how-to-print-pandas-dataframe-without-index
print(chart_intensidad.to_string(index=False))
from IPython.display import display, HTML
display(chart_intensidad)
```
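The same per-day summary can be produced more compactly; a hedged sketch using pandas named aggregations and `round` (it assumes all seven days of the week appear in the data, which holds for November 2019):
```
# Sketch: build the per-day summary table in one chained expression
resumen = (series.groupby(series['fecha'].dt.dayofweek)
                 .agg(intensidad_promedio=('intensidad', 'mean'),
                      intensidad_maxima=('intensidad', 'max'),
                      intensidad_std=('intensidad', 'std'),
                      velocidad_promedio=('vmed', 'mean'),
                      velocidad_maxima=('vmed', 'max'),
                      velocidad_std=('vmed', 'std'))
                 .round(1))
# relabel 0-6 as day abbreviations (assumes all seven days are present)
resumen.index = ['LUN', 'MAR', 'MIE', 'JUE', 'VIE', 'SAB', 'DOM']
display(resumen)
```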
## Analyzing the Traffic Load
Above we looked at intensity and speed patterns grouped by day, but we have not yet looked at congestion, which is recorded in the `carga` variable.
```
carga = series.groupby(series['fecha'].dt.dayofweek)['carga'].agg(['std', 'min', 'mean', 'max'])
carga_minimo = round(carga['min'],1)
carga_promedio = round(carga['mean'],1)
carga_maximo = round(carga['max'], 1)
carga_dat = {
'Dia' : ['LUN', 'MAR', 'MIE', 'JUE', 'VIE', 'SAB', 'DOM'],
'Mínimo' : carga_minimo,
'Promedio' : carga_promedio,
'Máximo' : carga_maximo}
carga_chart = pd.DataFrame(carga_dat)
display(carga_chart)
```
Not bad, but we were hoping for much more. We would like to condense the information by day and by hour, with one value for each of the 24 hours of the day, so that we obtain a matrix with very interesting data to evaluate. The data comes in 15-minute intervals, however, which means there are 96 data points per day, not 24. Which of the two granularities can we use for now that lets us group the data simply, without defining complex methods?
```
# Agregar una columna al juego de datos que identifique el dia de la semana para filtrar mejor
series['dia'] = series['fecha'].dt.dayofweek
lunes = series[series['dia'] == 0]
carga = lunes.groupby(lunes['fecha'].dt.hour)['carga'].agg(['std', 'min', 'mean', 'max'])
display(carga)
```
Let us automate the process by generating a matrix with 24 columns, one for each hour of the day, and 7 rows, one for each day of the week. This large dataframe is important for visualizing the load flow properly.
```
import numpy as np
matriz_carga = np.zeros((7,24))
for i in range(0,7):
datos_temporal = series[series['dia'] == i]
carga = datos_temporal.groupby(datos_temporal['fecha'].dt.hour)['carga'].agg(['mean'])
for j in range(0,24):
matriz_carga[i, j] = round(carga['mean'][j],1)
temp = {
'HORA' : ['0:00', '1:00', '2:00', '3:00', '4:00', '5:00', '6:00',
'7:00', '8:00', '9:00', '10:00', '11:00', '12:00', '13:00',
'14:00', '15:00', '16:00', '17:00', '18:00', '19:00', '20:00',
'21:00', '22:00', '23:00'],
'LUNES' : matriz_carga[0,:].tolist(),
'MARTES' : matriz_carga[1,:].tolist(),
'MIERCOLES': matriz_carga[2,:].tolist(),
'JUEVES' : matriz_carga[3,:].tolist(),
'VIERNES' : matriz_carga[4,:].tolist(),
'SABADO' : matriz_carga[5,:].tolist(),
'DOMINGO' : matriz_carga[6,:].tolist()
}
df = pd.DataFrame(temp)
df.set_index('HORA', inplace = True)
display(df)
```
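The same 7×24 matrix can be built more concisely; a hedged sketch with a single groupby over day of week and hour followed by `unstack` (days end up as rows and hours as columns, i.e. the transpose of `df` above, and it assumes every day/hour combination has at least one observation):
```
# Sketch: day-of-week x hour matrix of mean load in one groupby
matriz_alt = (series.groupby([series['fecha'].dt.dayofweek, series['fecha'].dt.hour])['carga']
                    .mean()
                    .unstack()
                    .round(1))
matriz_alt.index = ['LUNES', 'MARTES', 'MIERCOLES', 'JUEVES', 'VIERNES', 'SABADO', 'DOMINGO']
display(matriz_alt)
```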
For a matrix structure like the one above, where what we want to see is the load pattern of the M30, the best choice is a heat map, or __heatmap__. Conveniently, the _Seaborn_ library offers very useful plots for exactly this.
```
# Code from Stack Overflow to obtain maximum of multiple columns
# https://stackoverflow.com/questions/12169170/find-the-max-of-two-or-more-columns-with-pandas
temp_max_load = df[['LUNES', 'MARTES', 'MIERCOLES', 'JUEVES', 'VIERNES', 'SABADO', 'DOMINGO']].max(axis=1)
maximum_load = max(temp_max_load)
plt.figure(figsize=(8, 8))
sns.heatmap(df, vmin=0, vmax=maximum_load, cmap="coolwarm", annot = True).set_title('ANALISIS DE CARGA PROMEDIO DE TRAFICO POR HORA \n EN LA AUTOPISTA M30')
```
An interesting challenge is to reshape the data structure so it can be used in a __boxplot__. After much experimentation (using `stack()` turns out to be fairly _hacky_ when it comes to naming the value columns, and a lot of visualization power is lost), we make a first attempt with the `melt()` function. The tutorial is by _Soner Yıldırım_, _"Reshaping Pandas DataFrames - Melt, Stack and Pivot functions"_, at https://towardsdatascience.com/reshaping-pandas-dataframes-9812b3c1270e.
```
df3 = df.melt()
df3.head()
plt.figure(figsize=(8, 8))
sns.boxplot(y='value', x='variable', data=df3, palette="Blues").set_title('ANALISIS DE CARGA PROMEDIO DE TRAFICO POR DIA \n EN LA AUTOPISTA M30')
```
The odd thing about how `melt()` worked is that the hours variable was lost. I suspect this is a side effect of having the __dataframe__ indexed by the hour column. Perhaps it is better to create a new dataframe without that index before _melting_.
```
temp = {
'HORA' : ['0:00', '1:00', '2:00', '3:00', '4:00', '5:00', '6:00',
'7:00', '8:00', '9:00', '10:00', '11:00', '12:00', '13:00',
'14:00', '15:00', '16:00', '17:00', '18:00', '19:00', '20:00',
'21:00', '22:00', '23:00'],
'LUNES' : matriz_carga[0,:].tolist(),
'MARTES' : matriz_carga[1,:].tolist(),
'MIERCOLES': matriz_carga[2,:].tolist(),
'JUEVES' : matriz_carga[3,:].tolist(),
'VIERNES' : matriz_carga[4,:].tolist(),
'SABADO' : matriz_carga[5,:].tolist(),
'DOMINGO' : matriz_carga[6,:].tolist()
}
df5 = pd.DataFrame(temp)
display(df5)
df6 = df5.melt(id_vars=['HORA'])
df6.head()
df6.info()
plt.figure(figsize=(16, 8))
sns.boxplot(y='value', x='HORA', data=df6, palette="YlGnBu").set_title('ANALISIS DE CARGA PROMEDIO DE TRAFICO POR HORA \n EN LA AUTOPISTA M30')
```
Is this a better structure than the first one simply because we removed the __HORA__ index? Let us try to replicate the per-day plot and see.
```
plt.figure(figsize=(12, 8))
sns.boxplot(y='value', x='variable', data=df6, palette="Greens").set_title('ANALISIS DE CARGA PROMEDIO DE TRAFICO POR DIA \n EN LA AUTOPISTA M30')
```
Let us clean up this valuable __dataframe__ obtained with `melt()`, giving the variables and columns better names. Whether it can feed a heatmap is another matter: for now the answer is no, because HORA is not an index and _Seaborn_ does not recognize this format, but there must be a way to do it.
```
analisis_carga_M30 = df5.melt(id_vars=['HORA'], var_name = 'DIA', value_name = 'CARGA').sort_values(by='HORA').reset_index(drop=True)
analisis_carga_M30.info()
```
The golden question is: can we replicate the hour-by-day heat map with this format? For now the answer is no, since the dataframe is not indexed by __HORA__ and _Seaborn_'s heatmap cannot lay out the plot from the long format as-is.
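One way around this (a sketch, assuming the `analisis_carga_M30` dataframe and the `maximum_load` value computed above) is to pivot the long table back to a wide HORA × DIA matrix, which `sns.heatmap` does accept:
```
# Sketch: pivot the melted dataframe back to wide format for the heatmap
carga_wide = analisis_carga_M30.pivot(index='HORA', columns='DIA', values='CARGA')
# note: HORA is a string, so rows appear in lexicographic order unless reindexed
plt.figure(figsize=(8, 8))
sns.heatmap(carga_wide, vmin=0, vmax=maximum_load, cmap="coolwarm", annot=True)
plt.show()
```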
# 1. Parameters
```
# Defaults
simulation_dir = 'simulations/unset'
reference_file = 'simulations/reference/reference.fa.gz'
iterations = 3
mincov = 10
ncores = 32
# Parameters
read_coverage = 30
mincov = 10
simulation_dir = "simulations/alpha-10-cov-30"
iterations = 3
sub_alpha = 10
from pathlib import Path
import imp
fp, pathname, description = imp.find_module('gdi_benchmark', ['../../lib'])
gdi_benchmark = imp.load_module('gdi_benchmark', fp, pathname, description)
simulation_dir_path = Path(simulation_dir)
case_name = str(simulation_dir_path.name)
reads_dir = simulation_dir_path / 'simulated_data' / 'reads'
assemblies_dir = simulation_dir_path / 'simulated_data' / 'assemblies'
index_reads_path = simulation_dir_path / 'index-reads'
index_assemblies_path = simulation_dir_path / 'index-assemblies'
output_reads_tree = index_reads_path / 'reads.tre'
output_assemblies_tree = index_assemblies_path / 'assemblies.tre'
reference_name = Path(reference_file).name.split('.')[0]
```
# 2. Index genomes
```
!gdi --version
```
## 2.1. Index reads
```
input_genomes_file = simulation_dir_path / 'input-reads.tsv'
!gdi input --absolute {reads_dir}/*.fq.gz > {input_genomes_file}
results_handler = gdi_benchmark.BenchmarkResultsHandler(name=f'{case_name} reads')
benchmarker = gdi_benchmark.IndexBenchmarker(benchmark_results_handler=results_handler,
index_path=index_reads_path, input_files_file=input_genomes_file,
reference_file=reference_file, mincov=mincov, build_tree=True,
ncores=ncores)
benchmark_df = benchmarker.benchmark(iterations=iterations)
benchmark_df
index_reads_runtime = simulation_dir_path / 'reads-index-info.tsv'
benchmark_df.to_csv(index_reads_runtime, sep='\t', index=False)
```
## 2.2. Index assemblies
```
input_genomes_file = simulation_dir_path / 'input-assemblies.tsv'
!gdi input --absolute {assemblies_dir}/*.fa.gz > {input_genomes_file}
results_handler = gdi_benchmark.BenchmarkResultsHandler(name=f'{case_name} assemblies')
benchmarker = gdi_benchmark.IndexBenchmarker(benchmark_results_handler=results_handler,
index_path=index_assemblies_path, input_files_file=input_genomes_file,
reference_file=reference_file, mincov=mincov, build_tree=True,
ncores=ncores)
benchmark_df = benchmarker.benchmark(iterations=iterations)
benchmark_df
index_assemblies_runtime = simulation_dir_path / 'assemblies-index-info.tsv'
benchmark_df.to_csv(index_assemblies_runtime, sep='\t', index=False)
```
# 3. Export trees
```
!gdi --project-dir {index_assemblies_path} export tree {reference_name} > {output_assemblies_tree}
print(f'Wrote assemblies tree to {output_assemblies_tree}')
!gdi --project-dir {index_reads_path} export tree {reference_name} > {output_reads_tree}
print(f'Wrote reads tree to {output_reads_tree}')
```
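To sanity-check the exported trees we could load them back in; a minimal sketch assuming the `.tre` files are in Newick format and that Biopython is available (neither is guaranteed by the notebook above):
```
# Hypothetical check: parse both exported trees and count their leaves
from Bio import Phylo
for tree_path in [output_reads_tree, output_assemblies_tree]:
    tree = Phylo.read(str(tree_path), 'newick')
    print(f'{tree_path}: {tree.count_terminals()} leaves')
```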
```
# https://www.kaggle.com/c/generative-dog-images/discussion/104281
# https://machinelearningmastery.com/a-gentle-introduction-to-the-biggan/
# https://machinelearningmastery.com/how-to-develop-a-conditional-generative-adversarial-network-from-scratch/
# https://github.com/taki0112/BigGAN-Tensorflow/blob/master/BigGAN_128.py
import sys
import numpy as np
import os
import io
import cv2
import glob
import handshape_datasets as hd
import tensorflow as tf
import tensorflow_datasets as tfds
from densenet import densenet_model
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from sklearn.utils.class_weight import compute_class_weight
from datetime import datetime
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import time
from tensorflow.keras import Model
from tensorflow.keras.layers import Input, ZeroPadding2D, Dense, Dropout, Activation, Reshape, Flatten
from tensorflow.keras.layers import AveragePooling2D, GlobalAveragePooling2D, MaxPooling2D, BatchNormalization
from tensorflow.keras.layers import Conv2DTranspose, LeakyReLU, Conv2D, Convolution2D, Softmax, Embedding, Multiply, Add, UpSampling2D
from tensorflow.keras.activations import tanh
from tensorflow.keras.backend import ndim
from tensorflow.keras.mixed_precision import experimental as mixed_precision
from tensorflow.nn import relu
import tensorflow_probability as tfp
tfd = tfp.distributions
import pandas as pd
import seaborn as sns
%matplotlib inline
import matplotlib.pyplot as plt
from PIL import Image
from pathlib import Path
from sklearn.model_selection import train_test_split
tf.__version__
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
try:
# Currently, memory growth needs to be the same across GPUs
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
logical_gpus = tf.config.experimental.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Memory growth must be set before GPUs have been initialized
print(e)
print(tf.config.experimental.list_logical_devices('GPU'))
tf.test.is_gpu_available()
# set up policy used in mixed precision
policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_policy(policy)
# hyperparameters
# data
rotation_range = 10
width_shift_range = 0.1
height_shift_range = 0.1
horizontal_flip = True
vertical_flip = False
shear_range = 0
zoom_range = 0.1
z_dim = 128
# training
g_lr = 5e-5
g_adam_b1 = 0.0
g_adam_b2 = 0.999
g_adam_eps=1e-8
d_steps = 2
d_lr = 2e-4
d_adam_b1 = 0.0
d_adam_b2 = 0.999
d_adam_eps=1e-8
epochs = 200
min_loss = 25
min_loss_acc = 0
batch_size = 128
noise_dim = 256
# log
log_freq = 1
models_directory = 'results/models/'
date = datetime.now().strftime("%Y_%m_%d-%H:%M:%S")
identifier = "simple_gan-" + date
dataset_name = 'rwth'
path = '/tf/data/{}'.format(dataset_name)
data_dir = os.path.join(path, 'data')
if not os.path.exists(data_dir):
os.makedirs(data_dir)
data = hd.load(dataset_name, Path(data_dir))
good_min = 40
good_classes = []
n_unique = len(np.unique(data[1]['y']))
for i in range(n_unique):
images = data[0][np.equal(i, data[1]['y'])]
if len(images) >= good_min:
good_classes = good_classes + [i]
x = data[0][np.in1d(data[1]['y'], good_classes)]
img_shape = x[0].shape
print(img_shape)
x = tf.image.resize(x, [128, 96]).numpy()
img_shape = x[0].shape
print(img_shape)
y = data[1]['y'][np.in1d(data[1]['y'], good_classes)]
y_dict = dict(zip(np.unique(y), range(len(np.unique(y)))))
y = np.vectorize(y_dict.get)(y)
x_train, x_test, y_train, y_test = train_test_split(
x, y, train_size=0.8, test_size=0.2, stratify=y)
classes = np.unique(y_train)
n_classes = len(classes)
train_size = x_train.shape[0]
test_size = x_test.shape[0]
img_shape
datagen = ImageDataGenerator(
featurewise_center=True,
featurewise_std_normalization=True,
rotation_range=rotation_range,
width_shift_range=width_shift_range,
height_shift_range=height_shift_range,
horizontal_flip=horizontal_flip,
vertical_flip = vertical_flip,
shear_range=shear_range,
zoom_range=zoom_range,
fill_mode='constant',
cval=0,
)
datagen.fit(x_train)
test_datagen = ImageDataGenerator(
featurewise_center=True,
featurewise_std_normalization=True,
)
test_datagen.fit(x_train)
# create data generators
train_gen = datagen.flow(x_train, y_train, batch_size=batch_size)
test_gen = test_datagen.flow(x_test, y_test , batch_size=batch_size, shuffle=False)
for images, _ in train_gen:
plt.imshow(images[0])
break
# https://github.com/thisisiron/spectral_normalization-tf2
class SpectralNormalization(tf.keras.layers.Wrapper):
def __init__(self, layer, iteration=1, eps=1e-12, training=True, **kwargs):
self.iteration = iteration
self.eps = eps
self.do_power_iteration = training
if not isinstance(layer, tf.keras.layers.Layer):
raise ValueError(
'Please initialize `TimeDistributed` layer with a '
'`Layer` instance. You passed: {input}'.format(input=layer))
super(SpectralNormalization, self).__init__(layer, **kwargs)
def build(self, input_shape):
self.layer.build(input_shape)
self.w = self.layer.kernel
self.w_shape = self.w.shape.as_list()
self.v = self.add_weight(shape=(1, np.prod(self.w_shape)),
initializer=tf.initializers.TruncatedNormal(stddev=0.02),
trainable=False,
name='sn_v')
self.u = self.add_weight(shape=(1, self.w_shape[-1]),
initializer=tf.initializers.TruncatedNormal(stddev=0.02),
trainable=False,
name='sn_u')
super(SpectralNormalization, self).build()
def call(self, inputs):
self.update_weights()
output = self.layer(inputs)
self.restore_weights() # Restore weights because of this formula "W = W - alpha * W_SN`"
return output
def update_weights(self):
w_reshaped = tf.reshape(self.w, [-1, self.w_shape[-1]])
u_hat = self.u
v_hat = self.v # init v vector
if self.do_power_iteration:
for _ in range(self.iteration):
v_ = tf.matmul(u_hat, tf.transpose(w_reshaped))
v_hat = v_ / (tf.reduce_sum(v_**2)**0.5 + self.eps)
u_ = tf.matmul(v_hat, w_reshaped)
u_hat = u_ / (tf.reduce_sum(u_**2)**0.5 + self.eps)
sigma = tf.matmul(tf.matmul(v_hat, w_reshaped), tf.transpose(u_hat))
self.u.assign(u_hat)
self.v.assign(v_hat)
self.layer.kernel.assign(self.w / sigma)
def restore_weights(self):
self.layer.kernel.assign(self.w)
class ConditionalBatchNormalization(tf.keras.layers.Layer):
def __init__(self, decay = 0.9, epsilon = 1e-05,
kernel_initializer="glorot_uniform", training=True):
super(ConditionalBatchNormalization, self).__init__()
self.decay = decay
self.epsilon = epsilon
self.kernel_initializer = kernel_initializer
self.training = training
def build(self, input_shape):
self.stored_mean = self.add_weight(shape=input_shape.as_list()[-1],
initializer=tf.constant_initializer(0.0),
trainable=False,
name='population_mean')
self.stored_var = self.add_weight(shape=input_shape.as_list()[-1],
initializer=tf.constant_initializer(1.0),
trainable=False,
name='population_var')
def call(self, inputs):
x = inputs[0]
y = inputs[1]
c = x.get_shape().as_list()[-1]
beta = SpectralNormalization(
Dense(c, kernel_initializer=self.kernel_initializer))(y)
beta = tf.reshape(beta, shape=[-1, 1, 1, c])
gamma = SpectralNormalization(
Dense(c, kernel_initializer=self.kernel_initializer))(y)
gamma = tf.reshape(gamma, shape=[-1, 1, 1, c])
if self.training:
batch_mean, batch_var = tf.nn.moments(x, [0, 1, 2])
self.stored_mean.assign(self.stored_mean * self.decay + batch_mean * (1 - self.decay))
self.stored_var.assign(self.stored_var * self.decay + batch_var * (1 - self.decay))
return tf.nn.batch_normalization(x, batch_mean, batch_var, beta, gamma, self.epsilon)
else:
return tf.nn.batch_normalization(x, self.stored_mean, self.stored_var, beta, gamma, self.epsilon)
class SelfAttention(tf.keras.layers.Layer):
def __init__(self, weight_initializer="glorot_uniform", training=True):
super(SelfAttention, self).__init__()
self.weight_initializer = weight_initializer
self.training = training
def build(self, input_shape):
self.gamma = self.add_weight(shape=input_shape,
initializer=tf.constant_initializer(0.0),
trainable=True,
name='gamma')
def call(self, inputs):
x = inputs[0]
y = inputs[1]
x_shape = x.get_shape().as_list()
c = x_shape[-1]
# convolutional layers
theta = SpectralNormalization(
Convolution2D(c // 8, 1, 1, use_bias=False,
kernel_initializer=self.weight_initializer))(x)
phi = SpectralNormalization(
Convolution2D(c // 8, 1, 1, use_bias=False,
kernel_initializer=self.weight_initializer))(x)
phi = MaxPooling2D(pool_size=(2, 2))(phi)
g = SpectralNormalization(
Convolution2D(c // 2, 1, 1, use_bias=False,
kernel_initializer=self.weight_initializer))(x)
g = MaxPooling2D(pool_size=(2, 2))(g)
theta = Reshape((-1, x_shape[1] * x_shape[2], c // 8))(theta)
phi = Reshape((-1, x_shape[1] * x_shape[2] // 4, c // 8))(phi)
g = Reshape((-1, x_shape[1] * x_shape[2] // 2, c // 2))(g)
# get attention map
beta = Softmax()(tf.matmul(theta, phi, transpose_a=True))
o = tf.matmul(g, beta, transpose_b=True)
o = Reshape((-1, x_shape[1], x_shape[2], c // 2))(o)
o = SpectralNormalization(
Convolution2D(c, 1, 1, use_bias=False,
kernel_initializer=self.weight_initializer))(x)
return self.gamma * o + x
def generator_model(out_channels = [16,8,4,2,1], self_attention_location = -1, z_dim=128, starting_shape=(4,4,16),
output_ch = 3, hierarchical=False, concat=True, deep=True, channel_drop=True, n_classes=None):
initializer = tf.keras.initializers.Orthogonal()
# default model is set to Big-GAN 128
nb_blocks=len(out_channels)
if self_attention_location < 0:
self_attention_location = nb_blocks-self_attention_location
z = Input(shape=(z_dim,), name='img_input')
y = Input(shape=(1,), name='label_input')
y = Embedding(n_classes, z_dim, input_length=1, name='embedding')(y)
if hierarchical:
# split z for the beggining and each conv block
z_chunk_size = z_dim // (nb_blocks + 1)
z_split_remainder = z_dim - (z_chunk_size * nb_blocks)
z_split = tf.split(z, num_or_size_splits=[z_chunk_size] * nb_blocks + [z_split_remainder], axis=-1)
z = z_split[0]
ys = [tf.concat([y, item], -1) for item in z_split[1:]]
elif concat:
ys = [tf.concat([y, z], -1)] * nb_blocks
z = ys[0]
else:
ys = [y] * nb_blocks
if deep:
up_block = up_deep_res_block
else:
up_block = up_res_block
# Input block
x = SpectralNormalization(Dense(np.prod(starting_shape), kernel_initializer=initializer,
name='initial_dense'))(z)
x = Reshape((starting_shape[0], starting_shape[1], starting_shape[2]), name='initial_reshape')(x)
stage = 0
in_channels = starting_shape[2]
# Add residual blocks and self attention block
for block_idx in range(nb_blocks):
if (block_idx == self_attention_location):
x = SelfAttention()(x)
x = up_block(x, y, in_channels, out_channels[block_idx], initializer, block_idx,
channel_drop=channel_drop)
in_channels = out_channels[block_idx]
# Output block
x = BatchNormalization(name='bn_output')(x)
x = Activation('relu', name='relu_output')(x)
x = SpectralNormalization(Convolution2D(output_ch, 3, 1, padding='same', name='conv_output',
kernel_initializer=initializer))(x)
output = tf.cast(tanh(x), tf.float32)
return Model(inputs=[z, y], outputs=output)
def up_res_block(x, y, in_channels, out_channels, initializer, block_idx,
channel_drop=True):
idx='up_block'+str(block_idx)
h = ConditionalBatchNormalization(kernel_initializer=initializer)(x, y)
h = Activation('relu', name=idx+'_relu1')(h)
# upsample
x = UpSampling2D(2, name=idx+'_upsample_input')(x)
h = UpSampling2D(2, name=idx+'_upsample_hidden')(h)
x = SpectralNormalization(
Convolution2D(out_channels, 1, 1, kernel_initializer=initializer,
name=idx+'_bottleneck_input'))(h)
h = SpectralNormalization(
Convolution2D(out_channels, 3, 1, padding='same', kernel_initializer=initializer))(h)
h = ConditionalBatchNormalization()(h, y)
h = Activation('relu', name=idx+'_relu2')(h)
h = SpectralNormalization(
Convolution2D(out_channels, 3, 1, padding='same', kernel_initializer=initializer))(h)
return h + x
def up_deep_res_block(x, y, in_channels, out_channels, initializer, block_idx,
channel_drop=True):
idx='up_block'+str(block_idx)
x = deep_res_block_up(x, y, in_channels, in_channels, initializer, idx+'_res')
return deep_res_block_up(x, y, in_channels, out_channels, initializer, idx+'_up_res',
channel_drop=channel_drop, upsample=UpSampling2D(2))
def deep_res_block_up(x, y, in_channels, out_channels, initializer, block_idx,
channel_drop=True, upsample=None):
h = ConditionalBatchNormalization(kernel_initializer=initializer)(x, y)
h = Activation('relu')(h)
hidden_channels = in_channels // 4
# Bottleneck to reduce the number of channels
h = SpectralNormalization(
Convolution2D(hidden_channels, 1, 1, kernel_initializer=initializer, use_bias=False))(h)
h = ConditionalBatchNormalization()(h, y)
h = Activation('relu')(h)
# make the channels of the input equal the channels of the output
if in_channels != out_channels:
if channel_drop:
x = x[..., :out_channels]
else:
x = SpectralNormalization(
Convolution2D(out_channels, 1, 1, kernel_initializer=initializer, use_bias=False))(x)
# upsample
if upsample:
x = upsample(x)
h = upsample(h)
h = SpectralNormalization(
Convolution2D(hidden_channels, 3, 1, padding='same', kernel_initializer=initializer, use_bias=False))(h)
h = ConditionalBatchNormalization()(h, y)
h = Activation('relu')(h)
h = SpectralNormalization(
Convolution2D(hidden_channels, 3, 1, padding='same', kernel_initializer=initializer, use_bias=False))(h)
h = ConditionalBatchNormalization()(h, y)
h = Activation('relu')(h)
# Bottleneck to increase the number of channels to the required shape
h = SpectralNormalization(
Convolution2D(out_channels, 1, 1, kernel_initializer=initializer, use_bias=False))(h)
return h + x
def discriminator_model(out_channels = [2,4,8,16,16], self_attention_location = 1,
starting_shape=(128,128,3), deep=True, n_classes=None):
initializer = tf.keras.initializers.Orthogonal()
x = Input(shape=starting_shape, name='img_input')
y = Input(shape=(1,), name='label_input')
y = Embedding(n_classes, out_channels[-1], input_length=1, name='embedding')(y)
# Input block
if deep:
x = SpectralNormalization(Convolution2D(1, 3, 1, padding='same', name='conv_input',
kernel_initializer=initializer))(x)
down_block = down_deep_res_block
else:
down_block = down_res_block
# default model is set to Big-GAN 128
nb_blocks=len(out_channels)
stage = 0
in_channels = starting_shape[2]
# Add residual blocks and self attention block
for block_idx in range(nb_blocks):
if (block_idx == self_attention_location):
x = SelfAttention()(x)
x = down_block(x, y, in_channels, out_channels[block_idx], initializer, block_idx)
in_channels = out_channels[block_idx]
# apply global sum pooling
x = Activation('relu', name='out_relu')(x)
x = tf.reduce_sum(x,(1,2))
output = Dense(1, name='out_dense')(x)
output = tf.cast(output + tf.reduce_sum(y * x, -1, keepdims=True), tf.float32)
return Model(inputs=[x,y], outputs=output)
def down_res_block(x, y, in_channels, out_channels, initializer, block_idx, downsample=True):
idx='down_block'+str(block_idx)
if block_idx > 0:
h = Activation('relu', name=idx+'_relu1')(x)
if downsample:
h = AveragePooling2D(2)(h)
x = SpectralNormalization(
Convolution2D(out_channels, 1, 1, kernel_initializer=initializer,
name=idx+'_bottleneck_input'))(x)
else:
h = x
x = SpectralNormalization(
Convolution2D(out_channels, 1, 1, kernel_initializer=initializer,
name=idx+'_bottleneck_input'))(x)
if downsample:
h = AveragePooling2D(2)(h)
h = SpectralNormalization(
Convolution2D(out_channels, 3, 1, padding='same', kernel_initializer=initializer))(h)
h = Activation('relu', name=idx+'_relu2')(x)
h = SpectralNormalization(
Convolution2D(out_channels, 3, 1, padding='same', kernel_initializer=initializer))(h)
if downsample:
h = AveragePooling2D(2)(h)
return h + x
def down_deep_res_block(x, y, in_channels, out_channels, initializer, block_idx,
channel_drop=True):
x = deep_res_block_down(x, y, in_channels, in_channels, initializer, block_idx)
return deep_res_block_down(x, y, in_channels, out_channels, initializer, block_idx,
channel_drop=channel_drop, downsample=AveragePooling2D(2))
def deep_res_block_down(x, y, in_channels, out_channels, initializer, block_idx,
channel_drop=True, downsample=None):
h = Activation('relu')(x)
hidden_channels = in_channels // 4
# Bottleneck to reduce the number of channels
h = SpectralNormalization(
Convolution2D(hidden_channels, 1, 1, kernel_initializer=initializer, use_bias=False))(h)
h = Activation('relu')(h)
h = SpectralNormalization(
Convolution2D(hidden_channels, 3, 1, padding='same', kernel_initializer=initializer, use_bias=False))(h)
h = Activation('relu')(h)
h = SpectralNormalization(
Convolution2D(hidden_channels, 3, 1, padding='same', kernel_initializer=initializer, use_bias=False))(h)
h = Activation('relu')(h)
# upsample
if downsample:
x = downsample(x)
h = downsample(h)
# Bottleneck to increase the number of channels to the required shape
h = SpectralNormalization(
Convolution2D(out_channels, 1, 1, kernel_initializer=initializer, use_bias=False))(h)
# make the channels of the input equal the channels of the output
aditional_channels = SpectralNormalization(
Convolution2D(out_channels-in_channels, 1, 1, kernel_initializer=initializer, use_bias=False))(x)
x = tf.concat(x, aditional_channels, -1)
return h + x
generator = generator_model(z_dim=z_dim, n_classes=n_classes)
discriminator = discriminator_model(starting_shape=img_shape, n_classes=n_classes)
# losses and optimizers
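# The discriminator below uses the hinge loss (as in SA-GAN/BigGAN): real samples are
# pushed above +1 and fake samples below -1, while the generator simply maximizes the
# discriminator score on its samples (i.e. minimizes the negative mean of d_fake).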
def discriminator_loss(d_real, d_fake):
loss_real = tf.reduce_mean(relu(1. - d_real))
loss_fake = tf.reduce_mean(relu(1. + d_fake))
return loss_real, loss_fake
def generator_loss(d_fake):
return -tf.reduce_mean(d_fake)
generator_optimizer = tf.keras.optimizers.Adam(g_lr, beta_1=g_adam_b1, beta_2=g_adam_b2, epsilon=g_adam_eps)
discriminator_optimizer = tf.keras.optimizers.Adam(d_lr, beta_1=d_adam_b1, beta_2=d_adam_b2, epsilon=d_adam_eps)
# to prevent underflow/overflow
generator_optimizer = mixed_precision.LossScaleOptimizer(generator_optimizer, loss_scale='dynamic')
discriminator_optimizer = mixed_precision.LossScaleOptimizer(discriminator_optimizer, loss_scale='dynamic')
discriminator_loss_real = tf.keras.metrics.Mean(name='discriminator_loss_real')
discriminator_loss_fake = tf.keras.metrics.Mean(name='discriminator_loss_fake')
# use a distinct name so the metric does not shadow the generator_loss function above
generator_loss_mean = tf.keras.metrics.Mean(name='generator_loss')
def orthogonal_reg(gradients, strength=1e-4, blacklist=[]):
regularized = []
for i, gradient in enumerate(gradients):
if ndim(gradient) >= 2 and i not in blacklist:
w = tf.reshape(gradient, [-1, gradient.shape[-1]])
reg = (2 * tf.matmul(tf.matmul(w, w, transpose_b=True) * (1. - tf.eye(w.shape[0])), w))
gradient = gradient + strength * tf.reshape(reg, gradient.shape)
regularized.append(gradient)
return regularized
@tf.function
def d_train_step(z, z_y, x, x_y):
g_z = generator((z, z_y), training=True)
with tf.GradientTape() as tape:
real_output = discriminator((x, x_y), training=True)
fake_output = discriminator((g_z, z_y), training=True)
real_loss, fake_loss = discriminator_loss(real_output, fake_output)
d_loss = real_loss + fake_loss
scaled_d_loss = discriminator_optimizer.get_scaled_loss(d_loss)
scaled_gradients = tape.gradient(scaled_d_loss, discriminator.trainable_variables)
gradients = discriminator_optimizer.get_unscaled_gradients(scaled_gradients)
# apply orthogonal regularization
blacklist = [index for index, trainable_variables in enumerate(discriminator.trainable_variables)
if 'embedding' in trainable_variables.name]
gradients = orthogonal_reg(gradients, blacklist=blacklist)
discriminator_optimizer.apply_gradients(zip(gradients, discriminator.trainable_variables))
discriminator_loss_real(real_loss)
discriminator_loss_fake(fake_loss)
@tf.function
def g_train_step(z, z_y):
with tf.GradientTape() as tape:
g_z = generator((z, z_y), training=True)
fake_output = discriminator((g_z, z_y), training=True)
g_loss = generator_loss(fake_output)
scaled_g_loss = generator_optimizer.get_scaled_loss(g_loss)
scaled_gradients = tape.gradient(scaled_g_loss, generator.trainable_variables)
gradients = generator_optimizer.get_unscaled_gradients(scaled_gradients)
# apply orthogonal regularization
blacklist = [index for index, trainable_variables in enumerate(generator.trainable_variables)
if 'embedding' in trainable_variables.name]
gradients = orthogonal_reg(gradients, blacklist=blacklist)
generator_optimizer.apply_gradients(zip(gradients, generator.trainable_variables))
generator_loss_mean(g_loss)
# create summary writers
train_summary_writer = tf.summary.create_file_writer('results/summaries/train/' + identifier)
# create a seed to visualize the progress
z = tfd.Normal(loc=0, scale=1)
shape_z = [batch_size, z_dim]
y = tfd.Categorical(logits=tf.ones(n_classes))
shape_y = [batch_size, 1]
fixed_z = z.sample(shape_z)
fixed_y = y.sample(shape_y)
print("starting training")
for epoch in range(epochs):
time_start = time.time()
for i in range(train_size // (batch_size * 2)):
# train D d_steps times for each training step on G
for step_index in range(d_steps):
images, labels = train_gen.next()
z_ = z.sample(shape_z)
y_ = y.sample(shape_y)
d_train_step(z_, y_, images,labels)
# train G
z_ = z.sample(shape_z)
y_ = y.sample(shape_y)
g_train_step(z_, y_)
time_finish = time.time()
end_time = (time_finish-time_start)
if (epoch % log_freq == 0):
print ('Epoch: {}, Generator Loss: {}, Discriminator Real Loss: {}, Discriminator Fake Loss: {}, Time: {} s'.format(
epoch,
generator_loss_mean.result(),
discriminator_loss_real.result(),
discriminator_loss_fake.result(),
end_time))
generated_image = generator((fixed_z, fixed_y), training=False)
with train_summary_writer.as_default():
tf.summary.image('generated_image', generated_image, step=epoch)
tf.summary.scalar('generator_loss', generator_loss_mean.result(), step=epoch)
tf.summary.scalar('discriminator_loss_real', discriminator_loss_real.result(), step=epoch)
tf.summary.scalar('discriminator_loss_fake', discriminator_loss_fake.result(), step=epoch)
generator_loss_mean.reset_states()
discriminator_loss_real.reset_states()
discriminator_loss_fake.reset_states()
'''
if ((generator_loss.result() < min_loss) or (discriminator_loss.result() < min_loss)):
if not os.path.exists(models_directory):
os.makedirs(models_directory)
# serialize weights to HDF5
generator.save_weights(models_directory + "best{}-generator.h5".format(identifier))
discriminator.save_weights(models_directory + "best{}-discriminator.h5".format(identifier))
min_loss = min(generator_loss.result(), discriminator_loss.result())
patience = 0
else:
patience += 1
'''
```
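Once training finishes, one might want to inspect samples from the trained generator; a minimal sketch, assuming the `generator`, `z`, `y`, `shape_z` and `shape_y` objects defined above are still in scope:
```
# Sketch: draw a batch of latent vectors and labels and plot a few generated samples
sample_z = z.sample(shape_z)
sample_y = y.sample(shape_y)
generated = generator((sample_z, sample_y), training=False)
fig, axes = plt.subplots(1, 4, figsize=(12, 3))
for ax, img in zip(axes, generated[:4]):
    # images come out of tanh in [-1, 1]; rescale to [0, 1] for display
    ax.imshow(((tf.cast(img, tf.float32) + 1.0) / 2.0).numpy())
    ax.axis('off')
plt.show()
```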
|
github_jupyter
|
# https://www.kaggle.com/c/generative-dog-images/discussion/104281
# https://machinelearningmastery.com/a-gentle-introduction-to-the-biggan/
# https://machinelearningmastery.com/how-to-develop-a-conditional-generative-adversarial-network-from-scratch/
# https://github.com/taki0112/BigGAN-Tensorflow/blob/master/BigGAN_128.py
import sys
import numpy as np
import os
import io
import cv2
import glob
import handshape_datasets as hd
import tensorflow as tf
import tensorflow_datasets as tfds
from densenet import densenet_model
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from sklearn.utils.class_weight import compute_class_weight
from datetime import datetime
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import time
from tensorflow.keras import Model
from tensorflow.keras.layers import Input, ZeroPadding2D, Dense, Dropout, Activation, Reshape, Flatten
from tensorflow.keras.layers import AveragePooling2D, GlobalAveragePooling2D, MaxPooling2D, BatchNormalization
from tensorflow.keras.layers import Conv2DTranspose, LeakyReLU, Conv2D, Embedding, Multiply, Add, UpSampling2D
from tf.keras.activations import tanh
from tf.keras.backend import ndim
from tensorflow.keras.mixed_precision import experimental as mixed_precision
from tensorflow.nn import relu
import tensorflow_probability.distributions as tfd
import pandas as pd
import seaborn as sns
%matplotlib inline
import matplotlib.pyplot as plt
from PIL import Image
from pathlib import Path
from sklearn.model_selection import train_test_split
tf.__version__
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
try:
# Currently, memory growth needs to be the same across GPUs
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
logical_gpus = tf.config.experimental.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Memory growth must be set before GPUs have been initialized
print(e)
print(tf.config.experimental.list_logical_devices('GPU'))
tf.test.is_gpu_available()
# set up policy used in mixed precision
policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_policy(policy)
# hyperparameters
# data
rotation_range = 10
width_shift_range = 0.1
height_shift_range = 0.1
horizontal_flip = True
vertical_flip = False
shear_range = 0
zoom_range = 0.1
z_dim = 128
# training
g_lr = 5e-5
g_adam_b1 = 0.0
g_adam_b2 = 0.999
g_adam_eps=1e-8
d_steps = 2
d_lr = 2e-4
d_adam_b1 = 0.0
d_adam_b2 = 0.999
d_adam_eps=1e-8
epochs = 200
min_loss = 25
min_loss_acc = 0
batch_size = 128
noise_dim = 256
# log
log_freq = 1
models_directory = 'results/models/'
date = datetime.now().strftime("%Y_%m_%d-%H:%M:%S")
identifier = "simple_gan-" + date
dataset_name = 'rwth'
path = '/tf/data/{}'.format(dataset_name)
data_dir = os.path.join(path, 'data')
if not os.path.exists(data_dir):
os.makedirs(data_dir)
data = hd.load(dataset_name, Path(data_dir))
good_min = 40
good_classes = []
n_unique = len(np.unique(data[1]['y']))
for i in range(n_unique):
images = data[0][np.equal(i, data[1]['y'])]
if len(images) >= good_min:
good_classes = good_classes + [i]
x = data[0][np.in1d(data[1]['y'], good_classes)]
img_shape = x[0].shape
print(img_shape)
x = tf.image.resize(x, [128, 96]).numpy()
img_shape = x[0].shape
print(img_shape)
y = data[1]['y'][np.in1d(data[1]['y'], good_classes)]
y_dict = dict(zip(np.unique(y), range(len(np.unique(y)))))
y = np.vectorize(y_dict.get)(y)
x_train, x_test, y_train, y_test = train_test_split(
x, y, train_size=0.8, test_size=0.2, stratify=y)
classes = np.unique(y_train)
n_classes = len(classes)
train_size = x_train.shape[0]
test_size = x_test.shape[0]
img_shape
datagen = ImageDataGenerator(
featurewise_center=True,
featurewise_std_normalization=True,
rotation_range=rotation_range,
width_shift_range=width_shift_range,
height_shift_range=height_shift_range,
horizontal_flip=horizontal_flip,
vertical_flip = vertical_flip,
shear_range=shear_range,
zoom_range=zoom_range,
fill_mode='constant',
cval=0,
)
datagen.fit(x_train)
test_datagen = ImageDataGenerator(
featurewise_center=True,
featurewise_std_normalization=True,
)
test_datagen.fit(x_train)
# create data generators
train_gen = datagen.flow(x_train, y_train, batch_size=batch_size)
test_gen = test_datagen.flow(x_test, y_test , batch_size=batch_size, shuffle=False)
for images, _ in train_gen:
plt.imshow(images[0])
break
# https://github.com/thisisiron/spectral_normalization-tf2
class SpectralNormalization(tf.keras.layers.Wrapper):
def __init__(self, layer, iteration=1, eps=1e-12, training=True, **kwargs):
self.iteration = iteration
self.eps = eps
self.do_power_iteration = training
if not isinstance(layer, tf.keras.layers.Layer):
raise ValueError(
'Please initialize `TimeDistributed` layer with a '
'`Layer` instance. You passed: {input}'.format(input=layer))
super(SpectralNormalization, self).__init__(layer, **kwargs)
def build(self, input_shape):
self.layer.build(input_shape)
self.w = self.layer.kernel
self.w_shape = self.w.shape.as_list()
self.v = self.add_weight(shape=(1, np.prod(self.w_shape)),
initializer=tf.initializers.TruncatedNormal(stddev=0.02),
trainable=False,
name='sn_v')
self.u = self.add_weight(shape=(1, self.w_shape[-1]),
initializer=tf.initializers.TruncatedNormal(stddev=0.02),
trainable=False,
name='sn_u')
super(SpectralNormalization, self).build()
def call(self, inputs):
self.update_weights()
output = self.layer(inputs)
self.restore_weights() # Restore weights because of this formula "W = W - alpha * W_SN`"
return output
def update_weights(self):
w_reshaped = tf.reshape(self.w, [-1, self.w_shape[-1]])
u_hat = self.u
v_hat = self.v # init v vector
if self.do_power_iteration:
for _ in range(self.iteration):
v_ = tf.matmul(u_hat, tf.transpose(w_reshaped))
v_hat = v_ / (tf.reduce_sum(v_**2)**0.5 + self.eps)
u_ = tf.matmul(v_hat, w_reshaped)
u_hat = u_ / (tf.reduce_sum(u_**2)**0.5 + self.eps)
sigma = tf.matmul(tf.matmul(v_hat, w_reshaped), tf.transpose(u_hat))
self.u.assign(u_hat)
self.v.assign(v_hat)
self.layer.kernel.assign(self.w / sigma)
def restore_weights(self):
self.layer.kernel.assign(self.w)
class ConditionalBatchNormalization(keras.layers.Layer):
def __init__(self, decay = 0.9, epsilon = 1e-05,
kernel_initializer="glorot_uniform", training=True):
super(Linear, self).__init__()
self.decay = decay
self.epsilon = epsilon
self.kernel_initializer = kernel_initializer
self.training = training
def build(self, input_shape):
self.stored_mean = self.add_weight(shape=input_shape.as_list()[-1],
initializer=tf.constant_initializer(0.0),
trainable=False,
name='population_mean')
self.stored_var = self.add_weight(shape=input_shape.as_list()[-1],
initializer=tf.constant_initializer(1.0),
trainable=False,
name='population_var')
def call(self, inputs):
x = inputs[0]
y = inputs[1]
c = x.get_shape().as_list()[-1]
beta = SpectralNormalization(
Dense(c, kernel_initializer=self.kernel_initializer))(y)
beta = tf.reshape(beta, shape=[-1, 1, 1, c])
gamma = SpectralNormalization(
Dense(c, kernel_initializer=self.kernel_initializer))(y)
gamma = tf.reshape(gamma, shape=[-1, 1, 1, c])
if self.training:
batch_mean, batch_var = tf.nn.moments(x, [0, 1, 2])
self.stored_mean.assign(self.stored_mean * self.decay + batch_mean * (1 - self.decay))
self.stored_var.assign(self.stored_var * self.decay + batch_var * (1 - self.decay))
return tf.nn.batch_normalization(x, batch_mean, batch_var, beta, gamma, self.epsilon)
else:
return tf.nn.batch_normalization(x, self.stored_mean, self.stored_var, beta, gamma, self.epsilon)
class SelfAttention(keras.layers.Layer):
def __init__(self, weight_initializer="glorot_uniform", training=True):
super(Linear, self).__init__()
self.weight_initializer = weight_initializer
self.training = training
def build(self, input_shape):
self.gamma = self.add_weight(shape=input_shape,
initializer=tf.constant_initializer(0.0),
trainable=True,
name='gamma')
def call(self, inputs):
x = inputs[0]
y = inputs[1]
x_shape = x.get_shape().as_list()
c = x_shape[-1]
# convolutional layers
theta = SpectralNormalization(
Convolution2D(c // 8, 1, 1, use_bias=False,
kernel_initializer=self.weight_initializer))(x)
phi = SpectralNormalization(
Convolution2D(c // 8, 1, 1, use_bias=False,
kernel_initializer=self.weight_initializer))(x)
phi = MaxPooling2D(pool_size=(2, 2))(phi)
g = SpectralNormalization(
Convolution2D(c // 2, 1, 1, use_bias=False,
kernel_initializer=self.weight_initializer))(x)
g = MaxPooling2D(pool_size=(2, 2))(g)
theta = Reshape((-1, x_shape[1] * x_shape[2], c // 8))(theta)
phi = Reshape((-1, x_shape[1] * x_shape[2] // 4, c // 8))(phi)
g = Reshape((-1, x_shape[1] * x_shape[2] // 2, c // 2))(g)
# get attention map
beta = Softmax()(tf.matmul(theta, phi, transpose_a=True))
o = tf.matmul(g, beta, transpose_b=True)
o = Reshape((-1, x_shape[1], x_shape[2], c // 2))(o)
o = SpectralNormalization(
Convolution2D(c, 1, 1, use_bias=False,
kernel_initializer=self.weight_initializer))(x)
return self.gamma * o + x
def generator_model(out_channels = [16,8,4,2,1], self_attention_location = -1, z_dim=128, starting_shape=(4,4,16),
output_ch = 3, hierarchical=False, concat=True, deep=True, channel_drop=True, n_classes=None):
initializer = tf.keras.initializers.Orthogonal()
# default model is set to Big-GAN 128
nb_blocks=len(out_channels)
if self_attention_location < 0:
self_attention_location = nb_blocks-self_attention_location
z = Input(shape=(z_dim,), name='img_input')
y = Input(shape=(1,), name='label_input')
y = Embedding(n_classes, z_dim, input_length=1, name='embedding')(y)
if hierarchical:
# split z for the beggining and each conv block
z_chunk_size = z_dim // (nb_blocks + 1)
z_split_remainder = z_dim - (z_chunk_size * nb_blocks)
z_split = tf.split(z, num_or_size_splits=[z_chunk_size] * 5 + [split_dim_remainder], axis=-1)
z = z_split[0]
ys = [tf.concat([y, item], -1) for item in z_split[1:]]
elif concat:
ys = [tf.concat([y, z], -1)] * nb_blocks
z = ys[0]
else:
ys = [y] * nb_blocks
if deep:
up_block = up_deep_res_block
else:
up_block = up_res_block
# Input block
x = SpectralNormalization(Dense(sum(starting_shape), input_shape=z_shape, kernel_initializer=initializer,
name='initial_dense'))(z)
x = Reshape((starting_shape[0], starting_shape[1], starting_shape[2]), name='initial_reshape')(x)
stage = 0
in_channels = starting_shape[2]
# Add residual blocks and self attention block
for block_idx in range(nb_blocks):
if (block_idx == self_attention_location):
x = SelfAttention()(x)
x = up_block(x, y, in_channels, out_channels[block_idx], initializer, block_idx,
channel_equalizer=channel_equalizer)
in_channels = out_channels[block_idx]
# Output block
x = BatchNormalization(name='bn_output')(x)
x = Activation('relu', name='relu_output')(x)
x = SpectralNormalization(Convolution2D(output_ch, 3, 1, padding='same', name='conv_output',
kernel_initializer=initializer))(x)
output = tanh(x, dtype = tf.float32)
return Model(inputs=[z, y], outputs=output)
def up_res_block(x, y, in_channels, out_channels, initializer, block_idx,
channel_drop=True):
idx='up_block'+str(block_idx)
h = ConditionalBatchNormalization(name=idx+'_cbn1', kernel_initializer=initializer)(x, y)
h = Activation('relu', name=idx+'_relu1')(x)
# upsample
x = UpSampling2D(2, name=idx+'_upsample_input')(x)
h = UpSampling2D(2, name=idx+'_upsample_hidden')(h)
x = SpectralNormalization(
Convolution2D(out_channels, 1, 1, kernel_initializer=initializer,
name=idx+'_bottleneck_input'))(h)
h = SpectralNormalization(
Convolution2D(out_channels, 3, 1, padding='same', kernel_initializer=initializer))(h)
h = ConditionalBatchNormalization()(h, y)
h = Activation('relu', name=relu_name_base+'_x1')(h)
h = SpectralNormalization(
Convolution2D(out_channels, 3, 1, padding='same', kernel_initializer=initializer))(h)
return h + x
def up_deep_res_block(x, y, in_channels, out_channels, initializer, block_idx,
channel_drop=True):
idx='up_block'+str(block_idx)
x = deep_res_block(x, y, in_channels, in_channels, initializer, idx+'_res')
return deep_res_block(x, y, in_channels, out_channels, initializer, idx+'_up_res',
channel_drop=channel_drop, upsample=UpSampling2D(2))
def deep_res_block(x, y, in_channels, out_channels, initializer, block_idx,
                   channel_drop=True, upsample=None):
    idx = str(block_idx)
    h = ConditionalBatchNormalization(name=idx+'_cbn1', kernel_initializer=initializer)(x, y)
    h = Activation('relu', name=idx+'_relu1')(h)
    # Bottleneck to reduce the number of channels (keep at least one channel)
    hidden_channels = max(in_channels // 4, 1)
    h = SpectralNormalization(
        Convolution2D(hidden_channels, 1, 1, kernel_initializer=initializer, use_bias=False))(h)
    h = ConditionalBatchNormalization(name=idx+'_cbn2', kernel_initializer=initializer)(h, y)
    h = Activation('relu', name=idx+'_relu2')(h)
    # make the channels of the input equal the channels of the output
    if in_channels != out_channels:
        if channel_drop:
            x = x[..., :out_channels]
        else:
            x = SpectralNormalization(
                Convolution2D(out_channels, 1, 1, kernel_initializer=initializer, use_bias=False))(x)
    # upsample
    if upsample:
        x = upsample(x)
        h = upsample(h)
    h = SpectralNormalization(
        Convolution2D(hidden_channels, 3, 1, padding='same', kernel_initializer=initializer, use_bias=False))(h)
    h = ConditionalBatchNormalization(name=idx+'_cbn3', kernel_initializer=initializer)(h, y)
    h = Activation('relu', name=idx+'_relu3')(h)
    h = SpectralNormalization(
        Convolution2D(hidden_channels, 3, 1, padding='same', kernel_initializer=initializer, use_bias=False))(h)
    h = ConditionalBatchNormalization(name=idx+'_cbn4', kernel_initializer=initializer)(h, y)
    h = Activation('relu', name=idx+'_relu4')(h)
    # Bottleneck to increase the number of channels to the required shape
    h = SpectralNormalization(
        Convolution2D(out_channels, 1, 1, kernel_initializer=initializer, use_bias=False))(h)
    return h + x
def discriminator_model(out_channels = [2,4,8,16,16], self_attention_location = 1,
                        starting_shape=(128,128,3), deep=True, n_classes=None):
    initializer = tf.keras.initializers.Orthogonal()
    x_in = Input(shape=starting_shape, name='img_input')
    y_in = Input(shape=(1,), name='label_input')
    x = x_in
    y = Embedding(n_classes, out_channels[-1], input_length=1, name='embedding')(y_in)
    y = Reshape((out_channels[-1],), name='embedding_reshape')(y)
    # default model is set to Big-GAN 128
    nb_blocks = len(out_channels)
    in_channels = starting_shape[2]
    # Input block
    if deep:
        x = SpectralNormalization(Convolution2D(out_channels[0], 3, 1, padding='same', name='conv_input',
                                                kernel_initializer=initializer))(x)
        in_channels = out_channels[0]
        down_block = down_deep_res_block
    else:
        down_block = down_res_block
    # Add residual blocks and self attention block
    for block_idx in range(nb_blocks):
        if (block_idx == self_attention_location):
            x = SelfAttention()(x)
        x = down_block(x, y, in_channels, out_channels[block_idx], initializer, block_idx)
        in_channels = out_channels[block_idx]
    # apply global sum pooling
    x = Activation('relu', name='out_relu')(x)
    x = tf.reduce_sum(x, (1, 2))
    output = Dense(1, name='out_dense')(x)
    # projection discriminator: add the class-conditional inner product
    output = output + tf.reduce_sum(y * x, -1, keepdims=True)
    output = Activation('linear', dtype='float32', name='out_float32')(output)
    return Model(inputs=[x_in, y_in], outputs=output)
def down_res_block(x, y, in_channels, out_channels, initializer, block_idx, downsample=True):
    idx = 'down_block' + str(block_idx)
    if block_idx > 0:
        # pre-activation, skipped for the very first block
        h = Activation('relu', name=idx+'_relu1')(x)
    else:
        h = x
    h = SpectralNormalization(
        Convolution2D(out_channels, 3, 1, padding='same', kernel_initializer=initializer))(h)
    h = Activation('relu', name=idx+'_relu2')(h)
    h = SpectralNormalization(
        Convolution2D(out_channels, 3, 1, padding='same', kernel_initializer=initializer))(h)
    if downsample:
        h = AveragePooling2D(2)(h)
    # shortcut branch
    x = SpectralNormalization(
        Convolution2D(out_channels, 1, 1, kernel_initializer=initializer,
                      name=idx+'_bottleneck_input'))(x)
    if downsample:
        x = AveragePooling2D(2)(x)
    return h + x
def down_deep_res_block(x, y, in_channels, out_channels, initializer, block_idx,
                        channel_drop=True):
    idx = 'down_block' + str(block_idx)
    # the discriminator uses its own, unconditional deep residual block
    x = deep_res_block_d(x, y, in_channels, in_channels, initializer, idx+'_res')
    return deep_res_block_d(x, y, in_channels, out_channels, initializer, idx+'_down_res',
                            channel_drop=channel_drop, downsample=AveragePooling2D(2))
def deep_res_block_d(x, y, in_channels, out_channels, initializer, block_idx,
                     channel_drop=True, downsample=None):
    idx = str(block_idx)
    h = Activation('relu', name=idx+'_relu1')(x)
    # Bottleneck to reduce the number of channels (keep at least one channel)
    hidden_channels = max(in_channels // 4, 1)
    h = SpectralNormalization(
        Convolution2D(hidden_channels, 1, 1, kernel_initializer=initializer, use_bias=False))(h)
    h = Activation('relu', name=idx+'_relu2')(h)
    h = SpectralNormalization(
        Convolution2D(hidden_channels, 3, 1, padding='same', kernel_initializer=initializer, use_bias=False))(h)
    h = Activation('relu', name=idx+'_relu3')(h)
    h = SpectralNormalization(
        Convolution2D(hidden_channels, 3, 1, padding='same', kernel_initializer=initializer, use_bias=False))(h)
    h = Activation('relu', name=idx+'_relu4')(h)
    # downsample
    if downsample:
        x = downsample(x)
        h = downsample(h)
    # Bottleneck to increase the number of channels to the required shape
    h = SpectralNormalization(
        Convolution2D(out_channels, 1, 1, kernel_initializer=initializer, use_bias=False))(h)
    # make the channels of the input equal the channels of the output
    if out_channels > in_channels:
        additional_channels = SpectralNormalization(
            Convolution2D(out_channels - in_channels, 1, 1, kernel_initializer=initializer, use_bias=False))(x)
        x = tf.concat([x, additional_channels], -1)
    return h + x
generator = generator_model(z_dim=z_dim, n_classes=n_classes)
discriminator = discriminator_model(n_classes=n_classes)
# losses and optimizers
def discriminator_loss(d_real, d_fake):
    loss_real = tf.reduce_mean(tf.nn.relu(1. - d_real))
    loss_fake = tf.reduce_mean(tf.nn.relu(1. + d_fake))
    return loss_real, loss_fake
def generator_loss(d_fake):
    return -tf.reduce_mean(d_fake)
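# discriminator_loss and generator_loss implement the hinge GAN objective:
#   L_D = E[relu(1 - D(x))] + E[relu(1 + D(G(z)))],   L_G = -E[D(G(z))]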
generator_optimizer = tf.keras.optimizers.Adam(g_lr, beta_1=g_adam_b1, beta_2=g_adam_b2, epsilon=g_adam_eps)
discriminator_optimizer = tf.keras.optimizers.Adam(d_lr, beta_1=d_adam_b1, beta_2=d_adam_b2, epsilon=d_adam_eps)
# to prevent underflow/overflow
generator_optimizer = mixed_precision.LossScaleOptimizer(generator_optimizer, loss_scale='dynamic')
discriminator_optimizer = mixed_precision.LossScaleOptimizer(discriminator_optimizer, loss_scale='dynamic')
discriminator_loss_real = tf.keras.metrics.Mean(name='discriminator_loss_real')
discriminator_loss_fake = tf.keras.metrics.Mean(name='discriminator_loss_fake')
# distinct name so the metric does not shadow the generator_loss function defined above
generator_loss_metric = tf.keras.metrics.Mean(name='generator_loss')
def orthogonal_reg(gradients, variables, strength=1e-4, blacklist=[]):
    # add the gradient of the BigGAN-style orthogonal penalty, computed from each
    # weight matrix (reshaped to 2-D), to the corresponding gradient
    regularized = []
    for i, (gradient, variable) in enumerate(zip(gradients, variables)):
        if variable.shape.ndims >= 2 and i not in blacklist:
            w = tf.reshape(variable, [variable.shape[0], -1])
            reg = 2 * tf.matmul(tf.matmul(w, w, transpose_b=True) * (1. - tf.eye(w.shape[0])), w)
            gradient = gradient + strength * tf.reshape(reg, gradient.shape)
        regularized.append(gradient)
    return regularized
@tf.function
def d_train_step(z, z_y, x, x_y):
    g_z = generator((z, z_y), training=True)
    with tf.GradientTape() as tape:
        real_output = discriminator((x, x_y), training=True)
        fake_output = discriminator((g_z, z_y), training=True)
        real_loss, fake_loss = discriminator_loss(real_output, fake_output)
        d_loss = real_loss + fake_loss
        scaled_d_loss = discriminator_optimizer.get_scaled_loss(d_loss)
    scaled_gradients = tape.gradient(scaled_d_loss, discriminator.trainable_variables)
    gradients = discriminator_optimizer.get_unscaled_gradients(scaled_gradients)
    # apply orthogonal regularization, skipping the class embedding
    blacklist = [index for index, trainable_variable in enumerate(discriminator.trainable_variables)
                 if 'embedding' in trainable_variable.name]
    gradients = orthogonal_reg(gradients, discriminator.trainable_variables, blacklist=blacklist)
    discriminator_optimizer.apply_gradients(zip(gradients, discriminator.trainable_variables))
    discriminator_loss_real(real_loss)
    discriminator_loss_fake(fake_loss)
@tf.function
def g_train_step(z, z_y):
    with tf.GradientTape() as tape:
        g_z = generator((z, z_y), training=True)
        fake_output = discriminator((g_z, z_y), training=True)
        g_loss = generator_loss(fake_output)
        scaled_g_loss = generator_optimizer.get_scaled_loss(g_loss)
    scaled_gradients = tape.gradient(scaled_g_loss, generator.trainable_variables)
    gradients = generator_optimizer.get_unscaled_gradients(scaled_gradients)
    # apply orthogonal regularization, skipping the class embedding
    blacklist = [index for index, trainable_variable in enumerate(generator.trainable_variables)
                 if 'embedding' in trainable_variable.name]
    gradients = orthogonal_reg(gradients, generator.trainable_variables, blacklist=blacklist)
    generator_optimizer.apply_gradients(zip(gradients, generator.trainable_variables))
    generator_loss_metric(g_loss)
# create summary writers
train_summary_writer = tf.summary.create_file_writer('results/summaries/train/' + identifier)
# create a seed to visualize the progress
z = tfd.Normal(loc=0, scale=1)
shape_z = [batch_size, z_dim]
y = tfd.Categorical(logits=tf.ones(n_classes))
shape_y = [batch_size, 1]
fixed_z = z.sample(shape_z)
fixed_y = y.sample(shape_y)
print("starting training")
for epoch in range(epochs):
time_start = time.time()
for i in range(train_size / (batch_size * 2)):
# train D d_steps times for each training step on G
for step_index in range(d_steps):
images, labels = train_gen.next()
z_ = z.sample(shape_z)
y_ = y.sample(shape_y)
d_train_step(z_, y_, images,labels)
# train G
z_ = z.sample(shape_z)
y_ = y.sample(shape_y)
g_train_step(z_, y_)
time_finish = time.time()
end_time = (time_finish-time_start)
    if (epoch % log_freq == 0):
        print('Epoch: {}, Generator Loss: {}, Discriminator Real Loss: {}, Discriminator Fake Loss: {}, Time: {} s'.format(
            epoch,
            generator_loss_metric.result(),
            discriminator_loss_real.result(),
            discriminator_loss_fake.result(),
            end_time))
        generated_image = generator((fixed_z, fixed_y), training=False)
        with train_summary_writer.as_default():
            # rescale from [-1, 1] (tanh output) to [0, 1] for the image summary
            tf.summary.image('generated_image', (generated_image + 1.) / 2., step=epoch)
            tf.summary.scalar('generator_loss', generator_loss_metric.result(), step=epoch)
            tf.summary.scalar('discriminator_loss',
                              discriminator_loss_real.result() + discriminator_loss_fake.result(), step=epoch)
        generator_loss_metric.reset_states()
        discriminator_loss_real.reset_states()
        discriminator_loss_fake.reset_states()
'''
if ((generator_loss.result() < min_loss) or (discriminator_loss.result() < min_loss)):
if not os.path.exists(models_directory):
os.makedirs(models_directory)
# serialize weights to HDF5
generator.save_weights(models_directory + "best{}-generator.h5".format(identifier))
discriminator.save_weights(models_directory + "best{}-discriminator.h5".format(identifier))
min_loss = min(generator_loss.result(), discriminator_loss.result())
patience = 0
else:
patience += 1
'''
# End-To-End Example: Data Analysis of iSchool Classes
In this end-to-end example we will perform a data analysis in Python Pandas and attempt to answer the following questions:
- What percentage of the schedule is undergrad (course number below 500)?
- Which undergrad classes are on Friday, or at 8 AM?
Things we will demonstrate:
- `read_html()` for basic web scraping
- dealing with 5 pages of data
- `append()` multiple `DataFrames` together
- Feature engineering (adding a column to the `DataFrame`)
The iSchool schedule of classes can be found here: https://ischool.syr.edu/classes
```
import pandas as pd
# this turns off warning messages
import warnings
warnings.filterwarnings('ignore')
# just figure out how to get the data
website = 'https://ischool.syr.edu/classes/?page=1'
data = pd.read_html(website)
data[0]
# let's generate links to the other pages
website = 'https://ischool.syr.edu/classes/?page='
for i in range(1,6):
    link = website + str(i)
    print(link)
# let's read them all and append them to a single data frame
website = 'https://ischool.syr.edu/classes/?page='
classes = pd.DataFrame() # (columns = ['Course','Section','ClassNo','Credits','Title','Instructor','Time','Days','Room'])
for i in range(1,6):
    link = website + str(i)
    data = pd.read_html(website + str(i))
    classes = classes.append(data[0], ignore_index=True)
classes.sample(5)
## let's set the columns
website = 'https://ischool.syr.edu/classes/?page='
classes = pd.DataFrame()
for i in range(1,6):
    link = website + str(i)
    data = pd.read_html(website + str(i))
    classes = classes.append(data[0], ignore_index=True)
classes.columns = ['Course','Section','ClassNo','Credits','Title','Instructor','Time','Days','Room']
classes.sample(5)
## this is good stuff. Let's make a function out of it for simplicity
def get_ischool_classes():
    website = 'https://ischool.syr.edu/classes/?page='
    classes = pd.DataFrame()
    for i in range(1,6):
        link = website + str(i)
        data = pd.read_html(website + str(i))
        classes = classes.append(data[0], ignore_index=True)
    classes.columns = ['Course','Section','ClassNo','Credits','Title','Instructor','Time','Days','Room']
    return classes
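# Note: DataFrame.append was removed in pandas 2.x. A sketch of the same helper
# using pd.concat (the rest of the notebook would work the same way with it):
def get_ischool_classes_concat():
    website = 'https://ischool.syr.edu/classes/?page='
    pages = [pd.read_html(website + str(i))[0] for i in range(1,6)]
    classes = pd.concat(pages, ignore_index=True)
    classes.columns = ['Course','Section','ClassNo','Credits','Title','Instructor','Time','Days','Room']
    return classes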
# main program
classes = get_ischool_classes()
# undergrad classes are 0-499, grad classes are 500 and up but we don't have course numbers!!!! So we must engineer them.
classes['Course'].str[0:3].sample(5)
classes['Course'].str[3:].sample(5)
# make the subject and number columns
classes['Subject'] = classes['Course'].str[0:3]
classes['Number'] = classes['Course'].str[3:]
classes.sample(5)
# and finally we can create the column we need!
classes['Type'] = ''
classes.loc[classes['Number'] < '500', 'Type'] = 'UGrad'
classes.loc[classes['Number'] >= '500', 'Type'] = 'Grad'
classes.sample(5)
# the entire program to retrieve the data and setup the columns looks like this:
# main program
classes = get_ischool_classes()
classes['Subject'] = classes['Course'].str[0:3]
classes['Number'] = classes['Course'].str[3:]
classes['Type'] = ''
classes.loc[classes['Number'] < '500', 'Type'] = 'UGrad'
classes.loc[classes['Number'] >= '500', 'Type'] = 'Grad'
# let's find the number of grad / undergrad courses
classes['Type'].value_counts()
# more grad classes than undergrad
# how many undergrad classes on a Friday?
friday = classes[ (classes['Type'] == 'UGrad') & (classes['Days'].str.find('F')>=0 ) ]
friday
# let's get rid of those pesky LAB sections!!!
# how many undergrad classes on a Friday?
friday_no_lab = friday[ ~friday['Title'].str.startswith('LAB:')]
friday_no_lab
# Looking for more classes to avoid? How about 8AM classes?
eight_am = classes[ classes['Time'].str.startswith('8:00am')]
eight_am
```
# Define mirrors for KAGRA MIF optical layout
## Import modules
```
import gtrace.optcomp as opt
from gtrace.unit import *
```
## Define mirrors
```
PRM = opt.Mirror(HRcenter=[0,0], normAngleHR=0.0,
diameter=25*cm, thickness=10*cm,
wedgeAngle=PRMWedge, inv_ROC_HR=1./PRM_ROC,
inv_ROC_AR=0,
Refl_HR=0.9, Trans_HR=1-0.9,
Refl_AR=500*ppm, Trans_AR=1-500*ppm,
n=nsilica, name='PRM', HRtransmissive=True)
PR2 = opt.Mirror(HRcenter=[0,0], normAngleHR=0.0,
diameter=25*cm, thickness=10*cm,
wedgeAngle=PRWedge, inv_ROC_HR=1./PR2_ROC,
Refl_HR=1-500*ppm, Trans_HR=500*ppm,
Refl_AR=500*ppm, Trans_AR=1-500*ppm,
n=nsilica, name='PR2')
PR3 = opt.Mirror(HRcenter=[0,0], normAngleHR=0.0,
diameter=25*cm, thickness=10*cm,
wedgeAngle=PRWedge, inv_ROC_HR=1./PR3_ROC,
Refl_HR=1-100*ppm, Trans_HR=100*ppm,
Refl_AR=500*ppm, Trans_AR=1-500*ppm,
n=nsilica, name='PR3')
BS = opt.Mirror(HRcenter=[0,0], normAngleHR=0.0,
diameter=38*cm, thickness=BS_Thickness,
wedgeAngle=BSWedge, inv_ROC_HR=0.,
Refl_HR=0.5, Trans_HR=0.5,
Refl_AR=100*ppm, Trans_AR=1-100*ppm,
n=nsilica, name='BS', HRtransmissive=True)
ITMX = opt.Mirror(HRcenter=[0,0], normAngleHR=0.0,
diameter=ITM_DIA, thickness=15*cm,
wedgeAngle=ITMXWedge, inv_ROC_HR=1./ITM_ROC,
Refl_HR=0.996, Trans_HR=1-0.996,
Refl_AR=500*ppm, Trans_AR=1-500*ppm,
n=nsaph, name='ITMX', HRtransmissive=True)
ITMY = opt.Mirror(HRcenter=[0,0], normAngleHR=0.0,
diameter=ITM_DIA, thickness=15*cm,
wedgeAngle=ITMYWedge, inv_ROC_HR=1./ITM_ROC,
Refl_HR=0.996, Trans_HR=1-0.996,
Refl_AR=500*ppm, Trans_AR=1-500*ppm,
n=nsaph, name='ITMY', HRtransmissive=True)
SRM = opt.Mirror(HRcenter=[0,0], normAngleHR=0.0,
diameter=25*cm, thickness=10*cm,
wedgeAngle=SRMWedge, inv_ROC_HR=1./SRM_ROC,
inv_ROC_AR=0,
Refl_HR=1-0.1536, Trans_HR=0.1536,
Refl_AR=500*ppm, Trans_AR=1-500*ppm,
n=nsilica, name='SRM', HRtransmissive=True)
SR2= opt.Mirror(HRcenter=[0,0], normAngleHR=0.0,
diameter=25*cm, thickness=10*cm,
wedgeAngle=SRWedge, inv_ROC_HR=1./SR2_ROC,
Refl_HR=1-500*ppm, Trans_HR=500*ppm,
Refl_AR=500*ppm, Trans_AR=1-500*ppm,
n=nsilica, name='SR2')
SR3 = opt.Mirror(HRcenter=[0,0], normAngleHR=0.0,
diameter=25*cm, thickness=10*cm,
wedgeAngle=SRWedge, inv_ROC_HR=1./SR3_ROC,
Refl_HR=1-100*ppm, Trans_HR=100*ppm,
Refl_AR=500*ppm, Trans_AR=1-500*ppm,
n=nsilica, name='SR3')
ETMX = opt.Mirror(HRcenter=[0,0], normAngleHR=0.0,
diameter=ETM_DIA, thickness=15*cm,
wedgeAngle=ETMXWedge, inv_ROC_HR=1./ETM_ROC,
#Refl_HR=1-55*ppm, Trans_HR=55*ppm,
Refl_HR=0.01, Trans_HR=1-0.01,
Refl_AR=500*ppm, Trans_AR=1-500*ppm,
n=nsaph, name='ETMX')
ETMY = opt.Mirror(HRcenter=[0,0], normAngleHR=0.0,
diameter=ETM_DIA, thickness=15*cm,
wedgeAngle=ETMYWedge, inv_ROC_HR=1./ETM_ROC,
#Refl_HR=1-55*ppm, Trans_HR=55*ppm,
Refl_HR=0.01, Trans_HR=1-0.01,
Refl_AR=500*ppm, Trans_AR=1-500*ppm,
n=nsaph, name='ETMY')
```
### Dictionary to keep all the mirror objects
```
opticsDict = {'PRM':PRM, 'PR2':PR2, 'PR3':PR3, 'BS':BS, 'ITMX':ITMX,
'ITMY':ITMY, 'SR3':SR3, 'SR2':SR2, 'SRM':SRM, 'ETMX':ETMX, 'ETMY':ETMY}
```
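A quick sanity check on the collection; this sketch only relies on the names defined above and does not assume any particular `gtrace` attribute:
```
print(len(opticsDict), 'optics defined:')
for name in sorted(opticsDict):
    print(' ', name, type(opticsDict[name]).__name__)
```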
```
import json
from argparse import Namespace
from pathlib import Path
from typing import Dict, List, Tuple, Set, Union
PROJECT_DIR = Path('..').absolute()
PROJECT_DIR
args = Namespace(
train_file_path=PROJECT_DIR / 'data' / 'WSJ_02-21.pos',
test_file_path=PROJECT_DIR / 'data' / 'WSJ_24.words',
)
lines = [line.strip() for line in open(args.train_file_path, 'r')]
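# the assignment below overrides these lines with a tiny toy corpus for illustration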
lines = [
'the\tDT',
'cat\tNN',
'sat\tVBD',
'on\tIN',
'the\tDT',
'mat\tNN',
'.\t.',
'',
]
word_count: Dict[str, int] = {}
word_tag_count: Dict[Tuple[str, str], int] = {}
tag_count: Dict[str, int] = {}
tag_tag_count: Dict[Tuple[str, str], int] = {}
last_tag = 'B'
for line in lines:
    if last_tag == 'B':
        tag_count['B'] = tag_count.get('B', 0) + 1
    line = line.strip()
    if line:
        word, tag = [x.strip() for x in line.split("\t")]
        word = word.lower()
    else:
        word = ''
        tag = 'E'
    tag_count[tag] = tag_count.get(tag, 0) + 1
    tag_tag_count[(last_tag, tag)] = tag_tag_count.get((last_tag, tag), 0) + 1
    if word:
        word_count[word] = word_count.get(word, 0) + 1
        word_tag_count[(word, tag)] = word_tag_count.get((word, tag), 0) + 1
    if tag != 'E':
        last_tag = tag
    else:
        last_tag = 'B'
# clean for unknown word
affixes: List[Union[List[str], str]] = [['able', 'ible'], 'al', 'an', 'ar', 'ed', 'en', 'er', 'est', 'ful', 'ic', 'ing', 'ish', 'ive', 'less', 'ly', 'ment', 'ness', 'or', 'ous', 'y']
new_word_count = word_count.copy()
for word, count in word_count.items():
    if count > 1:
        continue
    word_class = "UNKNOWN"
    for affix in affixes:
        if type(affix) == list:
            word_affix_class = affix[0].upper()
            selected = False
            for affix_item in affix:
                if word.endswith(affix_item):
                    word_class = f"UNKNOWN_AFFIXED_WITH_{word_affix_class}"
                    selected = True
                    break
            if selected:
                break
        else:
            if word.endswith(affix):
                word_class = f"UNKNOWN_AFFIXED_WITH_{affix.upper()}"
                break
    new_word_count.pop(word)
    new_word_count[word_class] = new_word_count.get(word_class, 0) + 1
    for tag in tag_count.keys():
        original_word_tag_count = word_tag_count.pop((word, tag), 0)
        word_tag_count[(word_class, tag)] = word_tag_count.get((word_class, tag), 0) + original_word_tag_count
trans_prob: Dict[Tuple[str, str], float] = {} # (tag1, tag2) -> prob
for tag1 in tag_count.keys():
    for tag2 in tag_count.keys():
        total_tag1 = tag_count.get(tag1, 0)
        tag1_followed_by_tag2 = tag_tag_count.get((tag1, tag2), 0)
        if total_tag1 == 0:
            prob = 0
        else:
            prob = tag1_followed_by_tag2 / total_tag1
        trans_prob[(tag1, tag2)] = prob
trans_prob
emission_prob: Dict[Tuple[str, str], float] = {}  # (word, tag) -> prob
word_set = set(new_word_count.keys())  # vocabulary after rare words were mapped to UNKNOWN classes
for word in word_set:
    for tag in tag_count.keys():
        word_tag = word_tag_count.get((word, tag), 0)
        tag_total = tag_count.get(tag, 0)
        if tag_total == 0:
            prob = 0
        else:
            prob = word_tag / tag_total
        emission_prob[(word, tag)] = prob
emission_prob
lines = [line.strip() for line in open(args.test_file_path, 'r')]
```
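The notebook stops after reading the test words. For a sense of how the two probability tables combine, here is a minimal Viterbi decoder over `trans_prob` and `emission_prob` as built above. The small floor `EPS` is an assumption added here so that unseen word/tag pairs do not zero out every path; a full tagger would also map rare or unseen words to the same `UNKNOWN_*` classes used during training, which is skipped in this sketch.
```
EPS = 1e-8  # probability floor, an assumption not present in the cells above

def viterbi(words, trans_prob, emission_prob, tag_count):
    states = [t for t in tag_count if t not in ('B', 'E')]
    V = [{}]  # V[i][tag] = (best path score ending in tag at position i, backpointer)
    for tag in states:
        V[0][tag] = ((trans_prob.get(('B', tag), 0) + EPS) * (emission_prob.get((words[0], tag), 0) + EPS), None)
    for i in range(1, len(words)):
        V.append({})
        for tag in states:
            prev_scores = {prev: V[i - 1][prev][0] * (trans_prob.get((prev, tag), 0) + EPS) for prev in states}
            best_prev = max(prev_scores, key=prev_scores.get)
            V[i][tag] = (prev_scores[best_prev] * (emission_prob.get((words[i], tag), 0) + EPS), best_prev)
    # backtrack from the best final state
    last = max(states, key=lambda t: V[-1][t][0])
    path = [last]
    for i in range(len(words) - 1, 0, -1):
        path.append(V[i][path[-1]][1])
    return list(reversed(path))

print(viterbi(['the', 'cat', 'sat', 'on', 'the', 'mat', '.'], trans_prob, emission_prob, tag_count))
```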
# Lexicon Generator
<div class="alert alert-info">
This tutorial is available as an IPython notebook at [Malaya/example/lexicon](https://github.com/huseinzol05/Malaya/tree/master/example/lexicon).
</div>
```
%%time
import malaya
import numpy as np
```
### Why lexicon
A lexicon is a set of words related to a certain domain, for example words for negative and positive sentiments.
For example, the word `suka` can represent positive sentiment: if `suka` appears in a sentence, we can say that the sentence carries positive sentiment.
Lexicon-based classification is a common and very fast way to classify text, but it is pretty naive because a word can be semantically ambiguous.
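To make the idea concrete, a minimal (hypothetical) lexicon-based classifier just counts matched words per label; the tiny lexicon below is made up for illustration and is not part of Malaya:

```python
# a toy lexicon, invented for illustration only
toy_lexicon = {'positive': {'suka', 'gembira'}, 'negative': {'benci', 'sedih'}}

def lexicon_classify(sentence):
    tokens = sentence.lower().split()
    scores = {label: sum(token in words for token in tokens)
              for label, words in toy_lexicon.items()}
    return max(scores, key=scores.get), scores

print(lexicon_classify('saya suka makan nasi'))
```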
### sentiment lexicon
Malaya provided a small sample for sentiment lexicon, simply,
```
sentiment_lexicon = malaya.lexicon.sentiment
sentiment_lexicon.keys()
```
### emotion lexicon
Malaya provided a small sample for emotion lexicon, simply,
```
emotion_lexicon = malaya.lexicon.emotion
emotion_lexicon.keys()
```
### Lexicon generator
Building a lexicon is time consuming because it requires domain experts to populate the related words. With the help of a word vector, we can induce candidate words for specific domains given a small annotated lexicon. Why induce a lexicon from a word vector? Even though the word `suka` commonly represents positive sentiment, if the word vector learnt `suka` in contexts with a different polarity and its nearest words also carry that polarity, then `suka` will tend to be induced as negative sentiment.
Malaya provided inducing lexicon interface, build on top of [Inducing Domain-Specific Sentiment Lexicons from Unlabeled Corpora](https://arxiv.org/pdf/1606.02820.pdf).
Say you have a lexicon based on the standard language (`bahasa baku`) and you want to find a similar lexicon in a social media context; you can use this `malaya.lexicon` interface. To use this interface, we must initiate `malaya.wordvector.load` first.
And we need at least a small lexicon sample like this,
```python
{'label1': ['word1', 'word2'], 'label2': ['word3', 'word4']}
```
There can be more than 2 labels; for example `malaya.lexicon.emotion` has up to 6 different labels.
```
vocab, embedded = malaya.wordvector.load(model = 'socialmedia')
wordvector = malaya.wordvector.WordVector(embedded, vocab)
```
### random walk
Random walk is the main technique used in the paper; you can read more at [3.2 Propagating polarities from a seed set](https://arxiv.org/abs/1606.02820)
```python
def random_walk(
lexicon,
wordvector,
pool_size = 10,
top_n = 20,
similarity_power = 100.0,
beta = 0.9,
arccos = True,
normalization = True,
soft = False,
silent = False,
):
"""
Induce lexicon by using random walk technique, use in paper, https://arxiv.org/pdf/1606.02820.pdf
Parameters
----------
lexicon: dict
curated lexicon from expert domain, {'label1': [str], 'label2': [str]}.
wordvector: object
wordvector interface object.
pool_size: int, optional (default=10)
pick top-pool size from each lexicons.
top_n: int, optional (default=20)
top_n for each vectors will multiple with `similarity_power`.
similarity_power: float, optional (default=100.0)
extra score for `top_n`, less will generate less bias induced but high chance unbalanced outcome.
beta: float, optional (default=0.9)
penalty score, towards to 1.0 means less penalty. 0 < beta < 1.
arccos: bool, optional (default=True)
covariance distribution for embedded.dot(embedded.T). If false, covariance + 1.
normalization: bool, optional (default=True)
normalize word vectors using L2 norm. L2 is good to penalize skewed vectors.
soft: bool, optional (default=False)
if True, a word not in the dictionary will be replaced with nearest jarowrinkler ratio.
if False, it will throw an exception if a word not in the dictionary.
silent: bool, optional (default=False)
if True, will not print any logs.
Returns
-------
tuple: (labels[argmax(scores), axis = 1], scores, labels)
"""
```
```
%%time
results, scores, labels = malaya.lexicon.random_walk(sentiment_lexicon, wordvector, pool_size = 5)
np.unique(list(results.values()), return_counts = True)
results
%%time
results_emotion, scores_emotion, labels_emotion = malaya.lexicon.random_walk(emotion_lexicon,
wordvector,
pool_size = 10)
np.unique(list(results_emotion.values()), return_counts = True)
results_emotion
```
### propagate probabilistic
```python
def propagate_probabilistic(
lexicon,
wordvector,
pool_size = 10,
top_n = 20,
similarity_power = 10.0,
arccos = True,
normalization = True,
soft = False,
silent = False,
):
"""
Learns polarity scores via standard label propagation from lexicon sets.
Parameters
----------
lexicon: dict
curated lexicon from expert domain, {'label1': [str], 'label2': [str]}.
wordvector: object
wordvector interface object.
pool_size: int, optional (default=10)
pick top-pool size from each lexicons.
top_n: int, optional (default=20)
top_n for each vectors will multiple with `similarity_power`.
similarity_power: float, optional (default=10.0)
extra score for `top_n`, less will generate less bias induced but high chance unbalanced outcome.
arccos: bool, optional (default=True)
covariance distribution for embedded.dot(embedded.T). If false, covariance + 1.
normalization: bool, optional (default=True)
normalize word vectors using L2 norm. L2 is good to penalize skewed vectors.
soft: bool, optional (default=False)
if True, a word not in the dictionary will be replaced with nearest jarowrinkler ratio.
if False, it will throw an exception if a word not in the dictionary.
silent: bool, optional (default=False)
if True, will not print any logs.
Returns
-------
tuple: (labels[argmax(scores), axis = 1], scores, labels)
"""
```
```
%%time
results_emotion, scores_emotion, labels_emotion = malaya.lexicon.propagate_probabilistic(emotion_lexicon,
wordvector,
pool_size = 10)
np.unique(list(results_emotion.values()), return_counts = True)
results_emotion
```
### propagate graph
```python
def propagate_graph(
lexicon,
wordvector,
pool_size = 10,
top_n = 20,
similarity_power = 10.0,
normalization = True,
soft = False,
silent = False,
):
"""
Graph propagation method adapted from Velikovich, Leonid, et al. "The viability of web-derived polarity lexicons." http://www.aclweb.org/anthology/N10-1119
Parameters
----------
lexicon: dict
curated lexicon from expert domain, {'label1': [str], 'label2': [str]}.
wordvector: object
wordvector interface object.
pool_size: int, optional (default=10)
pick top-pool size from each lexicons.
top_n: int, optional (default=20)
top_n for each vectors will multiple with `similarity_power`.
similarity_power: float, optional (default=10.0)
extra score for `top_n`, less will generate less bias induced but high chance unbalanced outcome.
normalization: bool, optional (default=True)
normalize word vectors using L2 norm. L2 is good to penalize skewed vectors.
soft: bool, optional (default=False)
if True, a word not in the dictionary will be replaced with nearest jarowrinkler ratio.
if False, it will throw an exception if a word not in the dictionary.
silent: bool, optional (default=False)
if True, will not print any logs.
Returns
-------
tuple: (labels[argmax(scores), axis = 1], scores, labels)
"""
```
```
%%time
results_emotion, scores_emotion, labels_emotion = malaya.lexicon.propagate_graph(emotion_lexicon,
wordvector,
pool_size = 10)
np.unique(list(results_emotion.values()), return_counts = True)
results_emotion
```
# Figure 1: Grism Spectral Resolution vs Wavelength
***
### Table of Contents
1. [Information](#Information)
2. [Imports](#Imports)
3. [Data](#Data)
4. [Generate the Resolution vs Wavelength Plot with Equation](#Generate-the-Resolution-vs-Wavelength-Plot-with-Equation)
5. [Generate the Resolution vs Wavelength Plot with Filters](#Generate-the-Resolution-vs-Wavelength-Plot-with-Filters)
6. [Issues](#Issues)
7. [About this Notebook](#About-this-Notebook)
***
## Information
#### JDox links:
* [NIRCam Grisms](https://jwst-docs.stsci.edu/display/JTI/NIRCam+Grisms#NIRCamGrisms-Resolvingpower)
* NIRCam grism spectral resolution versus wavelength
## Imports
```
import numpy as np
from astropy.io import ascii
from astropy.table import Table
import matplotlib.pyplot as plt
%matplotlib inline
```
## Data
#### Data Location:
In notebook, originally from Tom Greene's JATIS paper here: [λ = 2.4 to 5 μm spectroscopy with the James Webb Space Telescope NIRCam instrument](https://www.spiedigitallibrary.org/journals/Journal-of-Astronomical-Telescopes-Instruments-and-Systems/volume-3/issue-03/035001/%CE%BB24-to-5%CE%BCm-spectroscopy-with-the-James-Webb-Space-Telescope/10.1117/1.JATIS.3.3.035001.full?SSO=1)
```
# resolving power vs. wavelength values
F322W2_x = [2.50,2.60,2.70,2.80,2.90,3.00,3.10,3.20,3.30,3.40,3.50,3.60,3.70,3.80,3.90,4.00]
F322W2_y = [1171.45,1214.66,1256.08,1295.58,1333.11,1368.59,1401.98,1433.26,1462.43,1489.47,1514.43,1537.34,1558.25,1577.20,1594.28,1609.55]
F444W_x = [3.90,4.00,4.10,4.20,4.30,4.40,4.50,4.60,4.70,4.80,4.90,5.00]
F444W_y = [1594.28,1609.55,1623.09,1634.99,1645.32,1654.18,1661.66,1667.84,1672.81,1676.66,1679.46,1681.30]
F410M_x = [3.90,3.95,4.00,4.05,4.10,4.15,4.20,4.25,4.30]
F410M_y = [1593.32,1601.23,1608.71,1615.79,1622.46,1628.73,1634.61,1640.12,1645.26]
F430M_x = [4.20,4.25,4.30,4.35,4.40]
F430M_y = [1634.61,1640.12,1645.26,1650.04,1654.47]
# Equation
x = np.arange(2.5,5.1,0.1)
R = 3.35*(x)**4-41.9*(x)**3+95.5*(x)**2+536*(x)-240
```
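For reference, the `R` array computed above is the polynomial fit to the resolving power as a function of wavelength $\lambda$ (in microns):

$$R(\lambda) = 3.35\,\lambda^4 - 41.9\,\lambda^3 + 95.5\,\lambda^2 + 536\,\lambda - 240$$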
## Generate the Resolution vs Wavelength Plot with Equation
```
f, ax1 = plt.subplots(1, sharex=True,figsize=(12, 9))
ax1.plot(x,R,lw=1.5,color='blue')
ax1.set_xlim(2.25,5.1)
ax1.tick_params(labelsize=20)
ax1.tick_params(axis='both', right=False, top=False)
ax1.tick_params('y', length=10, width=2, which='major')
ax1.tick_params('x', length=10, width=2, which='major')
f.text(0.5, 0.92, '\nNIRCam Grism Point Source Resolving Power', ha='center', fontsize=20)
f.text(0.5, 0.04, 'Wavelength ($\mu$m)', ha='center', fontsize=20)
f.text(0.02, 0.5, '$R\equiv\lambda/\Delta\lambda$', va='center', rotation='vertical', fontsize=24)
```
## Generate the Resolution vs Wavelength Plot with Filters
```
f, ax1 = plt.subplots(1, sharex=True,figsize=(12, 9))
ax1.scatter(x,R,marker='o',s=28,color='black',label='Equation')
ax1.plot(F322W2_x, F322W2_y, lw=3, label='F322W2 + GRISMR (modA)',color='blue',alpha=0.7)
ax1.plot(F444W_x, F444W_y, lw=3, label='F444W + GRISMR (modA)',color='purple',alpha=0.7)
ax1.plot(F410M_x, F410M_y, lw=3, label='F410M + GRISMR (modA)',color='green',alpha=0.7)
ax1.plot(F430M_x, F430M_y, lw=3, label='F430M + GRISMR (modA)',color='red',alpha=0.7)
ax1.set_xlim(2.25,5.1)
ax1.set_ylim(1100,1700)
ax1.tick_params(labelsize=20)
ax1.tick_params(axis='both', right=False, top=False)
ax1.tick_params('y', length=10, width=2, which='major')
ax1.tick_params('x', length=10, width=2, which='major')
ax1.legend(loc='best', fontsize=16)
f.text(0.5, 0.92, '\nNIRCam Grism Point Source Resolving Power', ha='center', fontsize=20)
f.text(0.5, 0.04, 'Wavelength ($\mu$m)', ha='center', fontsize=20)
f.text(0.02, 0.5, '$R\equiv\lambda/\Delta\lambda$', va='center', rotation='vertical', fontsize=24)
```
| wavelength | F322W2 | F444W | F430M | F410M | Equation |
|------------|---------|---------|---------|---------|----------|
| 2.50 | 1171.45 | | | | 1173.05 |
| 2.60 | 1214.66 | | | | 1215.83 |
| 2.70 | 1256.08 | | | | 1256.71 |
| 2.80 | 1295.58 | | | | 1295.64 |
| 2.90 | 1333.11 | | | | 1332.60 |
| 3.00 | 1368.59 | | | | 1367.55 |
| 3.10 | 1401.98 | | | | 1400.49 |
| 3.20 | 1433.26 | | | | 1431.41 |
| 3.30 | 1462.43 | | | | 1460.32 |
| 3.40 | 1489.47 | | | | 1487.21 |
| 3.50 | 1514.43 | | | | 1512.12 |
| 3.60 | 1537.34 | | | | 1535.06 |
| 3.70 | 1558.25 | | | | 1556.08 |
| 3.80 | 1577.20 | | | | 1575.20 |
| 3.90 | 1594.28 | 1594.28 | | 1593.32 | 1592.49 |
| 4.00 | 1609.55 | 1609.55 | | 1608.71 | 1608.00 |
| 4.10 | | 1623.09 | | 1622.46 | 1621.80 |
| 4.20 | | 1634.99 | 1634.61 | 1634.61 | 1633.95 |
| 4.30 | | 1645.32 | 1645.26 | 1645.26 | 1644.55 |
| 4.40 | | 1654.18 | 1654.47 | | 1653.68 |
| 4.50 | | 1661.66 | | | 1661.45 |
| 4.60 | | 1667.84 | | | 1667.95 |
| 4.70 | | 1672.81 | | | 1673.30 |
| 4.80 | | 1676.66 | | | 1677.63 |
| 4.90 | | 1679.46 | | | 1681.07 |
| 5.00 | | 1681.30 | | | 1683.75 |
## Issues
* None
## About this Notebook
**Authors:**
Alicia Canipe & Dan Coe
**Updated On:**
April 05, 2019
## Read the CSV file, drop duplicates (based on conversationId) and remove the unnecessary column
```
import pandas as pd
import os
from utils import show_df
os.chdir(r'D:\Project\Twitter_depression_detector\data')
os.getcwd()
# Reads the json generated from the CLI commands above and creates a pandas dataframe
tweets_df = pd.read_csv('Depression_merged.csv')
tweets_df=tweets_df.drop_duplicates(subset=['conversationId'])
tweets_df=tweets_df.drop(columns=['Unnamed: 0'],axis=1)
```
## Search for all the hashtags in the tweets using regex
```
tweets_df['hashtags']=tweets_df.content.str.findall(r'#.*?(?=\s|$)')
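# the pattern matches a '#' followed by a lazy run of characters, stopping at the next whitespace or end of string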
print(tweets_df['hashtags'])
show_df(tweets_df)
```
## Count the hashtags and remove the ones related to medical terms
```
tweets_df.hashtags.value_counts().head(20)
medical_terms = ["#mentalhealth", "#health", "#happiness", "#mentalillness", "#happy", "#joy", "#wellbeing"]
mask1 = tweets_df.hashtags.apply(lambda x: any(item for item in medical_terms if item in x))
print(tweets_df[mask1].content.tail())
tweets_df[mask1==False].content.head(10)
tweets_df=tweets_df[mask1==False]
#show_df(tweets_df)
```
## Remove tweets which contain too many hashtags
```
mask2 = tweets_df.hashtags.apply(lambda x: len(x) < 4)
tweets_df=tweets_df[mask2]
tweets_df.hashtags.value_counts().head(20)
#show_df(tweets_df)
```
## Remove tweets with many @ mentions as they are sometimes retweets
```
import ast
print("Len of dataset: ",len(tweets_df))
# the mentioned user were stored as string so converted them to list
tweets_df['mentionedUsers']=[ast.literal_eval(mentioneduser) if type(mentioneduser)!=float else str(mentioneduser) for mentioneduser in tweets_df['mentionedUsers']]
mask3 = tweets_df.mentionedUsers.apply(lambda x: len(x) < 5)
tweets_df = tweets_df[mask3]
print("Len of dataset: ",len(tweets_df))
#show_df(tweets_df)
```
## Remove tweets containing URLs as they might be promotional messages
```
import ast
print(len(tweets_df))
tweets_df['outlinks']=[ast.literal_eval(outlink) for outlink in tweets_df['outlinks']]
mask4 = tweets_df.outlinks.apply(lambda x: len(x)==0)
tweets_df = tweets_df[mask4]
print(len(tweets_df))
#show_df(tweets_df)
```
## Feature engineering: a column with the count of mentioned users
```
tweets_df['mentionedUserCount']=[len(mentionedUser) if type(mentionedUser)==list else 0 for mentionedUser in tweets_df['mentionedUsers']]
#show_df(tweets_df)
#tweets_df.to_csv('cleaned_tweets.csv')
```
## See the difference between the `content` and `renderedContent` columns
```
count=0
for i in range(len(tweets_df)):
    if tweets_df.iloc[i]['renderedContent'] == tweets_df.iloc[i]['content']:
        count += 1
print(count)
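# equivalent vectorized check, no Python-level loop
print((tweets_df['renderedContent'] == tweets_df['content']).sum())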
```
## Select the useful columns
```
tweets_df.columns
#tweets_cleaned_df=tweets_df[['url','date','user','renderedContent','replyCount','retweetCount','likeCount','quoteCount','mentionedUserCount']]
tweets_cleaned_df=tweets_df[['renderedContent','mentionedUserCount']]
#show_df(tweets_cleaned_df)
```
## Exporting the cleaned tweets
```
tweets_cleaned_df.to_csv('cleaned_merged_depression_tweets.csv',index=False)
```
```
%matplotlib inline
# With Bayesian Ridge Regression, our data is represented as
# the mean of the coefficients.
# With the Gaussian Process, it's about the variance of the
# coefficients instead of the mean. We assume a mean of 0 so
# we need to specify the covariance function.
from sklearn.datasets import load_boston
import numpy as np
boston = load_boston()
boston_X = boston.data
boston_y = boston.target
train_set = np.random.choice([True, False], len(boston_y), p=[.75, .25])
from sklearn.gaussian_process import GaussianProcess
# GaussianProcess, by default, uses a constant regression function
# and squared exponential correlation.
gp = GaussianProcess()
gp.fit(boston_X[train_set], boston_y[train_set])
# beta0: Regression Weight.
# corr: correlation function.
# regr: constant regression function.
# nugget: regularization parameter.
# normalize: boolean value to center and scale the features.
test_preds = gp.predict(boston_X[~train_set])
from matplotlib import pyplot as plt
f, ax = plt.subplots(figsize=(10,7), nrows=3)
f.tight_layout()
ax[0].plot(range(len(test_preds)), test_preds, label='predicted values')
ax[0].plot(range(len(test_preds)), boston_y[~train_set], label='Actual values')
ax[0].set_title('Predicted Vs Actual')
ax[0].legend(loc='best')
ax[1].plot(range(len(test_preds)), test_preds - boston_y[~train_set])
ax[1].set_title('Plotted Residuals')
ax[2].hist(test_preds - boston_y[~train_set])
ax[2].set_title("Histogram of Residuals")
# other options for corr function:
# absolute_exponential
# squared_exponential (default)
# generalized_exponential
# cubic
# linear
gp = GaussianProcess(regr='linear', theta0=5e-1)
gp.fit(boston_X[train_set], boston_y[train_set])
linear_preds = gp.predict(boston_X[~train_set])
f, ax = plt.subplots(figsize=(7,5))
f.tight_layout()
ax.hist(test_preds - boston_y[~train_set], label='Residuals Original', color='b', alpha=.5)
ax.hist(linear_preds - boston_y[~train_set],
label='Residuals Linear', color='r', alpha=.5)
ax.set_title('Residuals')
ax.legend(loc='best')
# note on above, the book's results looked better for the
# linear regression model but this shows that on our dataset
# the constant regression function performed better.
np.power(test_preds - boston_y[~train_set], 2).mean()
np.power(linear_preds - boston_y[~train_set], 2).mean()
test_preds, MSE = gp.predict(boston_X[~train_set], eval_MSE=True)
MSE[:5]
f, ax = plt.subplots(figsize=(7,5))
n = 20
rng = range(20)
ax.scatter(rng, test_preds[:n])
ax.errorbar(rng, test_preds[:n], yerr=1.96*np.sqrt(MSE[:n]))  # 95% band uses the standard deviation, i.e. sqrt of the MSE
ax.set_title('Predictions with Error Bars')
ax.set_xlim((-1, 21))
```
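Note that `sklearn.gaussian_process.GaussianProcess` was deprecated and removed in scikit-learn 0.20, and `load_boston` was removed in later releases as well. A rough modern sketch, not a drop-in replacement since the kernels and defaults differ, would use `GaussianProcessRegressor`:
```
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

# squared-exponential (RBF) kernel with a learned constant amplitude
gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gpr.fit(boston_X[train_set], boston_y[train_set])
# return_std gives the predictive standard deviation instead of eval_MSE
preds, std = gpr.predict(boston_X[~train_set], return_std=True)
```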
```
import math
import os
import nemo
from nemo.utils.lr_policies import WarmupAnnealing
import nemo_nlp
from nemo_nlp import NemoBertTokenizer, SentencePieceTokenizer
from nemo_nlp.callbacks.ner import \
eval_iter_callback, eval_epochs_done_callback
BATCHES_PER_STEP = 1
BATCH_SIZE = 32
CLASSIFICATION_DROPOUT = 0.1
DATA_DIR = "PATH TO WHERE YOU PUT CoNLL-2003 data"
MAX_SEQ_LENGTH = 128
NUM_EPOCHS = 3
LEARNING_RATE = 0.00005
LR_WARMUP_PROPORTION = 0.1
OPTIMIZER = "adam"
# Instantiate neural factory with supported backend
neural_factory = nemo.core.NeuralModuleFactory(
backend=nemo.core.Backend.PyTorch,
# If you're training with multiple GPUs, you should handle this value with
# something like argparse. See examples/nlp/ner.py for an example.
local_rank=None,
# If you're training with mixed precision, this should be set to mxprO1 or mxprO2.
# See https://nvidia.github.io/apex/amp.html#opt-levels for more details.
optimization_level=nemo.core.Optimization.mxprO0,
# If you're training with multiple GPUs, this should be set to
# nemo.core.DeviceType.AllGpu
placement=nemo.core.DeviceType.GPU)
# If you're using a standard BERT model, you should do it like this. To see the full
# list of BERT model names, check out nemo_nlp.huggingface.BERT.list_pretrained_models()
tokenizer = NemoBertTokenizer(pretrained_model="bert-base-cased")
bert_model = nemo_nlp.huggingface.BERT(
pretrained_model_name="bert-base-cased",
factory=neural_factory)
train_data_layer = nemo_nlp.BertNERDataLayer(
tokenizer=tokenizer,
path_to_data=os.path.join(DATA_DIR, "train.txt"),
max_seq_length=MAX_SEQ_LENGTH,
batch_size=BATCH_SIZE,
factory=neural_factory)
tag_ids = train_data_layer.dataset.tag_ids
ner_loss = nemo_nlp.TokenClassificationLoss(
d_model=bert_model.bert.config.hidden_size,
num_labels=len(tag_ids),
dropout=CLASSIFICATION_DROPOUT,
factory=neural_factory)
input_ids, input_type_ids, input_mask, labels, _ = train_data_layer()
hidden_states = bert_model(
input_ids=input_ids,
token_type_ids=input_type_ids,
attention_mask=input_mask)
train_loss, train_logits = ner_loss(
hidden_states=hidden_states,
labels=labels,
input_mask=input_mask)
eval_data_layer = nemo_nlp.BertNERDataLayer(
tokenizer=tokenizer,
path_to_data=os.path.join(DATA_DIR, "dev.txt"),
max_seq_length=MAX_SEQ_LENGTH,
batch_size=BATCH_SIZE,
factory=neural_factory)
input_ids, input_type_ids, eval_input_mask, \
eval_labels, eval_seq_ids = eval_data_layer()
hidden_states = bert_model(
input_ids=input_ids,
token_type_ids=input_type_ids,
attention_mask=eval_input_mask)
eval_loss, eval_logits = ner_loss(
hidden_states=hidden_states,
labels=eval_labels,
input_mask=eval_input_mask)
callback_train = nemo.core.SimpleLossLoggerCallback(
tensors=[train_loss],
print_func=lambda x: print("Loss: {:.3f}".format(x[0].item())))
train_data_size = len(train_data_layer)
# If you're training on multiple GPUs, this should be
# train_data_size / (batch_size * batches_per_step * num_gpus)
steps_per_epoch = int(train_data_size / (BATCHES_PER_STEP * BATCH_SIZE))
callback_eval = nemo.core.EvaluatorCallback(
eval_tensors=[eval_logits, eval_seq_ids],
user_iter_callback=lambda x, y: eval_iter_callback(
x, y, eval_data_layer, tag_ids),
user_epochs_done_callback=lambda x: eval_epochs_done_callback(
x, tag_ids, "output.txt"),
eval_step=steps_per_epoch)
lr_policy = WarmupAnnealing(NUM_EPOCHS * steps_per_epoch,
warmup_ratio=LR_WARMUP_PROPORTION)
optimizer = neural_factory.get_trainer()
optimizer.train(
tensors_to_optimize=[train_loss],
callbacks=[callback_train, callback_eval],
lr_policy=lr_policy,
batches_per_step=BATCHES_PER_STEP,
optimizer=OPTIMIZER,
optimization_params={
"num_epochs": NUM_EPOCHS,
"lr": LEARNING_RATE
})
```
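One small detail: `math` is imported above but never used, and the integer division in `steps_per_epoch` silently ignores any final partial batch. If that matters for your run, a common variant (an assumption on my part, not part of the original recipe) rounds the step count up instead:

```
import math

# Round up so the last, possibly smaller, batch still counts as a training step.
# train_data_size, BATCHES_PER_STEP and BATCH_SIZE come from the cell above.
steps_per_epoch = math.ceil(train_data_size / (BATCHES_PER_STEP * BATCH_SIZE))
```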
<h1><center>PISAP: Python Interactive Sparse Astronomical Data Analysis Packages</center></h1>
<h2><center>Astronomical/Neuroimaging denoising tutorial</center></h2>
<div style="text-align: center">Credit: </div>
Pisap is a Python package related to sparsity and its applications in
astronomical and medical data analysis. The package proposes sparse denoising methods that are reusable in various contexts.
For more information, please visit the project page on GitHub: https://github.com/neurospin/pisap.<br><br>
<h3>First check</h3>
In order to test if the 'pisap' package is installed on your machine, you can check the package version:
```
import pisap
print(pisap.__version__)
```
<h2>The Condat-Vu primal dual sparse denoising with reweightings</h2>
The package provides a flexible implementation of the Condat-Vu denoising algorithm that can be reused in various contexts. In this tutorial we apply this denoising method to two toy datasets, one astronomical and one neuroimaging.
<h3>Astronomical denoising</h3>
First, load the toy dataset and the associated sampling mask.
```
import scipy.fftpack as pfft
import numpy as np
import matplotlib.pylab as plt
%matplotlib inline
from pisap.data import get_sample_data
from pisap.base.utils import convert_mask_to_locations
from pisap.numerics.noise import add_noise
from pisap.numerics.reconstruct import sparse_rec_fista
from pisap.numerics.gradient import Grad2DSynthesis
from pisap.numerics.linear import Wavelet
from pisap.numerics.fourier import FFT
from pisap.numerics.cost import snr, nrmse
fits_data_path = get_sample_data("astro-fits")
image = pisap.io.load(fits_data_path)
image.show()
mask_data_path = get_sample_data("astro-mask")
mask = pisap.io.load(mask_data_path)
mask.show()
```
Now generate a noisy synthetic observation from the previous toy dataset and sampling mask.
```
dirty_data = add_noise(image.data, sigma=0.01, noise_type="gauss")
dirty_image = pisap.Image(data=dirty_data)
dirty_image.show()
mask_shift = pfft.ifftshift(mask.data)
localization = convert_mask_to_locations(mask_shift)
dirty_fft = mask_shift * pfft.fft2(dirty_image.data)
```
Now run the denoising algorithm with a custom gradient and linear operator, using a positivity constraint.
```
metrics = {'snr':{'metric':snr,
'mapping': {'x_new': 'test', 'y_new':None},
'cst_kwargs':{'ref':image.data},
'early_stopping': True,
},
'nrmse':{'metric':nrmse,
'mapping': {'x_new': 'test', 'y_new':None},
'cst_kwargs':{'ref':image.data},
'early_stopping': False,
},
}
params = {
'data':dirty_fft,
'gradient_cls':Grad2DSynthesis,
'gradient_kwargs':{"ft_cls": {FFT: {"samples_locations": localization,
"img_size": dirty_fft.shape[0]}}},
'linear_cls':Wavelet,
'linear_kwargs':{"nb_scale": 3, "wavelet": "MallatWaveletTransform79Filters"},
'max_nb_of_iter':100,
'mu':2.0e-2,
'metrics':metrics,
'verbose':1,
}
x, y, saved_metrics = sparse_rec_fista(**params)
plt.figure()
plt.imshow(mask, cmap='gray')
plt.title("Mask")
plt.figure()
plt.imshow(dirty_image.data, interpolation="nearest", cmap="gist_stern")
plt.colorbar()
plt.title("Dirty image")
plt.figure()
plt.imshow(np.abs(x.data), interpolation="nearest", cmap="gist_stern")
plt.colorbar()
plt.title("Analytic sparse reconstruction via Condat-Vu method")
metric = saved_metrics['snr']
fig = plt.figure()
plt.grid()
plt.plot(metric['time'], metric['values'])
plt.xlabel("time (s)")
plt.ylabel("SNR")
plt.title("Evo. SNR per time")
metric = saved_metrics['nrmse']
fig = plt.figure()
plt.grid()
plt.plot(metric['time'], metric['values'])
plt.xlabel("time (s)")
plt.ylabel("NRMSE")
plt.title("Evo. NRMSE per time")
plt.show()
```
<h3>Neuroimaging denoising</h3>
First, load the toy dataset and the associated sampling mask.
```
fits_data_path = get_sample_data("mri-slice-nifti")
image = pisap.io.load(fits_data_path)
image.show()
mask_data_path = get_sample_data("mri-mask")
mask = pisap.io.load(mask_data_path)
mask.show()
mask_shift = pfft.ifftshift(mask.data)
```
Now generate a noisy synthetic observation from the previous toy dataset and sampling mask.
```
dirty_data = add_noise(image.data, sigma=0.01, noise_type="gauss")
dirty_image = pisap.Image(data=dirty_data)
dirty_image.show()
localization = convert_mask_to_locations(mask_shift)
dirty_fft = mask_shift * pfft.fft2(dirty_image.data)
```
Now run the denoising algorithm with a custom gradient and linear operator, using a positivity constraint.
```
metrics = {'snr':{'metric':snr,
'mapping': {'x_new': 'test', 'y_new':None},
'cst_kwargs':{'ref':image.data},
'early_stopping': True,
},
'nrmse':{'metric':nrmse,
'mapping': {'x_new': 'test', 'y_new':None},
'cst_kwargs':{'ref':image.data},
'early_stopping': False,
},
}
params = {
'data':dirty_fft,
'gradient_cls':Grad2DSynthesis,
'gradient_kwargs':{"ft_cls": {FFT: {"samples_locations": localization,
"img_size": dirty_fft.shape[0]}}},
'linear_cls':Wavelet,
'linear_kwargs':{"nb_scale":5, "wavelet": "MallatWaveletTransform79Filters"},
'max_nb_of_iter':100,
'mu':4.5e-2,
'metrics':metrics,
'verbose':1,
}
x, y, saved_metrics = sparse_rec_fista(**params)
plt.figure()
plt.imshow(np.real(mask), cmap='gray')
plt.title("Mask")
plt.figure()
plt.imshow(dirty_image.data, interpolation="nearest", cmap="gist_stern")
plt.colorbar()
plt.title("Dirty image")
plt.figure()
plt.imshow(np.abs(x.data), interpolation="nearest", cmap="gist_stern")
plt.colorbar()
plt.title("Analytic sparse reconstruction via Condat-Vu method")
metric = saved_metrics['snr']
fig = plt.figure()
plt.grid()
plt.plot(metric['time'], metric['values'])
plt.xlabel("time (s)")
plt.ylabel("SNR")
plt.title("Evo. SNR per time")
metric = saved_metrics['nrmse']
fig = plt.figure()
plt.grid()
plt.plot(metric['time'], metric['values'])
plt.xlabel("time (s)")
plt.ylabel("NRMSE")
plt.title("Evo. NRMSE per time")
plt.show()
```
# Embeddings
https://www.youtube.com/watch?v=wSXGlvTR9UM
```
import pandas as pd
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation, Embedding, Merge, Flatten
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv('data/cmc.data',header=None,names=['Age','Education','H_education',
'num_child','Religion', 'Employ',
'H_occupation','living_standard',
'Media_exposure','contraceptive'])
df.head()
df.isnull().any()
df.Education.hist()
df.shape
df.contraceptive.hist()
df.dtypes
def one_hot_encoding(idx):
y = np.zeros((len(idx),max(idx)+1))
y[np.arange(len(idx)), idx] = 1
return y
scaler = StandardScaler()
df[['Age','num_child']] = scaler.fit_transform(df[['Age','num_child']])
x = df[['Age','num_child','Employ','Media_exposure']].values
y = one_hot_encoding(df.contraceptive.values-1)
liv_cats = df.living_standard.max()
edu_cats = df.Education.max()
liv = df.living_standard.values - 1
liv_one_hot = one_hot_encoding(liv)
edu = df.Education.values - 1
edu_one_hot = one_hot_encoding(edu)
train_x, test_x, train_liv, \
test_liv, train_edu, test_edu, train_y, test_y = train_test_split(x,liv_one_hot,edu_one_hot,y,test_size=0.1, random_state=1)
train_x = np.hstack([train_x, train_edu, train_liv])
test_x = np.hstack([test_x, test_edu, test_liv])
train_x.shape
train_edu.shape
train_liv.shape
train_x.shape
model = Sequential()
model.add(Dense(input_dim=train_x.shape[1],output_dim=12))
model.add(Activation('relu'))
model.add(Dense(output_dim=3))
model.add(Activation('softmax'))
model.compile(optimizer='adagrad', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(train_x, train_y, nb_epoch=100, verbose=2)
model.summary()
for w in model.get_weights():
print(w.shape)
model.evaluate(test_x, test_y, batch_size=256)
model.predict(test_x[:10])
liv
train_x, test_x, train_liv, \
test_liv, train_edu, test_edu, train_y, test_y = train_test_split(x,liv,edu,y,test_size=0.1, random_state=1)
# Input layer for religion
encoder_liv = Sequential()
encoder_liv.add(Embedding(liv_cats,4,input_length=1))
encoder_liv.add(Flatten())
# Input layer for religion
encoder_edu = Sequential()
encoder_edu.add(Embedding(edu_cats,4,input_length=1))
encoder_edu.add(Flatten())
# Input layer for triggers(x_b)
dense_x = Sequential()
dense_x.add(Dense(4, input_dim=x.shape[1]))
model = Sequential()
model.add(Merge([encoder_liv, encoder_edu, dense_x], mode='concat'))
# model.add(Activation('relu'))
model.add(Dense(output_dim=12))
model.add(Activation('relu'))
model.add(Dense(output_dim=3))
model.add(Activation('softmax'))
model.compile(optimizer='adagrad', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit([train_liv[:,None], train_edu[:,None], train_x], train_y, nb_epoch=100, verbose=2)
dense_x.summary()
encoder_liv.summary()
model.summary()
for w in model.get_weights():
print(w.shape)
w
a = model.get_weights()
a
model.evaluate([test_liv[:,None], test_edu[:,None], test_x],test_y, batch_size=256)
p = model.predict([test_liv[:,None], test_edu[:,None], test_x], batch_size=256)
p[:5]
model.summary()
model = Sequential()
model.add(Dense(4, input_dim=train_x.shape[1]))
model.add(Activation('relu'))
model.add(Dense(output_dim=3))
model.add(Activation('softmax'))
model.compile(optimizer='adagrad', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(train_x, train_y, nb_epoch=100)
model.evaluate(test_x,test_y,batch_size=256)
model.fit?
```
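The `Merge` layer and the `nb_epoch` / `output_dim` keyword arguments above belong to Keras 1.x. A rough sketch of the same two-embedding architecture with the modern functional API (assuming TensorFlow 2.x Keras; the literal cardinality 4 stands in for `liv_cats` and `edu_cats` from the cells above) could look like this:

```
from tensorflow.keras import layers, Model

# Three inputs: two integer-coded categoricals and the dense numeric block.
liv_in = layers.Input(shape=(1,), name="living_standard")
edu_in = layers.Input(shape=(1,), name="education")
num_in = layers.Input(shape=(4,), name="numeric_features")

# 4-dimensional embeddings, mirroring Embedding(liv_cats, 4) above.
liv_emb = layers.Flatten()(layers.Embedding(input_dim=4, output_dim=4)(liv_in))
edu_emb = layers.Flatten()(layers.Embedding(input_dim=4, output_dim=4)(edu_in))
num_dense = layers.Dense(4)(num_in)

# Concatenate replaces the removed Merge(..., mode='concat') layer.
x = layers.Concatenate()([liv_emb, edu_emb, num_dense])
x = layers.Dense(12, activation="relu")(x)
out = layers.Dense(3, activation="softmax")(x)

model = Model(inputs=[liv_in, edu_in, num_in], outputs=out)
model.compile(optimizer="adagrad", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit([train_liv[:, None], train_edu[:, None], train_x], train_y, epochs=100)
```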
```
import numpy as np
import pandas as pd
import catboost as cbt
import lightgbm as lgb
import time
import gc
from tqdm import tqdm
from sklearn.metrics import roc_auc_score,log_loss
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import LabelEncoder,MinMaxScaler
from itertools import combinations,permutations
import warnings
warnings.filterwarnings('ignore')
from gensim.models import Word2Vec
from sklearn.decomposition import PCA
def reduce_mem_usage(data):
    '''
    Downcast each numeric column to the smallest dtype that can hold its
    observed value range, to reduce memory usage.
    Note: do not use this before saving to feather, because feather does not
    support the float16 dtype.
    data: input dataframe
    return: the optimized dataframe
    '''
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
start_mem = data.memory_usage().sum() / 1024**2
for col in tqdm(data.columns):
col_type = data[col].dtypes
if col_type in numerics:
c_min = data[col].min()
c_max = data[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
data[col] = data[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
data[col] = data[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
data[col] = data[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
data[col] = data[col].astype(np.int64)
else:
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
                    # float16 is skipped here because feather does not support it
#data[col] = data[col].astype(np.float16)
pass
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
data[col] = data[col].astype(np.float32)
else:
data[col] = data[col].astype(np.float64)
end_mem = data.memory_usage().sum() / 1024**2
print('Mem. usage decreased to {:5.2f} Mb ({:.1f}% reduction)'.format(end_mem, 100 * (start_mem - end_mem) / start_mem))
return data
train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
label = pd.read_csv('train_label.csv')
train['label'] = label.label
data = pd.concat([train,test],axis=0,sort=False).reset_index(drop=True)
data['date'] =pd.to_datetime(data['date'])
data['hour'] = data['date'].dt.hour
data['day'] = data['date'].dt.day
A_ft = ['A'+str(i) for i in range(1,4)]
B_ft = ['B'+str(i) for i in range(1,4)]
C_ft = ['C'+str(i) for i in range(1,4)]
D_ft = ['D1','D2']
cat_list = A_ft+B_ft+C_ft+['hour','day']+['E'+str(i) for i in [1,6,14,20,27]]+['E'+str(i) for i in [4,11,12,24,26,28]]+['E'+str(i) for i in [8,15,18,25]]+['E'+str(i) for i in [23,29]]
numerical_cols = [col for col in data.columns if col not in ['ID','label','date']+cat_list+['D1','D2']]
data[numerical_cols] = np.exp(data[numerical_cols])
data['mean_numerical1'] = np.mean(data[numerical_cols],axis=1)
data['std_numerical1'] = np.std(data[numerical_cols],axis=1)
data['min_numerical1'] = np.min(data[numerical_cols],axis=1)
data['max_numerical1'] = np.max(data[numerical_cols],axis=1)
for cb in [('E2','E7'),('E9','E17'),('E5','E9')]:
data[cb[0]+'_plus_'+cb[1]] = data[cb[0]] + data[cb[1]]
for cb in [('E2','E7'),('E9','E17'),('E19','E9'),('E7','E9')]:
    data[cb[0]+'_mul_'+cb[1]] = data[cb[0]] * data[cb[1]]
for cb in [('E17','E9'),('E9','E17'),('E5','E9'),('E7','E2'),('E9','E2')]:
    data[cb[0]+'_divide_'+cb[1]] = data[cb[0]] / data[cb[1]]
count_feature = []
for col in tqdm(A_ft+B_ft+C_ft+['E'+str(i) for i in [1,6,14,20,27]]):
data[col + "_count"] = data.groupby([col])[col].transform('count')
count_feature.append(col + "_count")
def vec(data,col1,col2):
dataword2vec2 = pd.concat((data[col1],data[col2]), axis=1)
dataword2vec3=np.array(dataword2vec2.astype(str))
    dataword2vec3=dataword2vec3.tolist() # Word2Vec needs list-of-lists input to train the word vectors
model = Word2Vec(dataword2vec3, size=200,iter=15, hs=1, min_count=1, window=5,workers=6)
ws1=np.array(dataword2vec2[col1].astype('str'))
ws2=np.array(dataword2vec2[col2].astype('str'))
ws1=ws1.tolist()
ws2=ws2.tolist()
word2vecsim1=[]
for i in tqdm(range(len(data))):
ws3=[ws1[i]]
ws4=[ws2[i]]
        word2vecsim2=model.wv.n_similarity(ws3,ws4) # similarity between the two columns' values
word2vecsim1.append(word2vecsim2)
data[col1+col2+'_vec'] = np.array(word2vecsim1)
for cols in combinations(A_ft,2):
vec(data,cols[0],cols[1])
for cols in combinations(B_ft,2):
vec(data,cols[0],cols[1])
for cols in combinations(C_ft,2):
vec(data,cols[0],cols[1])
numerical_cols = [col for col in data.columns if col not in ['ID','label','date']+cat_list+['D1','D2']]
data['mean_numerical2'] = np.mean(data[numerical_cols],axis=1)
data['std_numerical2'] = np.std(data[numerical_cols],axis=1)
data['min_numerical2'] = np.min(data[numerical_cols],axis=1)
data['max_numerical2'] = np.max(data[numerical_cols],axis=1)
useless_ft = ['E22','E3','E19']
feature_name = list(set([col for col in data.columns if col not in useless_ft+['ID','label','date']]))
#cat_list = A_ft+B_ft+['hour','day']+['E1','E14']
print(feature_name)
print(len(feature_name))
print(cat_list)
print(len(cat_list))
data[cat_list] = data[cat_list].astype(int)
%time data = reduce_mem_usage(data)
tr_index = ~data['label'].isnull()
X_train = data.loc[tr_index,:].reset_index(drop=True)
y = data.loc[tr_index,:]['label'].reset_index(drop=True).astype(int)
X_test = data[~tr_index].reset_index(drop=True)
print(X_train.shape,X_test.shape)
def run_cbt_cv(train_X,train_Y,test_X,params,feature_name=None,split=5,seed=20191031,cat_list=None,use_best=True,iterations=10000):
val_results = []
models_list = []
best_iterations = []
train_pred = np.zeros(train_X.shape[0])
test_pred = np.zeros(test_X.shape[0])
seeds=range(seed,seed+split)
learning_rate = params['learning_rate']
depth = params['max_depth']
reg_lambda = params['reg_lambda']
bagging_temperature = params['bagging_temperature']
random_strength = params['random_strength']
if feature_name == None:
feature_name = [col for col in train_X.columns if col not in ['ID','label','date']]
print('Using features:',feature_name)
print(len(feature_name))
train_val_spliter = StratifiedKFold(n_splits=split, random_state=seeds[0], shuffle=True)
for index, (train_index, test_index) in enumerate(train_val_spliter.split(train_X, train_Y)):
print('fold:',index+1)
val_result = []
train_x, val_x, train_y, val_y = train_X[feature_name].iloc[train_index], train_X[feature_name].iloc[test_index], train_Y.iloc[train_index], train_Y.iloc[test_index]
cbt_model = cbt.CatBoostClassifier(iterations=iterations,learning_rate=learning_rate,max_depth=depth,verbose=100,
early_stopping_rounds=700,task_type='GPU',eval_metric='AUC',loss_function='Logloss',
cat_features=cat_list,random_state=seeds[index],reg_lambda=reg_lambda,use_best_model=use_best)
cbt_model.fit(train_x[feature_name], train_y,eval_set=(val_x[feature_name],val_y))
gc.collect()
train_pred[test_index] += cbt_model.predict_proba(val_x)[:,1]
fold_test_pred = cbt_model.predict_proba(X_test[feature_name])[:,1]
test_pred += fold_test_pred/split
val_result.append(roc_auc_score(val_y, train_pred[test_index]))
print('AUC: ',val_result[-1])
val_result.append(log_loss(val_y, train_pred[test_index]))
print('log_loss: ',val_result[-1])
best_iterations.append(cbt_model.get_best_iteration())
val_results.append(val_result)
del cbt_model
gc.collect()
val_results = np.array(val_results)
print('cv completed')
print('mean best iteration: ',np.mean(best_iterations))
print('std best iteration: ',np.std(best_iterations))
print('oof AUC: ',roc_auc_score(train_Y,train_pred))
print('mean AUC: ',np.mean(val_results[:,0]))
print('std AUC: ',np.std(val_results[:,0]))
print('oof log_loss: ',log_loss(train_Y,train_pred))
print('mean log_loss: ',np.mean(val_results[:,1]))
print('std log_loss: ',np.std(val_results[:,1]))
return train_pred,test_pred
default_params = {'bagging_temperature': 1.0,
'learning_rate': 0.03,
'max_depth': 7,
'random_strength': 1.0,
'reg_lambda': 6.0
}
prediction_df = pd.DataFrame()
n_times = 12
oof_df = pd.DataFrame()
for i in [10*i+20191091 for i in range(n_times)]:
oof,pred = run_cbt_cv(X_train,y,X_test,params=default_params,feature_name=feature_name,seed=i,iterations=10000,use_best=True,split=5,cat_list=cat_list)
gc.collect()
prediction_temp = pd.DataFrame()
prediction_temp['cbt_'+str(i)] = pred
prediction_df = pd.concat([prediction_df,prediction_temp],axis=1)
oof_temp = pd.DataFrame()
oof_temp['cbt_'+str(i)] = oof
oof_df = pd.concat([oof_df,oof_temp],axis=1)
oof_df.to_csv('oof_cbt_12.csv',index=False)
prediction_df.to_csv('test_cbt_12.csv',index=False)
```
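The loop above writes one out-of-fold column and one test-prediction column per seed, but never blends them. If a single submission score is needed, a simple average of the seed columns (an assumed final step, not shown in the original notebook) would do:

```
# Average the per-seed test predictions into one blended column.
blend = prediction_df.mean(axis=1)
blend.to_frame(name='pred').to_csv('test_cbt_12_blend.csv', index=False)
```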
```
# Useful for debugging
%load_ext autoreload
%autoreload 2
%pylab --no-import-all inline
%config InlineBackend.figure_format = 'retina'
```
# LCLS Classic model
```
from lcls_live.bmad import LCLSTaoModel
from lcls_live.epics import epics_proxy
import os
# Make sure this exists
assert 'LCLS_CLASSIC_LATTICE' in os.environ
```
# Get snapshot
```
# Cached EPICS pv data
SNAPSHOT = 'data/epics_snapshot_2018-03-06T14:21:29.000000-08:00.json'
epics = epics_proxy('data/epics_snapshot_2018-03-06T11:22:45.000000-08:00.json', verbose=True)
#epics = epics_proxy(SNAPSHOT, verbose=True)
M = LCLSTaoModel('lcls_classic', epics = epics ,verbose=True, ploton=True)
print(M)
%%tao
place floor beta_compare
set lattice base = model
```
# Archiver restore
```
# Optional.
# For archiver, if off-site
# Open an SSH tunnel in a terminal like:
# ssh -D 8080 cmayes@rhel6-64.slac.stanford.edu
# And then set:
if False:
os.environ['http_proxy']='socks5h://localhost:8080'
os.environ['HTTPS_PROXY']='socks5h://localhost:8080'
os.environ['ALL_PROXY']='socks5h://localhost:8080'
# Restore from some other time
#M.archiver_restore('2018-11-06T11:22:45.000000-08:00')
M.archiver_restore('2018-03-06T14:21:29.000000-08:00')
```
## Track particles with CSR
```
%%tao
set beam_init beam_track_end = UNDSTART
set csr_param n_bin = 40
snparticle 10000
set bmad_com csr_and_space_charge_on = T
set csr_param ds_track_step = 0.01
set ele BC1BEG:BC1END CSR_METHOD = 1_dim
set ele BC2BEG:BC2END CSR_METHOD = 1_dim
beamon
beamoff
```
# Plot
```
from pmd_beamphysics import ParticleGroup
P = ParticleGroup(data=M.bunch_data('BC2FIN'))
Palive = P.where(P['status'] == 1)
Pdead = P.where(P['status'] != 1)
Palive.plot('delta_t', 'delta_pz', bins=100)
if len(Pdead) >0:
print(Pdead)
```
# Functional usage
```
from lcls_live.bmad.evaluate import run_LCLSTao, evaluate_LCLSTao
settings00 = {
# 'ele:O_BC1:angle_deg':-5.12345,
# 'ele:O_BC2:angle_deg':-2.0,
# 'ele:O_L1:phase_deg':-25.1,
# 'ele:O_L2:phase_deg':-41.4,
# 'ele:O_L3:phase_deg':0.0,
# 'ele:O_L1_fudge:f': 1.0,
# 'ele:O_L2_fudge:f': 1.0,
# 'ele:O_L3_fudge:f': 1.0,
'ele:CE11:x1_limit': 2.5e-3, # Basic 'horn cutting'
'ele:CE11:x2_limit': 4.0e-3,
'csr_param:n_bin':40,
'csr_param:ds_track_step':0.01,
'beam_init:n_particle': 10000,
'beam:beam_saved_at':'CE11, UNDSTART',
'beam:beam_track_end':'UNDSTART',
'bmad_com:csr_and_space_charge_on':True,
'ele:BC1BEG:BC1END:CSR_METHOD': '1_Dim',
'ele:BC2BEG:BC2END:CSR_METHOD': '1_Dim'
}
M = run_LCLSTao(settings=settings00, model_name='lcls_classic', verbose=True)
```
Because Tao runs as a library in global space, you can patch in commands:
```
%%tao
beamoff
set global plot_on = True
place floor zphase
szpz undstart
x-s floor -.055 -0.02
sc
# This will run the model, and return a dict with values from the following expressions
expressions = [
'lat::orbit.x[end]',
'beam::n_particle_loss[end]'
]
res = evaluate_LCLSTao(settings=settings00,
# epics_json='data/epics_snapshot_2018-03-06T11:22:45.000000-08:00.json',
expressions=expressions,
beam_archive_path = '.'
)
res
# Restore something from the archiver
settings00 = {
'csr_param:n_bin':40,
'csr_param:ds_track_step':0.01,
'beam_init:n_particle': 10000,
'beam:beam_saved_at':'CE11, UNDSTART',
'beam:beam_track_end':'UNDSTART',
'bmad_com:csr_and_space_charge_on':True,
'ele:BC1BEG:BC1END:CSR_METHOD': '1_Dim',
'ele:BC2BEG:BC2END:CSR_METHOD': '1_Dim'
}
res2 = evaluate_LCLSTao(settings=settings00,
epics_json='data/epics_snapshot_2018-03-06T11:22:45.000000-08:00.json',
expressions=expressions,
beam_archive_path = '.'
)
res2
```
# Plot
```
from pmd_beamphysics import particle_paths
import h5py
afile = res['beam_archive']
h5 = h5py.File(afile, 'r')
ppaths = particle_paths(h5)
ppaths
P = ParticleGroup(h5[ppaths[-1]])
Palive = P.where(P['status'] == 1)
Pdead = P.where(P['status'] != 1)
Palive.plot('delta_t', 'delta_pz', bins=100)
# These particles were lost (probably due to collimation)
Pdead
# Cleanup
os.remove(res['beam_archive'])
os.remove(res2['beam_archive'])
res2
```
```
# A notebook to attempt to rehabilitate distances as meaningful features:
import pybiomart
import os
import pickle
import pandas as pd
import seaborn as sns
import pybedtools
import pybedtools.featurefuncs as featurefuncs
import umap
import numpy as np
from sklearn.preprocessing import maxabs_scale
activity_var_df = pd.read_pickle("data/activity_features.pkl")
ctcf_var_df = pd.read_pickle("data/iap_variances.pkl")
tissue_var_df = pd.read_pickle("data/activity_tissue_variances.pkl")
concat_list = [activity_var_df.sort_index(),
ctcf_var_df.iloc[:, 7:].sort_index(),
tissue_var_df.iloc[:, 7:].sort_index()]
total_df = pd.concat(concat_list, axis=1)
total_df.shape
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rcParams['figure.figsize'] = [7, 7]
sns.heatmap(total_df.iloc[:, 8:], vmax=5)
total_df["integer_encodings"] = total_df["val_result"].copy()
total_df["val_result"] = total_df["val_result"].replace("-1", "Untested")
print(total_df.shape)
total_df.loc[:, "integer_encodings"] = \
total_df.loc[:, "integer_encodings"].replace("Untested", -1)
total_df.loc[:, "integer_encodings"] = \
total_df.loc[:, "integer_encodings"].replace("True ME", 1)
total_df.loc[:, "integer_encodings"] = \
total_df.loc[:, "integer_encodings"].replace("False-positive", 2)
total_df.loc[:, "integer_encodings"] = \
total_df.loc[:, "integer_encodings"].replace("Tissue-specific", 3)
total_df.loc[:, "integer_encodings"] = \
total_df.loc[:, "integer_encodings"].replace(3, 1)
print(total_df.shape)
cols = []
cols = total_df.columns.tolist()
cols = cols[:7] + [cols[-1]] + cols[7:-1]
total_df = total_df.loc[:, cols]
print(total_df.shape)
# Generating validation dataframe for model training:
val_df = total_df[total_df.val_result != "Untested"].copy()
val_df.loc[:, "integer_encodings"] = \
val_df.loc[:, "integer_encodings"].replace(3, 2)
from sklearn.feature_selection import VarianceThreshold, SelectKBest, mutual_info_classif
from sklearn.preprocessing import MaxAbsScaler, scale
# Normalising variances & counts:
transformer = MaxAbsScaler().fit(total_df.iloc[:, 8:])
total_abs = transformer.transform(total_df.iloc[:, 8:])
val_abs = transformer.transform(val_df.iloc[:, 8:])
# Using mutualinfo to extract relevant features:
def mi_kbest_selector(data, labels, k=60):
selector = SelectKBest(mutual_info_classif, k)
selector.fit(data, labels)
return data[:, selector.get_support(indices=True)], selector.get_support(indices=True)
k_best_df, support = mi_kbest_selector(val_abs, val_df["integer_encodings"])
# Extracting high variance features:
kbest_total = total_abs[:, support]
kbest_val = val_abs[:, support]
kbest_total.shape
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=5, random_state=0).fit(kbest_total)
total_category_labels = pd.Series(kmeans.labels_)
total_category_labels.index = total_df["element_id"].astype(int).to_list()
sorted_total_df = total_df.iloc[:, 0:7].copy(deep=True)
sorted_total_df["cluster_assignments"] = total_category_labels
sorted_total_df = sorted_total_df.sort_values(by=["cluster_assignments"])
hue_order = ["A", "B", "C", "D", "E"]
kbest_total = pd.DataFrame(kbest_total)
kbest_total.index = total_df["element_id"].astype(int).to_list()
matplotlib.rcParams['figure.figsize'] = [40, 10]
kbest_total["cluster_assignments"] = sorted_total_df["cluster_assignments"]
kbest_total = kbest_total.sort_values(by=["cluster_assignments"])
ax = sns.heatmap(kbest_total.iloc[:, :-1], vmin=0)
pyroval_bed = pybedtools.BedTool("data/IAP_validation.July2019.stranded.with_IDs.bed")
names = ["chrom", "start", "end", "strand", "gene",
"blueprint", "ear", "b_cell", "val_status", "element_id"]
pyroval_df = pyroval_bed.to_dataframe(names=names)
pyroval_df = pyroval_df[pyroval_df["ear"].notnull()]
pyroval_df = pyroval_df[pyroval_df["element_id"] != "."]
pyroval_df = pyroval_df[pyroval_df["chrom"] != "chrX"]
pyroval_df.index = pyroval_df["element_id"].astype(int).to_list()
pyroval_df
sorted_total_df["ear"] = pyroval_df["ear"]
blueprint_categories = sorted_total_df[sorted_total_df.ear.notnull()]
blueprint_categories["cluster_assignments"].value_counts(sort=False).sort_index()
matplotlib.rcParams['figure.figsize'] = [10, 6]
ax = sns.swarmplot(x="cluster_assignments", y="ear", hue="val_result", data=blueprint_categories)
ax.set(xticklabels=["Variable CTCF", "Low Feature Density",
"Variably Active Promoter", "Invariable CTCF",
"Variably Active Enhancer"])
ax.set_xticklabels(ax.get_xticklabels(), rotation=20, fontsize=12, horizontalalignment='right')
ax.set_xlabel("", fontsize=12)
ax.set_ylabel("Range of methylation across individuals (%)", fontsize=12)
from yellowbrick.cluster.elbow import kelbow_visualizer
matplotlib.rcParams['figure.figsize'] = [15, 10]
# Elbow analysis on the same scaled k-best features (drop the appended cluster_assignments column)
kelbow_visualizer(KMeans(random_state=0), kbest_total.iloc[:, :-1], k=(2,15))
```
**October 13, 2020**
**1:00-1:40**
## Optimized Python for Working with Data and API's
_Dolsy Smith_
_George Washington University_
dsmith@gwu.edu
Thanks to Alma’s API’s, we can create custom applications, automate workflows, and perform batch operations not possible with Jobs and Sets. However, using the API’s on large amounts of data can be slow. In this session, we will explore tools and strategies available in Python that can make these tasks and applications more efficient.
### Introduction
#### Why I Use Python
- It's a scripting language with user-friendly syntax, making it relatively easy to learn.
- It's a high-level language with some great features (like list comprehensions) that make it useful for rapid prototyping.
- It has a huge open-source ecosystem, with third-party Python libraries available for almost every task you might imagine.
- For an interpreted language, it's fairly performant.
#### Why "Optimized" Python?
Being interpreted, native Python cannot achieve the efficiency of compiled languages like Java and C. Several robust approaches exist to mitigate this limitation:
- Use high-performance libraries that delegate repetitive operations to a lower-level language under the hood.
- Invoke multiple threads and/or processors.
- Take advantage of Python's support for asynchronous/concurrent I/O.
We will look at the first and third of these approaches today.
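To make the first approach concrete, here's a toy comparison (not part of the workshop code): the same arithmetic is much cheaper when a library like NumPy/`pandas` pushes the loop down into compiled code.
```
# Toy illustration: interpreted loop vs. vectorized operation
import numpy as np
import pandas as pd

values = pd.Series(np.random.rand(1_000_000))

# Pure Python: the interpreter handles each element one at a time
total_slow = sum(v * 2 for v in values)

# Vectorized: the multiplication and the sum run in compiled code under the hood
total_fast = (values * 2).sum()
```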
#### Use Cases
The following are a handful of ways I've used these approaches in my work with Alma and its API's.
##### Discharging over 60,000 items using Alma's scan-In API.
With concurrent requests, this took a [Python script](https://github.com/gwu-libraries/bulk-loans-discharge-alma) roughly 45 minutes to complete on my laptop.
##### Looking up Alma users in real-time for a LibCal integration.
Needing to adhere to strict constraints on usage of our library's physical spaces during the pandemic, we require patrons to book an appointment in LibCal. [Our application](https://github.com/gwu-libraries/libcal_pp_integration) retrieves appointments from the LibCal API, enriches them with user data from Alma, and loads them into our visitor-management software. Concurrent requests allow us to retrieve data for multiple users from Alma at the same time -- an important piece of efficiency, since our app is fetching appointments at 5-minute intervals.
##### Testing legacy portfolio URL's
After migration, we found ourselves with several thousand portfolios that had been created from Voyager catalog records (as opposed to the activations from our ERM). Being unattached to collections, these would have to be analyzed one by one. But with concurrent requests, I was able to test the URLs quickly and identify those that did not return a valid HTTP response so that they could be deactivated.
##### Migration cleanup: merging and munging
After our migration, we had (and still have) lots of cleanup to do. Frequently, this involves comparing Alma Analytics reports with data from other sources, including our legacy Voyager database. With Python's `pandas` module, I can filter, clean, re-shape, and merge large datasets far more efficiently than in Excel. Plus, `pandas` supports a variety of input and output formats, including `.csv` and `.xlsx` files. Doing this work in Jupyter notebooks has the further advantage of allowing me to document my process in Markdown, which makes the work more shareable and reproducible.
This workshop presents an optimized workflow for batch operations using Python and the Alma API's.
#### Outline
1. API housekeeping (YAML)
2. Reading an Analytics report in Python (`pandas`)
3. A brief primer on asynchronous programming
4. Making API requests asynchronously (`aiohttp`)
5. Processing the results (`pandas`)
### Setup with `pandas` and YAML
#### YAML files for configuration
YAML is a "human-readable data-serialization language" ([Wikipedia](https://en.wikipedia.org/wiki/YAML)) that works much like JSON but without all the extra punctuation. Like Python, it uses whitespace/indentation to create nested blocks (instead of JSON's curly braces), and it doesn't require quotation marks around strings except in certain situations.
I've stored my API key, the Alma API endpoint I'll be using, and the path to a CSV file containing identifiers in a file called `workshop-config.yml`. The following allows me to read these config values into a Python dictionary.
```
# If you haven't already, run !pip install pyaml
import yaml
with open('./workshop-config.yml', 'r') as f:
config = yaml.load(f, Loader=yaml.FullLoader)
config
```
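For reference, the parsed `config` is just a plain Python dictionary. With a hypothetical `workshop-config.yml` -- the keys below are the ones used later in this notebook, but the values are placeholders, not real credentials or URLs -- it would look roughly like this:
```
# Hypothetical example of what yaml.load returns -- placeholder values only
config_example = {'api_key': 'l7xx-EXAMPLE-KEY',
                  'csv_path': './migration_errors.csv',
                  'test_url': 'https://example.org/delay/1',
                  'get_item_url': 'https://<alma-api-host>/almaws/v1/bibs/{mms_id}/holdings/{holding_id}/items/{item_id}'}
```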
For now, I'm leaving this as a global variable. We'll use it throughout the workflow that follows.
#### Reading Analytics data with `pandas`
The report I'm using contains items that received an error status during our migration to Alma. The report has about 20,000 rows. We'll work with a subset of them for this example.
In what follows, I'll walk through some of the features of the `pandas` library that make it useful for cleaning and re-shaping data.
```
import pandas as pd
```
First, we read our report into `pandas`. The report has been saved locally as a CSV.
The `.read_csv` method returns a `DataFrame` object.
```
migration_errors = pd.read_csv(config['csv_path'])
```
The column names are the standard Analytics column names, in title case with whitespace. These will be easier to work with if we make them valid Python identifiers. We can do this by way of a list comprehension applied to the `.columns` attribute on our `DataFrame`.
```
migration_errors.columns = [column.lower().replace(' ', '_') for column in migration_errors.columns]
```
We have two columns named `suppressed_from_discovery`. The first is for the holdings record, the second for the bib. We should probably rename them to avoid confusion.
The following `for` loop uses built-in Python list functions to create new headers for the `suppressed_from_discovery` columns.
```
new_names = ['suppressed_holdings', 'suppressed_bibs']
new_columns = []
for column in migration_errors.columns:
if column == 'suppressed_from_discovery':
new_columns.append(new_names.pop(0))
else:
new_columns.append(column)
migration_errors.columns = new_columns
```
Now let's exclude those suppressed items (holdings or bibs) from our set. We can do this using the `.loc` functionality of a `DataFrame`, which can accept a Boolean expression, returning only those rows for which the Boolean expression evaluates to `True`.
If you evaluate either of the expressions in parentheses below on its own, you'll notice that it returns a data structure called a `Series`, which in this case has the same number of rows as the original `migration_errors` `DataFrame`. But each `Series` holds only `True` and `False` values. Passing those expressions to `migration_errors.loc` (which is not a function, but an indexer, hence the square brackets) gives us the result we want.
The pipe symbol `|` is used here in place of the keyword `or` because we are comparing two Boolean values for _every row of the `DataFrame`_, rather than comparing two objects (the Boolean `Series` themselves).
```
migration_errors = migration_errors.loc[(migration_errors.suppressed_holdings == 0) | (migration_errors.suppressed_bibs == 'No')]
```
We can use the same process to exclude items in temporary locations.
(Here, the value `None` coming from Analytics has been read by Python as a string, not as the Python null type `None`.)
```
migration_errors = migration_errors.loc[migration_errors.temporary_location_name == 'None']
```
Finally, what if we want to limit our set by call number, only to items in the N's?
`pandas` supports efficient indexing by string conditions, using the `.str` attribute of a column that consists of Python strings.
```
art_books = migration_errors.loc[migration_errors.permanent_call_number.str.startswith('N')]
```
We could, of course, achieve these same results in Analytics by using filters. But the ability to do so in Python often gives me greater flexibility, since I can quickly try different approaches without re-running the query. In addition, `pandas` has far more tools for data cleanup and analysis than the restricted SQL set of Analytics. Finally, I can use `pandas` to merge Analytics data sets with data from other sources (including other Analytics subject areas).
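For instance, a merge between an Analytics export and a legacy export is a one-liner (a sketch only -- the file name and the `barcode` join key are hypothetical):
```
# Hypothetical merge: align Analytics rows with legacy-system rows on a shared key
voyager_items = pd.read_csv('voyager_items.csv')
merged = migration_errors.merge(voyager_items, on='barcode', how='left')
```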
We could easily devote a whole workshop to exploring `pandas`. But for now, we'll see how to use our filtered report to create concurrent requests to the Alma API.
### Asynchronous programming: a brief primer
The following section attempts to illustrate some of the principles of asynchronous programming via more atomic Python constructs, namely, iterators, generators, and coroutines. At the end of this notebook, I've provided additional resources on this topic. Depending on your background in writing code, it may not be an easy topic to grasp; I certainly struggled with it for several years before it began to crystallize (before I could really understand why the code I wrote sometimes behaved the way it did). What helped was seeing the connection between the higher-level asynchronous syntax I'll introduce below, and the more foundational parts of the language we'll turn to now.
Most Python code we write executes in a **synchronous** fashion. But what does _that_ mean?
- It's not necessarily about things happening _at the same time_.
- Rather, it's about things happening _in sync_ with one another. As in synchronized swimming.
```
for i in range(5):
print(i)
```
By definition, a `for` loop in Python executes the statements in the block sequentially. Another way to put it is that the outcome of the loop is deterministic: the variable `i` will take on the values from `range(5)` always in this order. If it somehow printed the `3` before the `2`, we would conclude that something had gone terribly wrong.
In fact, strictly speaking, the Python interpreter _never_ does more than one thing at the same time. For complicated reasons, and as conventional wisdom has it, Python isn't very good at parallel processing. There is the `multiprocessing` library, but it effectively spawns multiple copies of the Python interpreter, which comes with a certain amount of overhead.
Libraries like `pandas`, which support fast computation, tend to delegate CPU-intensive operations to lower-level code (usually written in C). But there are other things we use Python for that don't consume a lot of CPU cycles, but which can still slow down our code. Chief among these are operations involving what's called _blocking I/O_.
#### Do Python scripts dream of electric sheep?
Let's say we need to request data from the same webserver five times in a row. If our requests **block**, then between sending the request and receiving the response, nothing else can happen on our end. It's as though whenever you sent an email, you had to wait until the recipient responded before sending another. That would certainly make your inbox easier to organize, but on the other hand, you might not be able to accomplish much.
Using Python's `requests` library, we can see this behavior in action. (Do `!pip install requests` first if you get a `ModuleNotFound` error when running the `import` statement.)
The URL we're using for this test causes a delay of at least one second before the server returns a response.
```
import requests
from datetime import datetime
def sync_fn(i):
resp = requests.get(config['test_url'])
print(f'Loop {i}; URL status: {resp.status_code}; Timestamp: {datetime.now().time()}')
for i in range(5):
sync_fn(i)
```
The timestamps should be roughly one second apart. Most of that time was spent by Python idling for a response. Which is not such a big deal if we're making 5 requests, or even 500, but what about 5,000 or 50,000?
The alternative is to write code that doesn't block on certain kinds of I/O. How do we accomplish this, if the Python interpreter only does one thing at a time?
It might help to think about multitasking. Human beings do many things at the same time, like walking and breathing and digesting food. But when we talk about multitasking, we're usually talking about something short of true simultaneity. Handling email correspondence is a good example: you might be engaged in multiple threads of conversation throughout the day. You might be writing a couple of emails while watching a webinar. But it would be quite the neurological feat if three separate parts of your brain were each working on a different task, completely in parallel. More likely, the same parts of your brain are quickly switching back and forth between the tasks. Multitasking is an exercise in effective sequencing. Consider the case where you compose an email, send it, compose another, send it, send a third, receive a response to the first, reply, compose a fourth email while waiting for replies to the other two, and so on.
Such commonplace multitasking is analogous to **non-blocking** I/O. Different languages support non-blocking I/O (if they support it at all) in different ways. In Python 3, the `asyncio` library provides this support.
Before we turn to `asyncio`, let's take a closer look at the building blocks of Python's support for asynchronous programming.
#### Iterators, generators, and coroutines, oh my!
An **iterator** is a special Python object that permits iteration. It does so by producing a sequence of values _on demand_. Some built-in Python functions are iterators. `enumerate`, for example, which accepts a sequence of values and returns, for each element in the sequence, a pair of values: the original element and its index. Normally, we use `enumerate` in a `for` loop, but we can expose its iterator nature by using the `next` function.
```
enum = enumerate(['a', 'b', 'c'])
```
Used outside of a loop context, `enumerate` doesn't actually enumerate anything. But running `next(enum)` repeatedly will produce values from the sequence until it is exhausted.
```
next(enum)
```
The `StopIteration` exception will be caught by a Python `for` loop, so normally we don't see it. But it represents the iterator's signal that it has no more values to emit.
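To see that signal directly, we can keep calling `next` ourselves and catch the exception (a small sketch):
```
# Exhausting an iterator by hand and catching its StopIteration signal
enum = enumerate(['a', 'b', 'c'])
while True:
    try:
        print(next(enum))
    except StopIteration:
        print('No more values to emit')
        break
```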
An easy way to create your own iterator in Python is to write a function that uses the `yield` keyword. Such functions are called **generators**. When the Python interpreter encounters a generator, it turns that function into an iterator object.
We could write our own version of `enumerate` as follows:
```
def my_enumerate(seq):
i = 0
while seq:
yield i, seq.pop(0)
i += 1
enum2 = my_enumerate(['a', 'b', 'c'])
next(enum2)
```
`my_enumerate` isn't as useful as the built-in `enumerate`; for one, our version works only on Python lists. But the relevant point for our discussion of asynchronous programming is this. A regular Python function (one without the `yield` keyword) is "one-and-done," so to speak: it is called from a particular context, it executes its code, and then it returns the control flow to the calling context. A regular Python function demands the interpreter's undivided attention. Or from the user's perspective, calling a regular Python function is a bit like ordering food for curbside pickup: you submit your order, you wait, and then you receive your food.
A generator, on the other hand, when called with `next` or used in a `for` loop, behaves more like a server in a sit-down restaurant, bringing your meal one dish at a time. But unlike dine-in service, generators can actually be more efficient than regular functions in many contexts. One reason for this is that generators -- or more precisely, the iterators that they become -- do not need to allocate a set amount of memory in advance. For instance, we can use them to parse a file line by line without reading the whole file into memory first. Or we can create generators capable of producing infinite sequences:
```
def to_infinity():
i = 0
while True:
yield i
i += 1
gen = to_infinity()
for _ in range(10):
print(next(gen))
```
Our `to_infinity` function will never raise a `StopIteration` exception. If run for enough iterations, it will consume all the available memory.
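To connect this back to the file-parsing example mentioned above, here's a sketch (the file name is hypothetical) of a generator that processes a file without loading it all into memory:
```
# A generator that yields one line at a time, keeping memory usage flat
def read_lines(path):
    with open(path) as f:
        for line in f:
            yield line.rstrip('\n')

# Usage sketch: the file is read lazily, one line per iteration
# for line in read_lines('some_big_report.csv'):
#     do_something(line)
```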
Thus far we haven't seen any asynchronous behavior, but in the guise of generators, we have met Python functions that can start and stop on demand. We can also write generators that can communicate with their calling context. If you're dining in a restaurant, you generally don't have to order all your food at once. You can order an appetizer while you decide on your entree. The following generator, which is technically called a **coroutine**, can be used like a calculator to perform running sums:
```
def co_sum():
total = 0
while True:
n = yield total
if not n:
return total
total += n
```
Notice that the `yield` keyword appears on the right-hand side of an equals sign. The line `n = yield total` instructs the function to provide the value of `total` to the calling context, and then to "wait" until it receives a new value from that context. The calling context can provide such a value with the `.send` method (built into every generator by default). I put _wait_ in quotation marks because the key thing about a coroutine is that **while it waits for new input, it doesn't actually block the Python interpreter from doing other tasks.** It just sits there until its `.send` method is called, at which point it resumes execution where it left off, pausing again at **the next `yield` statement**.
The only quirk is that we have to "prime" the coroutine by calling `.send(None)` before we can use it.
```
calc = co_sum()
calc.send(None)
calc.send(1)
calc.send(10)
calc.send(14)
```
Our `co_sum` coroutine will keep adding until it encounters an error (_e.g._, it runs out of memory, or we send it a non-numeric value). We can make it quit gracefully by sending `None`, which returns the current value of `total` inside a `StopIteration` exception.
```
calc.send(None)
```
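If we want to capture that returned total rather than read it out of a traceback, we can catch the exception ourselves (a small sketch):
```
# The value a coroutine returns rides along on the StopIteration exception
calc2 = co_sum()
calc2.send(None)      # prime the coroutine
calc2.send(1)
calc2.send(10)
try:
    calc2.send(None)  # ask it to finish
except StopIteration as e:
    print(e.value)    # 11
```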
#### Coroutines in action: `async`, `await`, and `asyncio`
Since version 3.5, Python has provided higher-level abstractions for using coroutines with non-blocking I/O. Again, the use cases here are those where a Python program is waiting on input from an external process, such as the response from a webserver. These abstractions consist of two new keywords -- `async` and `await` -- and a module in the standard library, `asyncio`.
`asyncio` provides an implementation of an **event loop**. There are a variety of patterns for using the event loop, but in one of the most straightforward patterns, which we'll employ below, the event loop manages a collection of coroutines, each of which has one task that involves non-blocking I/O. To modify our previous analogy, imagine a server in a busy restaurant. The server is the event loop; their tables are the coroutines. Because diners spend far more time (as a general rule) eating than they do ordering food, the server can manage many tables at once. All they need to do is keep checking with each table to see if they want to order something else (if the coroutine has a value to `yield`).
Using `asyncio`, we don't have to write the event loop ourselves. All we have to do is supply it with a collection of coroutines to manage.
Our coroutines we define by using the `async` and `await` keywords.
Unfortunately, we can't just stick `await` in front of every Python function to make it asynchronous. Even Python functions that work with I/O -- like the `requests` library we used above -- cannot be used asynchronously if they were not designed to be. But there are a growing number of Python libraries for asynchronous I/O. Here we'll use the library called `aiohttp` for making asynchronous HTTP requests.
You may need to install `aiohttp` first: `!pip install aiohttp`.
Then we import both `aiohttp` and `asyncio`.
```
import aiohttp
import asyncio
```
To define an asynchronous coroutine, use `async def` in place of `def`. And such a coroutine must include at least one `await` statement.
Here our coroutine makes a simple HTTP request. The code is a bit more complex because `aiohttp` uses _context managers_ to handle opening and closing HTTP connections. The `async with` statement is an asynchronous version of the regular Python `with` statement that creates an instance of a context manager.
Our coroutine also accepts a `client` argument, which will be an object created by the `aiohttp.ClientSession` context manager. This allows us to re-use the same connection for multiple requests.
Note that the `yield` keyword does not appear inside our async coroutine. (In Python 3.6+, it's possible to `yield` from an async coroutine, but it's not required.) Here `await` does the work of pausing the coroutine until it receives a value from "outside," as it were. An important difference between `await` and `yield` is that **we** are **not** sending the value (as we did above with `calc.send()`). Rather, the value is coming from the special asynchronous `text()` method on our `aiohttp` object.
You can only `await` other async coroutines (and some other specialized Python objects called Futures and Tasks).
The `async` keyword in front of the `def` and the `with` keywords is important. Without them, the code will either throw an exception or not work as intended.
```
async def async_fn(i, client):
async with client.get(config['test_url']) as session:
resp = await session.text()
print(f'Loop {i}; URL status: {session.status}; Timestamp: {datetime.now().time()}')
```
Typically, we initialize our collection of coroutines inside another asynchronous function (coroutine). We'll call this one `main`.
1. `main` invokes the `aiohttp.ClientSession` context manager, creating an instance of a client that we can re-use across all our requests.
2. Then it creates a collection of `async_fn` coroutines, initializing each with a new value between 0 and 4 and with the client created above.
3. Next we pass this collection to `asyncio.gather` and `await` it. `gather` will execute the coroutines concurrently, ensuring that their results (if any) are accumulated and arranged according to the order in which they were submitted.
The `await` keyword before `asyncio.gather` is important. Our `main` coroutine, like our `async_fn` coroutines, will run _inside_ the event loop. This seems counterintuitive, since `main` manages other coroutines. But it's actually more of a helper; there is a different function that kicks off the event loop itself, which we'll see below.
```
async def main():
async with aiohttp.ClientSession() as client:
        awaitables = [async_fn(i, client) for i in range(5)] # Here async_fn(i, client) doesn't execute -- it only initializes the coroutine
await asyncio.gather(*awaitables) # The asterisk before the variable unpacks the list -- gather() expects one or more coroutines but not a list of them
```
In Python 3.7+, we would write -- from some **non** async function or from the main part of our script -- the following to call the `main` coroutine:
`asyncio.run(main())`
- This **blocking** command populates the event loop with the coroutine `main`.
- The `main` coroutine then adds each initialized instance of our `async_fn` coroutine (via the `awaitables` variable) to the event loop.
- **Each instance of `async_fn` makes an HTTP request then cedes control back to the event loop.**
- **The event loop checks each coroutine to see if it has received a response.**
- Once **all** of the `async_fn` coroutines have finished -- either by returning or raising an exception -- the `main` coroutine will return.
- At this point, execution will resume after the call to `asyncio.run`.
In a Jupyter Notebook, however, we are already inside an event loop. So we can't call `asyncio.run` without getting an error. Fortunately, we can just write `await main()` to get the same behavior.
```
await main()
```
Comparing this output with our synchronous loop above, notice the following:
- The timestamps should be within a few milliseconds of each other, even though each HTTP server still took at least 1 second to respond. The requests were made concurrently, so the **total** time to send the requests and receive the responses should be approximately 1 second.
- `asyncio.gather` guarantees that it will **return results** from coroutines in the order that they were passed to it. In this case, however, the output on screen is from a `print` statement inside each coroutine, showing that the coroutines do not necessarily **complete** in the same order (see the sketch after this list).
- That's the key to asynchronous programming: it eases the requirement that every operation occur in the sequence stipulated by the programmer.
- And in exchange for some loss of synchronicity, we get significant gains in performance.
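The following toy sketch (not part of the workshop code) makes that distinction concrete: the coroutines finish in a random order, but `gather` still returns their results in the order they were submitted.
```
import asyncio
import random

async def worker(i):
    await asyncio.sleep(random.random())   # each worker finishes at a random time
    print(f'worker {i} finished')
    return i

async def demo():
    results = await asyncio.gather(*(worker(i) for i in range(5)))
    print(results)   # always [0, 1, 2, 3, 4], regardless of completion order

await demo()   # in a script, use asyncio.run(demo()) instead
```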
### Using the Alma API's asynchronously
The preceding tour of iterators, generators, and coroutines was intended to provide some conceptual grounding for a grasp of how Python's `async` coroutines work. To summarize, the key points are as follows:
1. **Coroutines** are Python functions that can suspend their operation while waiting for more input. While they are paused, the Python interpreter can do other work.
2. `async` coroutines, which typically handle input from I/O processes, are managed by the `asyncio` **event loop**. The event loop is responsible for resuming the coroutines based on the availability of their input from external processes (like an HTTP response).
3. The event loop allows us to achieve **concurrency** in our I/O requests. This is not the same as true parallelism, but more like highly efficient multitasking. Since the processes involved typically don't require much, if any, of the Python interpreter's resources -- and may not even involve many system resources -- asynchronous programming in Python is essentially a way to **occupy the idle time** that the interpreter would otherwise spend waiting for responses from elsewhere.
The **main challenge** in writing asynchronous Python is adapting to a different way of programming, one in which the path from input to output is less straightforward (and in some sense, less predictable).
The Alma API's impose a **rate limit** of 25 requests per second. With synchronous approaches, that's typically not a problem, because the API generally doesn't respond quickly enough for us to be able to make 25 requests _in sequence_ in under a second. But _concurrent_ requests are different. We can pass 100, 1,000, or 100,000 `async` coroutines to the event loop, and if each coroutine makes one request, the event loop will issue those requests immediately, the only constraint being how fast the hardware on our end can handle it. As a result, we can easily exceed the rate limit if we don't **throttle** the requests somehow.
We'll use a tiny Python library called `asyncio-throttle` to do that, which you can install by running
```
!pip install asyncio-throttle
```
in your notebook.
```
from asyncio_throttle import Throttler
```
#### Writing an `async` coroutine to make API requests
The following function will make a single GET request.
1. The function accepts the following arguments:
- An instance of the `aiohttp.ClientSession` class. This allows us to reuse the same connection across requests.
- An instance of the `Throttler` class from `asyncio_throttle`.
- A url string. The URL in this case will be formatted for retrieving a specific item from the Bibs API.
- A Python dictionary called `headers`. This will contain the header information required by the API.
2. The function will return either
- a valid response from the API, which we expect to be in JSON format,
- or an error.
- Both types of return values will be Python dictionaries.
3. Error handling with asynchronous programming can be challenging.
- If a coroutine raises an exception, the `asyncio` event loop will allow the rest to continue execution. That's helpful, because in a case like ours, we wouldn't want one API error -- which might be caused by a bad identifier -- to cause the whole batch to fail.
- However, it's important to keep track of which API calls **did** fail and why. In our function, we use `try...except` blocks to catch exceptions, and we package errors in Python dictionaries, including, where possible, the API response, which may include a useful message.
4. Note the nested `async with` statements:
- `async with throttler` applies the throttler to our request. This essentially keeps our coroutine in a queue until it's time to be executed (at a rate of no more than 25 per second).
- `async with client.get(...) as session`: This context manager creates a single request session, closing it out when we exit the block. This ensures that we can reuse the same client between requests effectively.
5. There are two `await` statements here.
- The first, which is executed in the case of an HTTP error, gets the HTTP response body as a string and assigns it to the `resp` variable.
- The second, executed in the case of a successful HTTP request, parses the HTTP response body as JSON.
6. Our function uses the `aiohttp.ClientSession.get` method, but with a couple of tweaks we could make this function handle POST requests instead (see the sketch after the code block below).
- Replace the above method call with one to `aiohttp.ClientSession.post`.
- Accept as an argument some data to be POST-ed, and include this in the method call as a `data` keyword argument.
```
async def get_item(client, throttler, url, headers):
async with throttler:
try:
resp = None
async with client.get(url, headers=headers) as session:
if session.status != 200:
resp = await session.text()
session.raise_for_status()
resp = await session.json()
return {'url': url, 'response': resp}
except Exception as e:
return {'error': e, 'message': resp}
```
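As noted in point 6 above, a POST variant is mostly a matter of swapping the method call. A sketch (not part of the workshop code; the `payload` argument is an assumption) might look like this:
```
async def post_item(client, throttler, url, headers, payload):
    # Same pattern as get_item, but using client.post with a request body
    async with throttler:
        try:
            resp = None
            async with client.post(url, headers=headers, data=payload) as session:
                if session.status != 200:
                    resp = await session.text()
                session.raise_for_status()
                resp = await session.json()
            return {'url': url, 'response': resp}
        except Exception as e:
            return {'error': e, 'message': resp}
```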
#### Creating the API URL's
We used `pandas` to load the identifiers for our API requests from a CSV file. Let's look at how we can extract the data we need from our `DataFrame`.
Our API endpoint looks like this:
```
almaws/v1/bibs/{mms_id}/holdings/{holding_id}/items/{item_id}
```
So we will need the MMS, Holdings, and Item ID numbers.
- A `DataFrame` has an `.itertuples()` method that is an iterator; it produces a Python [named tuple](https://docs.python.org/3/library/collections.html#collections.namedtuple) from each row of the DataFrame.
- Provided the DataFrame's column labels are valid Python identifiers (no spaces or special characters, must start with a letter of the alphabet), we can convert the tuple to a Python dictionary by calling its `_asdict()` method.
- Finally, we can use Python's `str.format()` [method](https://docs.python.org/3/library/stdtypes.html#str.format) to substitute the placeholders in the URL with the appropriate values from each row. `str.format` accepts optional keyword arguments and uses them to fill any matching placeholder keys (the parts of the string between curly braces).
- By passing to `str.format` our row-dictionary with the double-asterisk prefix -- `url.format(**row_dict)` -- we can unpack it into keyword arguments. **Provided our column names for MMS, Holdings, and Item ID match the keys in the URL string**, `str.format` will substitute those keys for the values we need.
- `str.format` will ignore any keyword arguments that don't match the string, so it doesn't matter that our row-dictionary contains more columns than there are keys in the string.
```
def format_urls(url_str, df):
for row in df.itertuples(index=False):
yield url_str.format(**row._asdict())
[url for url in format_urls(config['get_item_url'], art_books.iloc[:50])]
```
We also need a header that contains our API key and one that instructs the API to return JSON.
```
headers = {'Authorization': f'apikey {config["api_key"]}',
'Accept': 'application/json'}
```
#### Putting it all together
Finally, we make an `async` coroutine to create our concurrent requests.
This coroutine accepts the following arguments:
- A `DataFrame` (called `data`) where each row should contain a set of identifiers we want to pass to the Bibs API.
- A Python dictionary containing the API headers (`headers`).
- A complete URL for an API endpoint, suitable for formatting with the identifiers in our dataset.
It does the following:
- Creates an instance of the `asyncio_throttle.Throttler` class with the specified rate limit.
- Creates an `aiohttp.ClientSession` instance via context manager.
- Initializes a list of `get_item` coroutines with the formatted URLs.
- Accumulates the results from those coroutines via `asyncio.gather`.
- Returns the results.
This is the coroutine we will pass to `asyncio.run` in order to kick off the event loop.
```
async def make_requests(data, headers, base_url):
throttler = Throttler(rate_limit=25)
async with aiohttp.ClientSession() as client:
awaitables = [get_item(client=client,
throttler=throttler,
headers=headers,
url=url) for url in format_urls(base_url, data)]
results = await asyncio.gather(*awaitables)
return results
```
And we can launch our concurrent requests as follows. If running this outside of a Jupyter notebook (and using Python 3.7+), you will need to write:
```
results = asyncio.run(make_requests(art_books, headers, config['get_item_url']))
```
The syntax for Python 3.5 and 3.6 is a little different. It should be
```
loop = asyncio.get_event_loop()
results = loop.run_until_complete(make_requests(art_books, headers, config['get_item_url']))
```
```
results = await make_requests(art_books, headers, config['get_item_url'])
```
In the notebook itself, we just `await` the coroutine directly (as in the cell above). `results` should be a list of objects returned by the API, along with any errors, and its length should equal that of our original dataset.
```
assert len(results) == len(art_books)
```
We can identify errors by looking for any objects within results that have the `error` key.
```
[r['message'] for r in results if 'error' in r]
```
Alternatively, we may want to mark the rows in our original dataset that we have successfully processed.
First, we create a list of unique Item identifiers in our results set (filtering out any errors).
```
items = [r['response']['item_data']['pid'] for r in results if 'error' not in r]
```
Then we can use `pandas` functionality to add a column with values based on a Boolean condition: whether the value in the `item_id` column is in our `items` list.
**Note** that our `item_id` column was imported as an integer value, but `items` is a list of strings. So we need to do an explicit type cast in order for the test to work. `DataFrame['item_id'].astype(str)` converts the values in that column to strings.
Then we can use the `.isin` method to check for membership in a list. This will return a Boolean `Series` of `True`/`False` values aligned with the original column.
```
art_books['item_id'] = art_books['item_id'].astype(str)
art_books['completed'] = art_books['item_id'].isin(items)
```
If that code produces a `SettingWithCopyWarning`, it's safe to ignore it in this case. We can check to make sure that our flag works by comparing the subset of values in the `completed` column that are `False` with the list of error messages we received.
```
len(art_books.loc[art_books.completed == False]) == len([r['message'] for r in results if 'error' in r])
```
And then we can save our flagged dataset back to the disk as CSV.
```
art_books.to_csv('art_books_completed.csv', index=False)
```
```
%load_ext nb_black
%load_ext autoreload
%autoreload 2
from os.path import join
import re
from os import makedirs
import numpy as np
from tqdm.auto import tqdm
import pandas as pd
from scipy.stats import pearsonr
from IPython.display import display
import seaborn as sns
from time import time
rng_seed = 399
np.random.seed(rng_seed)
import persim
import joblib
from matplotlib import pyplot as plt
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import (
mean_squared_error,
f1_score,
confusion_matrix,
roc_auc_score,
)
from sklearn.linear_model import Lasso, LassoCV, LogisticRegressionCV
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from umap import UMAP
# Directory constants
root_code_dir = ".."
output_dir = join(root_code_dir, "output")
word2vec_training_dir = join(output_dir, "word2vec_training")
word2vec_ann_indices_dir = join(output_dir, "word2vec_ann_indices")
word2vec_cluster_analysis_dir = join(output_dir, "word2vec_cluster_analysis")
output_plots_dir = join("output_plots")
makedirs(output_plots_dir, exist_ok=True)
# Extend sys path for importing custom Python files
import sys
sys.path.append(root_code_dir)
from topological_data_analysis.topological_polysemy import tps
from word_embeddings.word2vec import load_model_training_output
from analysis_of_word_embeddings.estimate_num_meanings_supervised import (
create_classification_labels,
evaluate_regression_model,
evaluate_classification_model,
create_feature_importance_df,
visualize_feature_importances,
)
from analysis_utils import word_group_visualization
from vis_utils import configure_plotting_for_thesis
configure_plotting_for_thesis()
```
## Prepare data
```
def format_feature_name_human_readable(feature_name: str) -> str:
"""
Formats feature names to make them human readable (e.g. for thesis).
Parameters
----------
feature_name : str
Feature name to make human readable.
Returns
-------
human_readable_feature_name : str
Human readable feature name.
"""
alg_names = ["tps", "gad", "estimated_id"]
human_readable_regexes = [
r"X_tps_(\d+)(_pd_(?:max|avg|std)|)",
r"X_gad_knn_(\d+)_(\d+)_(P_man|P_bnd|P_int)",
r"X_estimated_id_(.+)_(\d+)",
]
for alg_name, human_readable_re in zip(alg_names, human_readable_regexes):
re_match = re.match(human_readable_re, feature_name)
if re_match is None:
continue
re_groups = re_match.groups()
if alg_name == "tps":
tps_n = re_groups[0]
            if not re_groups[1]:
                return fr"TPS_{tps_n}"
else:
tps_pd_type = re_groups[1]
return fr"TPS{tps_pd_type}_{tps_n}"
elif alg_name == "gad":
inner_annulus_knn, outer_annulus_knn, P_cat = re_groups
P_cat_human = {
"P_man": "manifold",
"P_bnd": "boundary",
"P_int": "singular",
}
return fr"GAD_{P_cat_human[P_cat]}_{inner_annulus_knn}_{outer_annulus_knn}"
elif alg_name == "estimated_id":
id_estimator_name, num_neighbours = re_groups
id_estimator_human = {
"lpca": "LPCA",
"knn": "KNN",
"twonn": "TWO-NN",
"mle": "MLE",
"tle": "TLE",
}
return fr"ID_{id_estimator_human[id_estimator_name]}_{num_neighbours}"
word_meaning_train_data = pd.read_csv("data/word_meaning_train_data.csv")
word_meaning_test_data = pd.read_csv("data/word_meaning_test_data.csv")
word_meaning_semeval_test_data = pd.read_csv("data/word_meaning_semeval_test_data.csv")
word_meaning_data_cols = word_meaning_train_data.columns.values
word_meaning_data_feature_cols = np.array(
[col for col in word_meaning_data_cols if col.startswith("X_")]
)
word_meaning_data_feature_cols_human_readable = np.array(
[format_feature_name_human_readable(col) for col in word_meaning_data_feature_cols]
)
print("Train")
word_meaning_train_data
plt.hist(word_meaning_train_data["y"], bins=word_meaning_train_data["y"].max())
plt.xlabel("Label y")
plt.ylabel("Count")
plt.show()
print("Test")
word_meaning_test_data
plt.hist(word_meaning_test_data["y"], bins=word_meaning_test_data["y"].max())
plt.xlabel("Label y")
plt.ylabel("Count")
plt.show()
# Split into X and y
data_scaler = StandardScaler()
data_scaler.fit(word_meaning_train_data[word_meaning_data_feature_cols].values)
X_train = data_scaler.transform(
word_meaning_train_data[word_meaning_data_feature_cols].values
)
X_test = data_scaler.transform(
word_meaning_test_data[word_meaning_data_feature_cols].values
)
X_test_semeval = data_scaler.transform(
word_meaning_semeval_test_data[word_meaning_data_feature_cols].values
)
y_train = word_meaning_train_data["y"].values
y_test = word_meaning_test_data["y"].values
y_test_semeval = word_meaning_semeval_test_data["y"].values
# Create multi-class labels
max_y_multi = np.quantile(y_train, q=0.9)
y_train_binary_classes = create_classification_labels(labels=y_train, max_label=1)
y_train_multi_class = create_classification_labels(
labels=y_train, max_label=max_y_multi
)
y_test_binary_classes = create_classification_labels(labels=y_test, max_label=1)
y_test_multi_class = create_classification_labels(labels=y_test, max_label=max_y_multi)
y_test_semeval_binary_classes = create_classification_labels(
labels=y_test_semeval, max_label=1
)
y_test_semeval_multi_class = create_classification_labels(
labels=y_test_semeval, max_label=max_y_multi
)
labels_str = [
str(label + 1) if i < 4 else "gt_or_eq_5"
for i, label in enumerate(np.unique(y_train_multi_class))
]
# Load output from training word2vec
w2v_training_output = load_model_training_output(
model_training_output_dir=join(
word2vec_training_dir, "word2vec_enwiki_jan_2021_word2phrase"
),
model_name="word2vec",
dataset_name="enwiki",
return_normalized_embeddings=True,
)
last_embedding_weights_normalized = w2v_training_output[
"last_embedding_weights_normalized"
]
words = w2v_training_output["words"]
word_to_int = w2v_training_output["word_to_int"]
word_counts = w2v_training_output["word_counts"]
# Load SemEval-2010 task 14 words
semeval_2010_14_word_senses = joblib.load(
join(
"..", "topological_data_analysis", "data", "semeval_2010_14_word_senses.joblib"
)
)
semeval_target_words = np.array(list(semeval_2010_14_word_senses["all"].keys()))
semeval_target_words_in_vocab_filter = [
i for i, word in enumerate(semeval_target_words) if word in word_to_int
]
semeval_target_words_in_vocab = semeval_target_words[
semeval_target_words_in_vocab_filter
]
semeval_gs_clusters = np.array(list(semeval_2010_14_word_senses["all"].values()))
semeval_gs_clusters_in_vocab = semeval_gs_clusters[semeval_target_words_in_vocab_filter]
num_semeval_words = len(semeval_target_words_in_vocab)
```
## Evaluate modeling results
```
# Constants
estimate_num_meanings_supervised_dir = join("data", "estimate_num_meanings_supervised")
```
### LASSO / Logistic regression
#### LASSO
```
# Load results
lasso_reg = joblib.load(join(estimate_num_meanings_supervised_dir, "lasso_reg.joblib"))
print(f"Selected alpha: {lasso_reg.alpha_:.16f}")
# LASSO regression
evaluate_regression_model(
model=lasso_reg,
test_sets=[
(
X_train,
y_train,
"Train",
"Predicted number of word meanings",
"Synsets in WordNet",
),
(
X_test,
y_test,
"Test",
"Predicted number of word meanings",
"Synsets in WordNet",
),
(
X_test_semeval,
y_test_semeval,
"SemEval test",
"Predicted number of word meanings",
"SemEval gold standard",
),
],
show_plot=False,
use_rasterization=True,
)
# Plot/save
save_to_pgf = True
plt.tight_layout()
if save_to_pgf:
plt.savefig(
join(
output_plots_dir,
"wme-enwiki-correlation-result.pdf",
),
backend="pgf",
bbox_inches="tight",
)
else:
plt.show()
# Visualize top 10 feature importances
_, ax = plt.subplots(figsize=(10, 5))
# Sort coefficient by absolute value
lasso_reg_coef_abs_sorted_indces = np.argsort(abs(lasso_reg.coef_))[::-1]
top_n_importances = 10
top_n_importances_indices = lasso_reg_coef_abs_sorted_indces[:top_n_importances]
# Plot horizontal barplot
y_pos = np.arange(top_n_importances)
ax.barh(y=y_pos, width=lasso_reg.coef_[top_n_importances_indices], color="b")
ax.set_yticks(y_pos)
ax.set_yticklabels(
word_meaning_data_feature_cols_human_readable[top_n_importances_indices]
)
ax.invert_yaxis()
ax.set_xlabel("Feature importance")
# Plot/save
save_to_pgf = True
plt.tight_layout()
if save_to_pgf:
plt.savefig(
join(
output_plots_dir,
"wme-enwiki-top-10-feature-importances.pdf",
),
backend="pgf",
bbox_inches="tight",
)
else:
plt.show()
# Visualize top 10 feature importances
_, axes = plt.subplots(ncols=3, figsize=(13, 5))
ax_chars = "abc"
top_n_importances = 10
feature_alg_names = ["TPS", "GAD", "ID estimator"]
feature_alg_names_start = ["X_tps", "X_gad", "X_estimated_id"]
for ax, ax_char, alg_name, alg_names_start in zip(
axes, ax_chars, feature_alg_names, feature_alg_names_start
):
# Filter algorithm columns
alg_filter = [
i
for i, feature_col in enumerate(word_meaning_data_feature_cols)
if feature_col.startswith(alg_names_start)
]
alg_coeffs = lasso_reg.coef_[alg_filter]
# Sort coefficient by absolute value
lasso_reg_coef_abs_sorted_indces = np.argsort(abs(alg_coeffs))[::-1]
top_n_importances_indices = lasso_reg_coef_abs_sorted_indces[:top_n_importances]
# Plot horizontal barplot
y_pos = np.arange(top_n_importances)
ax.barh(y=y_pos, width=alg_coeffs[top_n_importances_indices], color="b")
ax.set_yticks(y_pos)
ax.set_yticklabels(
word_meaning_data_feature_cols_human_readable[alg_filter][
top_n_importances_indices
]
)
ax.invert_yaxis()
ax.set_xlabel("Feature importance")
ax.set_title(f"({ax_char}) {alg_name} features")
# Plot/save
save_to_pgf = True
plt.tight_layout()
if save_to_pgf:
plt.savefig(
join(
output_plots_dir,
"wme-enwiki-top-10-feature-importances-tps-gad-estimated-ids.pdf",
),
backend="pgf",
bbox_inches="tight",
)
else:
plt.show()
visualize_feature_importances(
feature_importances=create_feature_importance_df(
feature_names=word_meaning_data_feature_cols,
feature_importances=np.abs(lasso_reg.coef_),
)
)
print(f"Number of zero features: {sum(lasso_reg.coef_ == 0)}")
```
#### Logistic regression with L1 penalty
```
# Load results
binary_logistic_reg = joblib.load(
join(estimate_num_meanings_supervised_dir, "binary_logistic_reg.joblib")
)
print(f"Selected alpha: {(1 / binary_logistic_reg.C_[0]):.16f}")
# Binary classification
evaluate_classification_model(
model=binary_logistic_reg,
test_sets=[
(
X_train,
y_train_binary_classes,
"Train",
"Predicted number of word meanings",
"Synsets in WordNet",
),
(
X_test,
y_test_binary_classes,
"Test",
"Predicted number of word meanings",
"Synsets in WordNet",
),
],
cm_ticklabels=["1 word meaning", ">1 word meanings"],
show_plot=False,
)
# Plot/save
save_to_pgf = True
plt.tight_layout()
if save_to_pgf:
plt.savefig(
join(
output_plots_dir,
"bwme-enwiki-confusion-matrices.pdf",
),
backend="pgf",
bbox_inches="tight",
)
else:
plt.show()
def get_classified_test_words_binary(
target_true_binary_class: int,
target_pred_binary_class: int,
true_binary_classes: np.ndarray,
X_test: np.ndarray,
log_reg_model: np.ndarray,
X_test_words: np.ndarray,
) -> np.ndarray:
"""
Gets (in)correctly classified test words from binary classification model.
Parameters
----------
target_true_binary_class : int
True binary class to look for.
target_pred_binary_class : int
Predicted binary class to look for.
true_binary_classes : np.ndarray
True binary classes.
X_test : np.ndarray
Test data for prediction.
log_reg_model : LogisticRegression
Logistic regression model used to predict binary classes.
X_test_words : np.ndarray
Words associated with the X_test data.
Returns
-------
test_word_indices : np.ndarray
Indices of test words corresponding with the parameters.
test_words : np.ndarray
Test words corresponding with the parameters.
"""
test_indices = np.where(true_binary_classes == target_true_binary_class)[0]
test_pred_values = log_reg_model.predict(X_test[test_indices])
test_word_indices = []
test_words = []
for idx, pred_val in zip(test_indices, test_pred_values):
word = X_test_words[idx]
if pred_val == target_pred_binary_class:
test_word_indices.append(idx)
test_words.append(word)
test_word_indices = np.array(test_word_indices)
test_words = np.array(test_words)
return test_word_indices, test_words
# Report examples of TN, FP, FN and TP classified words (monosemous/polysemous) from the BWME-enwiki model
(
classified_tn_test_word_indices,
classified_tn_test_words,
) = get_classified_test_words_binary(
target_true_binary_class=0,
target_pred_binary_class=0,
true_binary_classes=y_test_binary_classes,
X_test=X_test,
log_reg_model=binary_logistic_reg,
X_test_words=word_meaning_test_data["word"].values,
)
(
misclassified_fp_test_word_indices,
misclassified_fp_test_words,
) = get_classified_test_words_binary(
target_true_binary_class=0,
target_pred_binary_class=1,
true_binary_classes=y_test_binary_classes,
X_test=X_test,
log_reg_model=binary_logistic_reg,
X_test_words=word_meaning_test_data["word"].values,
)
(
misclassified_fn_test_word_indices,
misclassified_fn_test_words,
) = get_classified_test_words_binary(
target_true_binary_class=1,
target_pred_binary_class=0,
true_binary_classes=y_test_binary_classes,
X_test=X_test,
log_reg_model=binary_logistic_reg,
X_test_words=word_meaning_test_data["word"].values,
)
(
classified_tp_test_word_indices,
classified_tp_test_words,
) = get_classified_test_words_binary(
target_true_binary_class=1,
target_pred_binary_class=1,
true_binary_classes=y_test_binary_classes,
X_test=X_test,
log_reg_model=binary_logistic_reg,
X_test_words=word_meaning_test_data["word"].values,
)
# Sort correctly classified TN monosemous test words
classified_tn_test_words_sorted = classified_tn_test_words[
np.argsort([word_to_int[word] for word in classified_tn_test_words])
]
print("Correctly classified TN monosemous words from BWME-enwiki model:")
for i in range(10):
print(f"- {classified_tn_test_words_sorted[i]}")
# Sort misclassified FP monosemous test words
classified_fp_test_words_sorted = misclassified_fp_test_words[
np.argsort([word_to_int[word] for word in misclassified_fp_test_words])
]
print("Misclassified FP monosemous words from BWME-enwiki model:")
for i in range(10):
print(f"- {classified_fp_test_words_sorted[i]}")
# Sort misclassified FN polysemous test words
classified_fn_test_words_sorted = misclassified_fn_test_words[
np.argsort([word_to_int[word] for word in misclassified_fn_test_words])
]
print("Misclassified FN polysemous words from BWME-enwiki model:")
for i in range(10):
print(f"- {classified_fn_test_words_sorted[i]}")
# Sort correctly classified TP polysemous test words
classified_tp_test_words_sorted = classified_tp_test_words[
np.argsort([word_to_int[word] for word in classified_tp_test_words])
]
print("Correctly classified TP polysemous words from BWME-enwiki model:")
for i in range(10):
print(f"- {classified_tp_test_words_sorted[i]}")
# Create UMAP embedding of test data words
word_meaning_test_data_word_indices = np.array(
[word_to_int[test_word] for test_word in word_meaning_test_data["word"].values]
)
word_meaning_test_data_word_umap_embedding = UMAP(
n_components=2,
random_state=rng_seed,
).fit_transform(
last_embedding_weights_normalized[word_meaning_test_data_word_indices],
)
_, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2, figsize=(6 * 2, 6 * 2))
# TN
word_group_visualization(
transformed_word_embeddings=word_meaning_test_data_word_umap_embedding,
words=word_meaning_test_data["word"].values,
word_groups={
"classified_tn_words": {
"words": classified_tn_test_words,
"color": "g",
"label": "Correctly classified monosemous words",
},
},
emphasis_words=[
("norway", -40, 0),
("scientists", 20, -80),
("sarah", 40, 0),
("architect", -20, 0),
("commonly", 40, -80),
],
xlabel="UMAP 1",
ylabel="UMAP 2",
alpha=1,
ax=ax1,
scatter_set_rasterized=True,
show_plot=False,
)
ax1.set_title("(a) Correctly classified monosemous words (TN)")
ax1.legend()
# FP
word_group_visualization(
transformed_word_embeddings=word_meaning_test_data_word_umap_embedding,
words=word_meaning_test_data["word"].values,
word_groups={
"classified_fp_words": {
"words": misclassified_fp_test_words,
"color": "r",
"label": "Misclassified monosemous words",
},
},
emphasis_words=[
("january", -60, -60),
("ninety-six", 80, -40),
("sixty-three", 60, -70),
("citizens", 40, 0),
("additionally", 40, -100),
],
xlabel="UMAP 1",
ylabel="UMAP 2",
alpha=1,
ax=ax2,
scatter_set_rasterized=True,
show_plot=False,
)
ax2.set_title("(b) Misclassified monosemous words (FP)")
ax2.legend()
# FN
word_group_visualization(
transformed_word_embeddings=word_meaning_test_data_word_umap_embedding,
words=word_meaning_test_data["word"].values,
word_groups={
"classified_fn_words": {
"words": misclassified_fn_test_words,
"color": "r",
"label": "Misclassified polysemous words",
},
},
emphasis_words=[
("time", 40, -80),
("age", 40, 0),
("returned", -40, -80),
("italian", 0, 10),
("chicago", -60, 0),
],
xlabel="UMAP 1",
ylabel="UMAP 2",
alpha=1,
ax=ax3,
scatter_set_rasterized=True,
show_plot=False,
)
ax3.set_title("(c) Misclassified polysemous words (FN)")
ax3.legend()
# FP
word_group_visualization(
transformed_word_embeddings=word_meaning_test_data_word_umap_embedding,
words=word_meaning_test_data["word"].values,
word_groups={
"classified_tp_words": {
"words": classified_tp_test_words,
"color": "g",
"label": "Correctly classified polysemous words",
},
},
emphasis_words=[
("eight", 60, -70),
("under", -60, -60),
("well", 20, 0),
("film", 40, 0),
("game", -20, 0),
],
xlabel="UMAP 1",
ylabel="UMAP 2",
alpha=1,
ax=ax4,
scatter_set_rasterized=True,
show_plot=False,
)
ax4.set_title("(d) Correctly classified polysemous words (TP)")
ax4.legend(loc="lower right")
# Plot/save
save_to_pgf = True
plt.tight_layout()
if save_to_pgf:
plt.savefig(
join(
output_plots_dir,
"bwme-enwiki-umap-classified-words.pdf",
),
backend="pgf",
bbox_inches="tight",
)
else:
plt.show()
# Visualize top 10 feature importances
_, ax = plt.subplots(figsize=(10, 5))
# Sort coefficient by absolute value
binary_log_reg_coef_abs_sorted_indces = np.argsort(abs(binary_logistic_reg.coef_[0]))[
::-1
]
top_n_importances = 10
top_n_importances_indices = binary_log_reg_coef_abs_sorted_indces[:top_n_importances]
# Plot horizontal barplot
y_pos = np.arange(top_n_importances)
ax.barh(
y=y_pos, width=binary_logistic_reg.coef_[0][top_n_importances_indices], color="b"
)
ax.set_yticks(y_pos)
ax.set_yticklabels(
word_meaning_data_feature_cols_human_readable[top_n_importances_indices]
)
ax.invert_yaxis()
ax.set_xlabel("Feature importance")
# Plot/save
save_to_pgf = True
plt.tight_layout()
if save_to_pgf:
plt.savefig(
join(
output_plots_dir,
"bwme-enwiki-top-10-feature-importances.pdf",
),
backend="pgf",
bbox_inches="tight",
)
else:
plt.show()
# Visualize top 10 feature importances
_, axes = plt.subplots(ncols=3, figsize=(13, 5))
ax_chars = "abc"
top_n_importances = 10
feature_alg_names = ["TPS", "GAD", "ID estimator"]
feature_alg_names_start = ["X_tps", "X_gad", "X_estimated_id"]
for ax, ax_char, alg_name, alg_names_start in zip(
axes, ax_chars, feature_alg_names, feature_alg_names_start
):
# Filter algorithm columns
alg_filter = [
i
for i, feature_col in enumerate(word_meaning_data_feature_cols)
if feature_col.startswith(alg_names_start)
]
alg_coeffs = binary_logistic_reg.coef_[0][alg_filter]
# Sort coefficient by absolute value
binary_log_reg_coef_abs_sorted_indces = np.argsort(abs(alg_coeffs))[::-1]
top_n_importances_indices = binary_log_reg_coef_abs_sorted_indces[
:top_n_importances
]
# Plot horizontal barplot
y_pos = np.arange(top_n_importances)
ax.barh(y=y_pos, width=alg_coeffs[top_n_importances_indices], color="b")
ax.set_yticks(y_pos)
ax.set_yticklabels(
word_meaning_data_feature_cols_human_readable[alg_filter][
top_n_importances_indices
]
)
ax.invert_yaxis()
ax.set_xlabel("Feature importance")
ax.set_title(f"({ax_char}) {alg_name} features")
# Plot/save
save_to_pgf = True
plt.tight_layout()
if save_to_pgf:
plt.savefig(
join(
output_plots_dir,
"bwme-enwiki-top-10-feature-importances-tps-gad-estimated-ids.pdf",
),
backend="pgf",
bbox_inches="tight",
)
else:
plt.show()
visualize_feature_importances(
feature_importances=create_feature_importance_df(
feature_names=word_meaning_data_feature_cols,
feature_importances=np.abs(binary_logistic_reg.coef_[0]),
)
)
print(f"Number of zero features: {sum(binary_logistic_reg.coef_[0] == 0)}")
```
|
github_jupyter
|
%load_ext nb_black
%load_ext autoreload
%autoreload 2
from os.path import join
import re
from os import makedirs
import numpy as np
from tqdm.auto import tqdm
import pandas as pd
from scipy.stats import pearsonr
from IPython.display import display
import seaborn as sns
from time import time
rng_seed = 399
np.random.seed(rng_seed)
import persim
import joblib
from matplotlib import pyplot as plt
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import (
mean_squared_error,
f1_score,
confusion_matrix,
roc_auc_score,
)
from sklearn.linear_model import Lasso, LassoCV, LogisticRegressionCV
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from umap import UMAP
# Directory constants
root_code_dir = ".."
output_dir = join(root_code_dir, "output")
word2vec_training_dir = join(output_dir, "word2vec_training")
word2vec_ann_indices_dir = join(output_dir, "word2vec_ann_indices")
word2vec_cluster_analysis_dir = join(output_dir, "word2vec_cluster_analysis")
output_plots_dir = join("output_plots")
makedirs(output_plots_dir, exist_ok=True)
# Extend sys path for importing custom Python files
import sys
sys.path.append(root_code_dir)
from topological_data_analysis.topological_polysemy import tps
from word_embeddings.word2vec import load_model_training_output
from analysis_of_word_embeddings.estimate_num_meanings_supervised import (
create_classification_labels,
evaluate_regression_model,
evaluate_classification_model,
create_feature_importance_df,
visualize_feature_importances,
)
from analysis_utils import word_group_visualization
from vis_utils import configure_plotting_for_thesis
configure_plotting_for_thesis()
def format_feature_name_human_readable(feature_name: str) -> str:
"""
Formats feature names to make them human readable (e.g. for thesis).
Parameters
----------
feature_name : str
Feature name to make human readable.
Returns
-------
human_readable_feature_name : str
Human readable feature name.
"""
alg_names = ["tps", "gad", "estimated_id"]
human_readable_regexes = [
r"X_tps_(\d+)(_pd_(?:max|avg|std)|)",
r"X_gad_knn_(\d+)_(\d+)_(P_man|P_bnd|P_int)",
r"X_estimated_id_(.+)_(\d+)",
]
for alg_name, human_readable_re in zip(alg_names, human_readable_regexes):
re_match = re.match(human_readable_re, feature_name)
if re_match is None:
continue
re_groups = re_match.groups()
if alg_name == "tps":
tps_n = re_groups[0]
if re_groups[1] is None:
return fr"TPS$_{tps_n}"
else:
tps_pd_type = re_groups[1]
return fr"TPS{tps_pd_type}_{tps_n}"
elif alg_name == "gad":
inner_annulus_knn, outer_annulus_knn, P_cat = re_groups
P_cat_human = {
"P_man": "manifold",
"P_bnd": "boundary",
"P_int": "singular",
}
return fr"GAD_{P_cat_human[P_cat]}_{inner_annulus_knn}_{outer_annulus_knn}"
elif alg_name == "estimated_id":
id_estimator_name, num_neighbours = re_groups
id_estimator_human = {
"lpca": "LPCA",
"knn": "KNN",
"twonn": "TWO-NN",
"mle": "MLE",
"tle": "TLE",
}
return fr"ID_{id_estimator_human[id_estimator_name]}_{num_neighbours}"
word_meaning_train_data = pd.read_csv("data/word_meaning_train_data.csv")
word_meaning_test_data = pd.read_csv("data/word_meaning_test_data.csv")
word_meaning_semeval_test_data = pd.read_csv("data/word_meaning_semeval_test_data.csv")
word_meaning_data_cols = word_meaning_train_data.columns.values
word_meaning_data_feature_cols = np.array(
[col for col in word_meaning_data_cols if col.startswith("X_")]
)
word_meaning_data_feature_cols_human_readable = np.array(
[format_feature_name_human_readable(col) for col in word_meaning_data_feature_cols]
)
print("Train")
word_meaning_train_data
plt.hist(word_meaning_train_data["y"], bins=word_meaning_train_data["y"].max())
plt.xlabel("Label y")
plt.ylabel("Count")
plt.show()
print("Test")
word_meaning_test_data
plt.hist(word_meaning_test_data["y"], bins=word_meaning_test_data["y"].max())
plt.xlabel("Label y")
plt.ylabel("Count")
plt.show()
# Split into X and y
data_scaler = StandardScaler()
data_scaler.fit(word_meaning_train_data[word_meaning_data_feature_cols].values)
X_train = data_scaler.transform(
word_meaning_train_data[word_meaning_data_feature_cols].values
)
X_test = data_scaler.transform(
word_meaning_test_data[word_meaning_data_feature_cols].values
)
X_test_semeval = data_scaler.transform(
word_meaning_semeval_test_data[word_meaning_data_feature_cols].values
)
y_train = word_meaning_train_data["y"].values
y_test = word_meaning_test_data["y"].values
y_test_semeval = word_meaning_semeval_test_data["y"].values
# Create multi-class labels
max_y_multi = np.quantile(y_train, q=0.9)
y_train_binary_classes = create_classification_labels(labels=y_train, max_label=1)
y_train_multi_class = create_classification_labels(
labels=y_train, max_label=max_y_multi
)
y_test_binary_classes = create_classification_labels(labels=y_test, max_label=1)
y_test_multi_class = create_classification_labels(labels=y_test, max_label=max_y_multi)
y_test_semeval_binary_classes = create_classification_labels(
labels=y_test_semeval, max_label=1
)
y_test_semeval_multi_class = create_classification_labels(
labels=y_test_semeval, max_label=max_y_multi
)
labels_str = [
str(label + 1) if i < 4 else "gt_or_eq_5"
for i, label in enumerate(np.unique(y_train_multi_class))
]
# Load output from training word2vec
w2v_training_output = load_model_training_output(
model_training_output_dir=join(
word2vec_training_dir, "word2vec_enwiki_jan_2021_word2phrase"
),
model_name="word2vec",
dataset_name="enwiki",
return_normalized_embeddings=True,
)
last_embedding_weights_normalized = w2v_training_output[
"last_embedding_weights_normalized"
]
words = w2v_training_output["words"]
word_to_int = w2v_training_output["word_to_int"]
word_counts = w2v_training_output["word_counts"]
# Load SemEval-2010 task 14 words
semeval_2010_14_word_senses = joblib.load(
join(
"..", "topological_data_analysis", "data", "semeval_2010_14_word_senses.joblib"
)
)
semeval_target_words = np.array(list(semeval_2010_14_word_senses["all"].keys()))
semeval_target_words_in_vocab_filter = [
i for i, word in enumerate(semeval_target_words) if word in word_to_int
]
semeval_target_words_in_vocab = semeval_target_words[
semeval_target_words_in_vocab_filter
]
semeval_gs_clusters = np.array(list(semeval_2010_14_word_senses["all"].values()))
semeval_gs_clusters_in_vocab = semeval_gs_clusters[semeval_target_words_in_vocab_filter]
num_semeval_words = len(semeval_target_words_in_vocab)
# Constants
estimate_num_meanings_supervised_dir = join("data", "estimate_num_meanings_supervised")
# Load results
lasso_reg = joblib.load(join(estimate_num_meanings_supervised_dir, "lasso_reg.joblib"))
print(f"Selected alpha: {lasso_reg.alpha_:.16f}")
# LASSO regression
evaluate_regression_model(
model=lasso_reg,
test_sets=[
(
X_train,
y_train,
"Train",
"Predicted number of word meanings",
"Synsets in WordNet",
),
(
X_test,
y_test,
"Test",
"Predicted number of word meanings",
"Synsets in WordNet",
),
(
X_test_semeval,
y_test_semeval,
"SemEval test",
"Predicted number of word meanings",
"SemEval gold standard",
),
],
show_plot=False,
use_rasterization=True,
)
# Plot/save
save_to_pgf = True
plt.tight_layout()
if save_to_pgf:
plt.savefig(
join(
output_plots_dir,
"wme-enwiki-correlation-result.pdf",
),
backend="pgf",
bbox_inches="tight",
)
else:
plt.show()
# Visualize top 10 feature importances
_, ax = plt.subplots(figsize=(10, 5))
# Sort coefficient by absolute value
lasso_reg_coef_abs_sorted_indces = np.argsort(abs(lasso_reg.coef_))[::-1]
top_n_importances = 10
top_n_importances_indices = lasso_reg_coef_abs_sorted_indces[:top_n_importances]
# Plot horizontal barplot
y_pos = np.arange(top_n_importances)
ax.barh(y=y_pos, width=lasso_reg.coef_[top_n_importances_indices], color="b")
ax.set_yticks(y_pos)
ax.set_yticklabels(
word_meaning_data_feature_cols_human_readable[top_n_importances_indices]
)
ax.invert_yaxis()
ax.set_xlabel("Feature importance")
# Plot/save
save_to_pgf = True
plt.tight_layout()
if save_to_pgf:
plt.savefig(
join(
output_plots_dir,
"wme-enwiki-top-10-feature-importances.pdf",
),
backend="pgf",
bbox_inches="tight",
)
else:
plt.show()
# Visualize top 10 feature importances
_, axes = plt.subplots(ncols=3, figsize=(13, 5))
ax_chars = "abc"
top_n_importances = 10
feature_alg_names = ["TPS", "GAD", "ID estimator"]
feature_alg_names_start = ["X_tps", "X_gad", "X_estimated_id"]
for ax, ax_char, alg_name, alg_names_start in zip(
axes, ax_chars, feature_alg_names, feature_alg_names_start
):
# Filter algorithm columns
alg_filter = [
i
for i, feature_col in enumerate(word_meaning_data_feature_cols)
if feature_col.startswith(alg_names_start)
]
alg_coeffs = lasso_reg.coef_[alg_filter]
# Sort coefficient by absolute value
lasso_reg_coef_abs_sorted_indces = np.argsort(abs(alg_coeffs))[::-1]
top_n_importances_indices = lasso_reg_coef_abs_sorted_indces[:top_n_importances]
# Plot horizontal barplot
y_pos = np.arange(top_n_importances)
ax.barh(y=y_pos, width=alg_coeffs[top_n_importances_indices], color="b")
ax.set_yticks(y_pos)
ax.set_yticklabels(
word_meaning_data_feature_cols_human_readable[alg_filter][
top_n_importances_indices
]
)
ax.invert_yaxis()
ax.set_xlabel("Feature importance")
ax.set_title(f"({ax_char}) {alg_name} features")
# Plot/save
save_to_pgf = True
plt.tight_layout()
if save_to_pgf:
plt.savefig(
join(
output_plots_dir,
"wme-enwiki-top-10-feature-importances-tps-gad-estimated-ids.pdf",
),
backend="pgf",
bbox_inches="tight",
)
else:
plt.show()
visualize_feature_importances(
feature_importances=create_feature_importance_df(
feature_names=word_meaning_data_feature_cols,
feature_importances=np.abs(lasso_reg.coef_),
)
)
print(f"Number of zero features: {sum(lasso_reg.coef_ == 0)}")
# Load results
binary_logistic_reg = joblib.load(
join(estimate_num_meanings_supervised_dir, "binary_logistic_reg.joblib")
)
print(f"Selected alpha: {(1 / binary_logistic_reg.C_[0]):.16f}")
# Binary classification
evaluate_classification_model(
model=binary_logistic_reg,
test_sets=[
(
X_train,
y_train_binary_classes,
"Train",
"Predicted number of word meanings",
"Synsets in WordNet",
),
(
X_test,
y_test_binary_classes,
"Test",
"Predicted number of word meanings",
"Synsets in WordNet",
),
],
cm_ticklabels=["1 word meaning", ">1 word meanings"],
show_plot=False,
)
# Plot/save
save_to_pgf = True
plt.tight_layout()
if save_to_pgf:
plt.savefig(
join(
output_plots_dir,
"bwme-enwiki-confusion-matrices.pdf",
),
backend="pgf",
bbox_inches="tight",
)
else:
plt.show()
def get_classified_test_words_binary(
target_true_binary_class: int,
target_pred_binary_class: int,
true_binary_classes: np.ndarray,
X_test: np.ndarray,
    log_reg_model: LogisticRegression,
    X_test_words: np.ndarray,
) -> tuple:
"""
Gets (in)correctly classified test words from binary classification model.
Parameters
----------
target_true_binary_class : int
True binary class to look for.
target_pred_binary_class : int
Predicted binary class to look for.
true_binary_classes : np.ndarray
True binary classes.
X_test : np.ndarray
Test data for prediction.
log_reg_model : LogisticRegression
Logistic regression model used to predict binary classes.
X_test_words : np.ndarray
Words associated with the X_test data.
Returns
-------
test_word_indices : np.ndarray
Indices of test words corresponding with the parameters.
test_words : np.ndarray
Test words corresponding with the parameters.
"""
test_indices = np.where(true_binary_classes == target_true_binary_class)[0]
test_pred_values = log_reg_model.predict(X_test[test_indices])
test_word_indices = []
test_words = []
for idx, pred_val in zip(test_indices, test_pred_values):
word = X_test_words[idx]
if pred_val == target_pred_binary_class:
test_word_indices.append(idx)
test_words.append(word)
test_word_indices = np.array(test_word_indices)
test_words = np.array(test_words)
return test_word_indices, test_words
# Report examples of correctly and incorrectly classified (TN, FP, FN, TP) test words from the BWME-enwiki model
(
classified_tn_test_word_indices,
classified_tn_test_words,
) = get_classified_test_words_binary(
target_true_binary_class=0,
target_pred_binary_class=0,
true_binary_classes=y_test_binary_classes,
X_test=X_test,
log_reg_model=binary_logistic_reg,
X_test_words=word_meaning_test_data["word"].values,
)
(
misclassified_fp_test_word_indices,
misclassified_fp_test_words,
) = get_classified_test_words_binary(
target_true_binary_class=0,
target_pred_binary_class=1,
true_binary_classes=y_test_binary_classes,
X_test=X_test,
log_reg_model=binary_logistic_reg,
X_test_words=word_meaning_test_data["word"].values,
)
(
misclassified_fn_test_word_indices,
misclassified_fn_test_words,
) = get_classified_test_words_binary(
target_true_binary_class=1,
target_pred_binary_class=0,
true_binary_classes=y_test_binary_classes,
X_test=X_test,
log_reg_model=binary_logistic_reg,
X_test_words=word_meaning_test_data["word"].values,
)
(
classified_tp_test_word_indices,
classified_tp_test_words,
) = get_classified_test_words_binary(
target_true_binary_class=1,
target_pred_binary_class=1,
true_binary_classes=y_test_binary_classes,
X_test=X_test,
log_reg_model=binary_logistic_reg,
X_test_words=word_meaning_test_data["word"].values,
)
# Sort correctly classified TN monosemous test words
classified_tn_test_words_sorted = classified_tn_test_words[
np.argsort([word_to_int[word] for word in classified_tn_test_words])
]
print("Correctly classified TN monosemous words from BWME-enwiki model:")
for i in range(10):
print(f"- {classified_tn_test_words_sorted[i]}")
# Sort misclassified FP monosemous test words
classified_fp_test_words_sorted = misclassified_fp_test_words[
np.argsort([word_to_int[word] for word in misclassified_fp_test_words])
]
print("Misclassified FP monosemous words from BWME-enwiki model:")
for i in range(10):
print(f"- {classified_fp_test_words_sorted[i]}")
# Sort misclassified FN polysemous test words
classified_fn_test_words_sorted = misclassified_fn_test_words[
np.argsort([word_to_int[word] for word in misclassified_fn_test_words])
]
print("Misclassified FN polysemous words from BWME-enwiki model:")
for i in range(10):
print(f"- {classified_fn_test_words_sorted[i]}")
# Sort correctly classified TP polysemous test words
classified_tp_test_words_sorted = classified_tp_test_words[
np.argsort([word_to_int[word] for word in classified_tp_test_words])
]
print("Correctly classified TP polysemous words from BWME-enwiki model:")
for i in range(10):
print(f"- {classified_tp_test_words_sorted[i]}")
# Create UMAP embedding of test data words
word_meaning_test_data_word_indices = np.array(
[word_to_int[test_word] for test_word in word_meaning_test_data["word"].values]
)
word_meaning_test_data_word_umap_embedding = UMAP(
n_components=2,
random_state=rng_seed,
).fit_transform(
last_embedding_weights_normalized[word_meaning_test_data_word_indices],
)
_, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2, figsize=(6 * 2, 6 * 2))
# TN
word_group_visualization(
transformed_word_embeddings=word_meaning_test_data_word_umap_embedding,
words=word_meaning_test_data["word"].values,
word_groups={
"classified_tn_words": {
"words": classified_tn_test_words,
"color": "g",
"label": "Correctly classified monosemous words",
},
},
emphasis_words=[
("norway", -40, 0),
("scientists", 20, -80),
("sarah", 40, 0),
("architect", -20, 0),
("commonly", 40, -80),
],
xlabel="UMAP 1",
ylabel="UMAP 2",
alpha=1,
ax=ax1,
scatter_set_rasterized=True,
show_plot=False,
)
ax1.set_title("(a) Correctly classified monosemous words (TN)")
ax1.legend()
# FP
word_group_visualization(
transformed_word_embeddings=word_meaning_test_data_word_umap_embedding,
words=word_meaning_test_data["word"].values,
word_groups={
"classified_fp_words": {
"words": misclassified_fp_test_words,
"color": "r",
"label": "Misclassified monosemous words",
},
},
emphasis_words=[
("january", -60, -60),
("ninety-six", 80, -40),
("sixty-three", 60, -70),
("citizens", 40, 0),
("additionally", 40, -100),
],
xlabel="UMAP 1",
ylabel="UMAP 2",
alpha=1,
ax=ax2,
scatter_set_rasterized=True,
show_plot=False,
)
ax2.set_title("(b) Misclassified monosemous words (FP)")
ax2.legend()
# FN
word_group_visualization(
transformed_word_embeddings=word_meaning_test_data_word_umap_embedding,
words=word_meaning_test_data["word"].values,
word_groups={
"classified_fn_words": {
"words": misclassified_fn_test_words,
"color": "r",
"label": "Misclassified polysemous words",
},
},
emphasis_words=[
("time", 40, -80),
("age", 40, 0),
("returned", -40, -80),
("italian", 0, 10),
("chicago", -60, 0),
],
xlabel="UMAP 1",
ylabel="UMAP 2",
alpha=1,
ax=ax3,
scatter_set_rasterized=True,
show_plot=False,
)
ax3.set_title("(c) Misclassified polysemous words (FN)")
ax3.legend()
# FP
word_group_visualization(
transformed_word_embeddings=word_meaning_test_data_word_umap_embedding,
words=word_meaning_test_data["word"].values,
word_groups={
"classified_tp_words": {
"words": classified_tp_test_words,
"color": "g",
"label": "Correctly classified polysemous words",
},
},
emphasis_words=[
("eight", 60, -70),
("under", -60, -60),
("well", 20, 0),
("film", 40, 0),
("game", -20, 0),
],
xlabel="UMAP 1",
ylabel="UMAP 2",
alpha=1,
ax=ax4,
scatter_set_rasterized=True,
show_plot=False,
)
ax4.set_title("(d) Correctly classified polysemous words (TP)")
ax4.legend(loc="lower right")
# Plot/save
save_to_pgf = True
plt.tight_layout()
if save_to_pgf:
plt.savefig(
join(
output_plots_dir,
"bwme-enwiki-umap-classified-words.pdf",
),
backend="pgf",
bbox_inches="tight",
)
else:
plt.show()
# Visualize top 10 feature importances
_, ax = plt.subplots(figsize=(10, 5))
# Sort coefficient by absolute value
binary_log_reg_coef_abs_sorted_indces = np.argsort(abs(binary_logistic_reg.coef_[0]))[
::-1
]
top_n_importances = 10
top_n_importances_indices = binary_log_reg_coef_abs_sorted_indces[:top_n_importances]
# Plot horizontal barplot
y_pos = np.arange(top_n_importances)
ax.barh(
y=y_pos, width=binary_logistic_reg.coef_[0][top_n_importances_indices], color="b"
)
ax.set_yticks(y_pos)
ax.set_yticklabels(
word_meaning_data_feature_cols_human_readable[top_n_importances_indices]
)
ax.invert_yaxis()
ax.set_xlabel("Feature importance")
# Plot/save
save_to_pgf = True
plt.tight_layout()
if save_to_pgf:
plt.savefig(
join(
output_plots_dir,
"bwme-enwiki-top-10-feature-importances.pdf",
),
backend="pgf",
bbox_inches="tight",
)
else:
plt.show()
# Visualize top 10 feature importances
_, axes = plt.subplots(ncols=3, figsize=(13, 5))
ax_chars = "abc"
top_n_importances = 10
feature_alg_names = ["TPS", "GAD", "ID estimator"]
feature_alg_names_start = ["X_tps", "X_gad", "X_estimated_id"]
for ax, ax_char, alg_name, alg_names_start in zip(
axes, ax_chars, feature_alg_names, feature_alg_names_start
):
# Filter algorithm columns
alg_filter = [
i
for i, feature_col in enumerate(word_meaning_data_feature_cols)
if feature_col.startswith(alg_names_start)
]
alg_coeffs = binary_logistic_reg.coef_[0][alg_filter]
# Sort coefficient by absolute value
binary_log_reg_coef_abs_sorted_indces = np.argsort(abs(alg_coeffs))[::-1]
top_n_importances_indices = binary_log_reg_coef_abs_sorted_indces[
:top_n_importances
]
# Plot horizontal barplot
y_pos = np.arange(top_n_importances)
ax.barh(y=y_pos, width=alg_coeffs[top_n_importances_indices], color="b")
ax.set_yticks(y_pos)
ax.set_yticklabels(
word_meaning_data_feature_cols_human_readable[alg_filter][
top_n_importances_indices
]
)
ax.invert_yaxis()
ax.set_xlabel("Feature importance")
ax.set_title(f"({ax_char}) {alg_name} features")
# Plot/save
save_to_pgf = True
plt.tight_layout()
if save_to_pgf:
plt.savefig(
join(
output_plots_dir,
"bwme-enwiki-top-10-feature-importances-tps-gad-estimated-ids.pdf",
),
backend="pgf",
bbox_inches="tight",
)
else:
plt.show()
visualize_feature_importances(
feature_importances=create_feature_importance_df(
feature_names=word_meaning_data_feature_cols,
feature_importances=np.abs(binary_logistic_reg.coef_[0]),
)
)
print(f"Number of zero features: {sum(binary_logistic_reg.coef_[0] == 0)}")
| 0.536313 | 0.435661 |
# Order matching
Orders should be placed at market price with a time in force of "Immediate or Cancel". The exchange simulator will match them against orders on the opposite side and fill as much as possible.
For example: if the bid/ask is 99\$/101\$, a buy order will be executed at 101\$ and a sell order will be executed at 99\$.
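As a rough sketch (not the simulator's actual code; the function and variable names here are made up), an IOC market order can be modelled as filling against the best opposite quote and cancelling whatever cannot be filled:
```
def match_ioc_market_order(side, quantity, best_bid, best_bid_size, best_ask, best_ask_size):
    """Fill an IOC market order against the top of the book; the remainder is cancelled."""
    if side == "buy":
        filled = min(quantity, best_ask_size)  # a buy lifts the ask
        return filled, best_ask
    filled = min(quantity, best_bid_size)      # a sell hits the bid
    return filled, best_bid

# With a 99/101 market: a buy fills at 101, a sell fills at 99.
print(match_ioc_market_order("buy", 1, 99, 10, 101, 10))   # (1, 101)
print(match_ioc_market_order("sell", 1, 99, 10, 101, 10))  # (1, 99)
```
A real matching engine would also walk deeper price levels to "fill as much as possible"; this sketch only looks at the top of the book.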
# Commission
- HSI Futures & Options: HK$ 15.00 per contract per side
- HHI Futures & Options: HK$ 8.00 per contract per side
- MHI Futures: HK$ 5.00 per contract per side
- MHI Options: HK$ 3.00 per contract per side
- MCH Futures: HK$ 3.00 per contract per side
Commission is paid on both purchases and sales, i.e. when a position is opened and again when it is closed.
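As a quick sketch (the dictionary keys and the helper name are our own, not part of the competition API), the commission for a full round trip can be estimated like this:
```
# Per-contract, per-side commission schedule from the table above.
COMMISSION_PER_SIDE = {
    "HSI futures/options": 15.00,
    "HHI futures/options": 8.00,
    "MHI futures": 5.00,
    "MHI options": 3.00,
    "MCH futures": 3.00,
}

def round_trip_commission(product, contracts):
    """Commission paid to open and later close `contracts` contracts."""
    return 2 * COMMISSION_PER_SIDE[product] * contracts

print(round_trip_commission("HSI futures/options", 1))  # 30.0 HKD in total
```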
# Capital
The sample portfolio in the tutorial starts with a capital of 1 000 000\$. At the end of each day, the system calculates the realized Profit/Loss (see the section on P&L) and adjusts the capital accordingly. For instance, if your strategy earns 10 000\$ on the first day, it will start the second day with 1 010 000\$ in capital.
The requirement on the initial capital is flexible. Participating teams can set their own initial capital in the backtesting settings. In the final round, each team needs to suggest a minimum initial capital for their own strategy. The backtesting performed by CASH Algo will use the suggested value as the initial capital.
Profit and loss is assumed to be linearly proportional to the capital. For example, if the initial capital is doubled, the absolute profit or absolute loss will also be doubled, leaving the rate of return unchanged. Essentially, we assume that there is no market impact, for simplicity in this competition.
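A toy example of this accounting, with made-up numbers:
```
initial_capital = 1_000_000
realized_pnl_day_1 = 10_000                          # made-up daily realized P&L
capital_day_2 = initial_capital + realized_pnl_day_1
print(capital_day_2)                                 # 1010000
print(realized_pnl_day_1 / initial_capital)          # 0.01 -> 1% rate of return
# Doubling the capital doubles the absolute P&L but leaves the return unchanged:
print(2 * realized_pnl_day_1 / (2 * initial_capital))  # still 0.01
```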
# Margin Requirement
A “Client Margining Methodology” called SPAN will be adopted. SPAN is a risk-based, portfolio-level approach for calculating the daily margin requirement, developed by the Chicago Mercantile Exchange (CME). It assesses the overall risk of a complete portfolio containing futures and options positions, and the margin requirement is computed to cover that risk.
For more details, please visit the following website:
https://www.hkex.com.hk/eng/market/rm/rm_dcrm/rm_dcrm_clearing/futrsksys2.htm
Participants should be aware of the overall position they have taken. If the margin cannot be maintained during the backtesting, the team may be disqualified. As a result, the initial capital should be set carefully so that the margin can be maintained throughout the backtesting without making the rate of return too small.
Note that the position value takes the price multiplier into consideration. HSI futures have a multiplier of 50: for every change of 1 in the price, the P&L moves by 50.
# P&L
We calculate P&L by matching purchases and sales in the order they were issued. For example, let's say you bought 1 contract at 19998 and 2 contracts at 20001, then sold 2 contracts at 20010.
The first 2 trades open a position of 3 contracts, which you partially close by selling 2 contracts. You have an outstanding position of 1 contract at the end.
To calculate the profit/loss, purchases and sales are paired together. We match them using their average price.
The first 3 contracts have an average price of (19998 + 2*20001)/3 = 20000:
- Pair A: Buy at 20000, Sell at 20010 = +10
- Pair B: Buy at 20000, Sell at 20010 = +10
Your gross P/L is (taking the price multiplier into consideration but not the commissions)
```
(10+10)*50
```
The P/L that corresponds to a closed position is called 'realized' and will be added to your capital. Your open position has a P/L calculated by evaluating its value at current market prices. In our example, we have one open contract with an average price of 20000. If the market price is 20015, the 'unrealized' P/L is
```
(20015-20000)*50
```
Realized and unrealized P/L are added together to form the portfolio P/L but only the realized P/L counts towards the capital.
This accounting method is called 'average price'. There are other possible ways to pair purchases and sales, and they lead to differences in realized/unrealized P/L. However, once the position is fully closed, every method gives the same cumulative realized P/L.
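Putting the example above into a short Python sketch (the helper names are ours; the simulator's internals may differ):
```
MULTIPLIER = 50                                 # HSI futures price multiplier

buys = [(1, 19998), (2, 20001)]                 # (contracts, price)
sells = [(2, 20010)]

bought = sum(q for q, _ in buys)
avg_buy_price = sum(q * p for q, p in buys) / bought      # 20000.0
sold = sum(q for q, _ in sells)
avg_sell_price = sum(q * p for q, p in sells) / sold      # 20010.0

realized_pnl = (avg_sell_price - avg_buy_price) * sold * MULTIPLIER
open_contracts = bought - sold
market_price = 20015
unrealized_pnl = (market_price - avg_buy_price) * open_contracts * MULTIPLIER

print(realized_pnl, unrealized_pnl)             # 1000.0 750.0
```
The realized figure matches (10+10)*50 above, and the unrealized figure matches (20015-20000)*50.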
# End of the Simulation
At the end of the evaluation period, your position should be empty. If not, we will automatically close it.
|
github_jupyter
|
(10+10)*50
(20015-20000)*50
| 0.080837 | 0.961461 |
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import spacy
spacy_nlp = spacy.load('en_core_web_md')
abstract = """Developments in hybrid propulsion technology over the past several decades have made these motors attractive candidates for a variety of applications. In the past, they have been overlooked due to the low regression rate of classical hybrid fuels or in favor of the heritage and commercial availability of liquid or solid propulsion systems. The slow burning rate translates into either a reduced thrust level or the requirement for a complicated, multi-port fuel grain to increase the available burning surface area. These major disadvantages can be mitigated through the use of liquefying hybrid fuels, such as paraffin. Typically, this increase is enough to achieve desired thrust levels with a simple, single port design. Benefits unique to the paraffin-based hybrid design makes it a competitive and viable option for solar system exploration missions. Two specific examples are included to illustrate the advantages of hybrids for solar system exploration. A hybrid design for a Mars Ascent Vehicle as part of a sample return campaign takes advantage of paraffin's tolerance to low and variable temperatures. Hybrid propulsion systems are well suited for planetary orbit insertion because of their ability to throttle, stop and restart at high thrust levels. The high regression rates of liquefying hybrid fuels are due to a fuel entrainment mass transfer mechanism. The design, assembly and results of an experiment to visualize this mechanism are presented. A combustion chamber with three windows allows visual access to the combustion process. A flow conditioning system is employed to create a uniform oxidizer flow at the entrance to the combustion chamber. Experimental visualization of entrainment mass transfer will enable the improvement of combustion models and therefore future hybrid designs."""
import pandas as pd
fast_topics = pd.read_csv('/Users/jpnelson/2020/sul-dlss/ai-etd/data/topic_uri_label_utf8.csv', names=['URI', 'Label'])
topic_labels = {}
for row in fast_topics.iterrows():
topic_labels[row[1]['URI']] = [row[1]['Label'],]
topic_labels
from spacy_lookup import Entity
topic_entity = Entity(keywords_dict=topic_labels, label="FAST_TOPICS")
spacy_nlp.add_pipe(topic_entity)
spacy_nlp.remove_pipe("ner")
doc = spacy_nlp(abstract)
for ent in doc.ents:
print(ent.text, topic_entity.keyword_processor.get_keyword(ent.text))
from spacy import displacy
displacy.render(doc, style='ent')
topic_entity.keyword_processor.get_keyword('mass transfer')
len(topic_entity.keyword_processor)
for row in topic_labels:
if 'propulsion systems' in row.lower():
print(row)
import stanza
stanza_ner = stanza.Pipeline(lang='en', processors='tokenize,ner')
stanza_doc = stanza_ner(abstract)
print(*[f'entity: {ent.text}\ttype: {ent.type}' for ent in stanza_doc.ents], sep='\n')
from transformers import pipeline
hugs_ner = pipeline('ner')
hugs_doc = hugs_ner(abstract)
hugs_doc
topic_kb = spacy.KnowledgeBase(fast_topics)
doc2 = spacy_nlp("""People often engage in behaviors that benefit both themselves and others. In particular, people frequently receive something in exchange for their prosocial behavior. These self-interested benefits can take the form of tangible items, feelings of moral self-regard, or a positive image in the eyes of others. I explore how people navigate these various motives and their effects on prosocial decision making. Chapter 1 examines the inconsistency in existing research showing that appeals to self-interest sometimes increase and sometimes decrease prosocial behavior. I propose that this inconsistency is in part due to the framings of these appeals. Different framings generate different salient reference points, leading to different assessments of the appeal. Study 1 demonstrates that buying an item with the proceeds going to charity evokes a different set of alternative behaviors than donating and receiving an item in return. Studies 2 and 3a-g establish that people are more willing to act, and give more when they do, when reading the former framing than the latter. Study 4 establishes ecological validity by replicating the effect in a field experiment assessing participants' actual charitable contributions. Finally, Study 5 provides additional process evidence via moderation for the proposed mechanism. Chapter 2 further examines how the motivation to feel moral guides people's behavior. I propose that people's efforts to preserve their moral self-regard conform to a moral threshold model. This model predicts that people are primarily concerned with whether their prosocial behavior legitimates the claim that they have acted morally, a claim that often diverges from whether their behavior is in the best interests of the recipient of the prosocial behavior. Specifically, it predicts that for people to feel moral following a prosocial decision, that decision need not have promised the greatest benefit for the recipient but only one larger than at least one other available outcome. Moreover, this model predicts that once people produce a benefit that exceeds this threshold, their moral self-regard is relatively insensitive to the magnitude of benefit that they produce. In seven studies, I test this moral threshold model by examining people's prosocial risk decisions. I find that, compared to risky egoistic decisions, people systematically avoid making risky prosocial decisions that carry the possibility of producing the worst possible outcome in a choice set—even when those decisions are objectively superior. I further find that people's greater aversion to producing the worst possible outcome when the beneficiary is a prosocial cause leads their prosocial (vs. egoistic) risk decisions to be less sensitive to those decisions' maximum possible benefit. Finally, Chapter 3 explores the potential drawbacks that come with behaving prosocially in public. Specifically, I argue that being identified for one's prosocial behavior can sometimes crowd out feelings of moral self-regard. This in turn, leads to a preference for private acts of prosociality over public ones. Five studies provide evidence that, when given the option between engaging in prosocial behavior in public or in private, people often choose the latter—contrary to prior work. In further support of a crowding out effect, people perceived private prosocial behavior to be more moral than public prosocial behavior. 
However, this difference in morality between public and private behavior was malleable and depended on the salient comparison point used, providing evidence that contextual factors play a role in how the identifiability of a prosocial act affects one's moral self-regard.""")
doc2.ents
```
|
github_jupyter
|
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import spacy
spacy_nlp = spacy.load('en_core_web_md')
abstract = """Developments in hybrid propulsion technology over the past several decades have made these motors attractive candidates for a variety of applications. In the past, they have been overlooked due to the low regression rate of classical hybrid fuels or in favor of the heritage and commercial availability of liquid or solid propulsion systems. The slow burning rate translates into either a reduced thrust level or the requirement for a complicated, multi-port fuel grain to increase the available burning surface area. These major disadvantages can be mitigated through the use of liquefying hybrid fuels, such as paraffin. Typically, this increase is enough to achieve desired thrust levels with a simple, single port design. Benefits unique to the paraffin-based hybrid design makes it a competitive and viable option for solar system exploration missions. Two specific examples are included to illustrate the advantages of hybrids for solar system exploration. A hybrid design for a Mars Ascent Vehicle as part of a sample return campaign takes advantage of paraffin's tolerance to low and variable temperatures. Hybrid propulsion systems are well suited for planetary orbit insertion because of their ability to throttle, stop and restart at high thrust levels. The high regression rates of liquefying hybrid fuels are due to a fuel entrainment mass transfer mechanism. The design, assembly and results of an experiment to visualize this mechanism are presented. A combustion chamber with three windows allows visual access to the combustion process. A flow conditioning system is employed to create a uniform oxidizer flow at the entrance to the combustion chamber. Experimental visualization of entrainment mass transfer will enable the improvement of combustion models and therefore future hybrid designs."""
import pandas as pd
fast_topics = pd.read_csv('/Users/jpnelson/2020/sul-dlss/ai-etd/data/topic_uri_label_utf8.csv', names=['URI', 'Label'])
topic_labels = {}
for row in fast_topics.iterrows():
topic_labels[row[1]['URI']] = [row[1]['Label'],]
topic_labels
from spacy_lookup import Entity
topic_entity = Entity(keywords_dict=topic_labels, label="FAST_TOPICS")
spacy_nlp.add_pipe(topic_entity)
spacy_nlp.remove_pipe("ner")
doc = spacy_nlp(abstract)
for ent in doc.ents:
print(ent.text, topic_entity.keyword_processor.get_keyword(ent.text))
from spacy import displacy
displacy.render(doc, style='ent')
topic_entity.keyword_processor.get_keyword('mass transfer')
len(topic_entity.keyword_processor)
for row in topic_labels:
if 'propulsion systems' in row.lower():
print(row)
import stanza
stanza_ner = stanza.Pipeline(lang='en', processors='tokenize,ner')
stanza_doc = stanza_ner(abstract)
print(*[f'entity: {ent.text}\ttype: {ent.type}' for ent in stanza_doc.ents], sep='\n')
from transformers import pipeline
hugs_ner = pipeline('ner')
hugs_doc = hugs_ner(abstract)
hugs_doc
topic_kb = spacy.KnowledgeBase(fast_topics)
doc2 = spacy_nlp("""People often engage in behaviors that benefit both themselves and others. In particular, people frequently receive something in exchange for their prosocial behavior. These self-interested benefits can take the form of tangible items, feelings of moral self-regard, or a positive image in the eyes of others. I explore how people navigate these various motives and their effects on prosocial decision making. Chapter 1 examines the inconsistency in existing research showing that appeals to self-interest sometimes increase and sometimes decrease prosocial behavior. I propose that this inconsistency is in part due to the framings of these appeals. Different framings generate different salient reference points, leading to different assessments of the appeal. Study 1 demonstrates that buying an item with the proceeds going to charity evokes a different set of alternative behaviors than donating and receiving an item in return. Studies 2 and 3a-g establish that people are more willing to act, and give more when they do, when reading the former framing than the latter. Study 4 establishes ecological validity by replicating the effect in a field experiment assessing participants' actual charitable contributions. Finally, Study 5 provides additional process evidence via moderation for the proposed mechanism. Chapter 2 further examines how the motivation to feel moral guides people's behavior. I propose that people's efforts to preserve their moral self-regard conform to a moral threshold model. This model predicts that people are primarily concerned with whether their prosocial behavior legitimates the claim that they have acted morally, a claim that often diverges from whether their behavior is in the best interests of the recipient of the prosocial behavior. Specifically, it predicts that for people to feel moral following a prosocial decision, that decision need not have promised the greatest benefit for the recipient but only one larger than at least one other available outcome. Moreover, this model predicts that once people produce a benefit that exceeds this threshold, their moral self-regard is relatively insensitive to the magnitude of benefit that they produce. In seven studies, I test this moral threshold model by examining people's prosocial risk decisions. I find that, compared to risky egoistic decisions, people systematically avoid making risky prosocial decisions that carry the possibility of producing the worst possible outcome in a choice set—even when those decisions are objectively superior. I further find that people's greater aversion to producing the worst possible outcome when the beneficiary is a prosocial cause leads their prosocial (vs. egoistic) risk decisions to be less sensitive to those decisions' maximum possible benefit. Finally, Chapter 3 explores the potential drawbacks that come with behaving prosocially in public. Specifically, I argue that being identified for one's prosocial behavior can sometimes crowd out feelings of moral self-regard. This in turn, leads to a preference for private acts of prosociality over public ones. Five studies provide evidence that, when given the option between engaging in prosocial behavior in public or in private, people often choose the latter—contrary to prior work. In further support of a crowding out effect, people perceived private prosocial behavior to be more moral than public prosocial behavior. 
However, this difference in morality between public and private behavior was malleable and depended on the salient comparison point used, providing evidence that contextual factors play a role in how the identifiability of a prosocial act affects one's moral self-regard.""")
doc2.ents
| 0.763131 | 0.875999 |
# Naive Bayes
## Importing the libraries
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```
## Importing the dataset
```
dataset = pd.read_csv('Social_Network_Ads.csv')
X = dataset.iloc[:,:-1].values
y = dataset.iloc[:,-1].values
```
## Splitting the dataset into the Training set and Test set
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=.25,random_state=0)
print(X_train)
print(X_test)
print(y_train)
print(y_test)
```
## Feature Scaling
```
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
print(X_train)
print(X_test)
```
## Training the Naive Bayes model on the Training set
```
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(X_train,y_train)
```
## Predicting a new result
```
print(classifier.predict(sc.transform([[30,87000]])))
```
## Predicting the Test set results
```
y_pred = classifier.predict(X_test)
print(np.concatenate((y_pred.reshape(len(y_pred),1),y_test.reshape(len(y_test),1)),1))
```
## Making the Confusion Matrix
```
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
```
## Visualising the Training set results
```
from matplotlib.colors import ListedColormap
X_set, y_set = sc.inverse_transform(X_train), y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25),
np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25))
plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Naive Bayes (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
```
## Visualising the Test set results
```
from matplotlib.colors import ListedColormap
X_set, y_set = sc.inverse_transform(X_test), y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25),
np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25))
plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Naive Bayes (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
```
|
github_jupyter
|
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset = pd.read_csv('Social_Network_Ads.csv')
X = dataset.iloc[:,:-1].values
y = dataset.iloc[:,-1].values
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=.25,random_state=0)
print(X_train)
print(X_test)
print(y_train)
print(y_test)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
print(X_train)
print(X_test)
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(X_train,y_train)
print(classifier.predict(sc.transform([[30,87000]])))
y_pred = classifier.predict(X_test)
print(np.concatenate((y_pred.reshape(len(y_pred),1),y_test.reshape(len(y_test),1)),1))
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
from matplotlib.colors import ListedColormap
X_set, y_set = sc.inverse_transform(X_train), y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25),
np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25))
plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Naive Bayes (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
from matplotlib.colors import ListedColormap
X_set, y_set = sc.inverse_transform(X_test), y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25),
np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25))
plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Naive Bayes (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
| 0.578805 | 0.971806 |
As you will see, the microarray 'processed' data obtained from the ArrayExpress dataset is not really processed.
**data ref:** https://jb.asm.org/content/190/11/3904
```
import pandas as pd
import re
data_file_path = 'ArrayExpressData/E-TABM-386/'
processed_matrix = 'FinalDataMatrix.txt'
toepel_raw = pd.read_csv(data_file_path+processed_matrix,sep='\t',header=[0,1])
toepel_raw.head()
```
The columns are out of order (they start from Day2 LL9) and the index contains contigs (we will map these to proper gene names). The rest of the data looks good: it is already normalized, log2 ratios are calculated, and SD and SE are also given.
We will rearrange the columns following the paper (linked above). The order will be Day1_L2, Day1_L6, Day1_L10, Day1_D2, Day1_D6, Day1_D10, Day2_LL2, Day2_LL6, Day2_LL10, Day2_LD2, Day2_LD6, Day2_LD10.
The columns have two levels; we will edit the column names at both levels and follow the order above.
```
def colNameFunc(name):
names = name.split(';')
pattern = re.compile(r'^PooledBioReps_(Day[1|2])_([D|L]{1,2}[0-9]{1,2})_[1|2]$')
matched = re.match(pattern,names[0])
return '_'.join([matched.group(1),matched.group(2)]) if matched else names[0]
def colNameLevel1(name):
names = name.split(':')
if len(names)==1:
return names[0]
replace = {'Replicates Mean Normalized Log2 Ratio': 'Exp-Val','Replicates SD': 'SD', 'Replicates SE':'SE'}
return replace[names[1]]
toepel_new_col_names = list(map(colNameFunc,list(toepel_raw.columns.levels[0])))
toepel_new_col_level1 = list(map(colNameLevel1,list(toepel_raw.columns.levels[1])))
toepel_raw.columns = toepel_raw.columns.set_levels(toepel_new_col_names,level=0)
toepel_raw.columns = toepel_raw.columns.set_levels(toepel_new_col_level1,level=1)
```
Next we will keep only the log2 ratio columns and the contigs, and collapse the columns to a single level.
```
df = toepel_raw.loc[:,toepel_raw.columns.get_level_values(1).isin({'Exp-Val','Reporter REF'})]
```
Then we will reorder the columns as given above.
```
df.columns = df.columns.droplevel(1)
df = df.reindex(['Day1_L2', 'Day1_L6', 'Day1_L10', 'Day1_D2', 'Day1_D6', 'Day1_D10', 'Day2_LL2', 'Day2_LL6',
'Day2_LL10', 'Day2_LD2', 'Day2_LD6', 'Day2_LD10','Scan REF'],axis=1).set_index('Scan REF')
df.head()
```
Next we will map the contigs to gene names. For that, we will need the contig-to-gene (ORF) mappings.
```
mappings = 'Contig-ORF.txt'
toepel_contig = pd.read_csv(data_file_path+mappings,sep='\t',header=None)
toepel_contig.columns = ['Contig','ORF']
toepel_contig.head()
```
We need to parse the contig column properly and apply the function to all rows of that column.
```
def ContigFunc(name):
return name.replace('ebi.ac.uk:MIAMExpress:Reporter:A-MEXP-951.','')
toepel_contig.Contig = toepel_contig.Contig.apply(ContigFunc)
toepel_contig = toepel_contig.iloc[:-1].set_index('Contig')
```
Create a new dataframe by merging information from both dataframes.
```
new_df = toepel_contig.join(df)
new_df.head()
processed_data_toepel = new_df.reset_index()
processed_data_toepel.head()
```
*We will use this data for the analysis.*
```
processed_data_toepel.to_csv('MicroarrayData/ToepelProcessed.csv',index=False)
```
|
github_jupyter
|
import pandas as pd
import re
data_file_path = 'ArrayExpressData/E-TABM-386/'
processed_matrix = 'FinalDataMatrix.txt'
toepel_raw = pd.read_csv(data_file_path+processed_matrix,sep='\t',header=[0,1])
toepel_raw.head()
def colNameFunc(name):
names = name.split(';')
pattern = re.compile(r'^PooledBioReps_(Day[1|2])_([D|L]{1,2}[0-9]{1,2})_[1|2]$')
matched = re.match(pattern,names[0])
return '_'.join([matched.group(1),matched.group(2)]) if matched else names[0]
def colNameLevel1(name):
names = name.split(':')
if len(names)==1:
return names[0]
replace = {'Replicates Mean Normalized Log2 Ratio': 'Exp-Val','Replicates SD': 'SD', 'Replicates SE':'SE'}
return replace[names[1]]
toepel_new_col_names = list(map(colNameFunc,list(toepel_raw.columns.levels[0])))
toepel_new_col_level1 = list(map(colNameLevel1,list(toepel_raw.columns.levels[1])))
toepel_raw.columns = toepel_raw.columns.set_levels(toepel_new_col_names,level=0)
toepel_raw.columns = toepel_raw.columns.set_levels(toepel_new_col_level1,level=1)
df = toepel_raw.loc[:,toepel_raw.columns.get_level_values(1).isin({'Exp-Val','Reporter REF'})]
df.columns = df.columns.droplevel(1)
df = df.reindex(['Day1_L2', 'Day1_L6', 'Day1_L10', 'Day1_D2', 'Day1_D6', 'Day1_D10', 'Day2_LL2', 'Day2_LL6',
'Day2_LL10', 'Day2_LD2', 'Day2_LD6', 'Day2_LD10','Scan REF'],axis=1).set_index('Scan REF')
df.head()
mappings = 'Contig-ORF.txt'
toepel_contig = pd.read_csv(data_file_path+mappings,sep='\t',header=None)
toepel_contig.columns = ['Contig','ORF']
toepel_contig.head()
def ContigFunc(name):
return name.replace('ebi.ac.uk:MIAMExpress:Reporter:A-MEXP-951.','')
toepel_contig.Contig = toepel_contig.Contig.apply(ContigFunc)
toepel_contig = toepel_contig.iloc[:-1].set_index('Contig')
new_df = toepel_contig.join(df)
new_df.head()
processed_data_toepel = new_df.reset_index()
processed_data_toepel.head()
processed_data_toepel.to_csv('MicroarrayData/ToepelProcessed.csv',index=False)
| 0.260013 | 0.884838 |
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from whatif import Model
```
## Excel "What if?" analysis with Python - Part 5: Documentation
Documentation is important.
>“Code is more often read than written.”
>
> — Guido van Rossum
In this introduction, we'll touch on three specific aspects of documentation:
* comments and docstrings
* the readme file
* Sphinx and restructured text
The Real Python people have developed a very nice guide to documentation. You should read through it and use it as a reference as you are going through this notebook.
* [Documenting Python Code: A Complete Guide](https://realpython.com/documenting-python-code/)
Other good resources include:
* [Hitchhikers Guide to Python - Documentation](https://docs.python-guide.org/writing/documentation/)
* [PEP 8 - Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/) and [PEP 257 - Docstring Conventions](https://www.python.org/dev/peps/pep-0257/)
    - PEP stands for Python Enhancement Proposal; see https://www.python.org/dev/peps/
    - these are great references for learning to use good Python coding style conventions
    - they include information on docstrings and comments as well as on actual code
## Comments and docstrings
At the very minimum, your code should be well commented and include appropriate docstrings. Let's start by looking back at an early version of our `whatif.data_table` function from the `what_if_1_datatable.ipynb` notebook.
```
def data_table(model, scenario_inputs, outputs):
"""Create n-inputs by m-outputs data table.
Parameters
----------
model : object
User defined model object
scenario_inputs : dict of str to sequence
Keys are input variable names and values are sequence of values for this variable.
outputs : list of str
List of output variable names
Returns
-------
results_df : pandas DataFrame
Contains values of all outputs for every combination of scenario inputs
"""
# Clone the model using deepcopy
model_clone = copy.deepcopy(model)
# Create parameter grid
dt_param_grid = list(ParameterGrid(scenario_inputs))
# Create the table as a list of dictionaries
results = []
# Loop over the scenarios
for params in dt_param_grid:
# Update the model clone with scenario specific values
model_clone.update(params)
# Create a result dictionary based on a copy of the scenario inputs
result = copy.copy(params)
# Loop over the list of requested outputs
for output in outputs:
# Compute the output.
out_val = getattr(model_clone, output)()
# Add the output to the result dictionary
result[output] = out_val
# Append the result dictionary to the results list
results.append(result)
# Convert the results list (of dictionaries) to a pandas DataFrame and return it
results_df = pd.DataFrame(results)
return results_df
```
A few things to note:
* The block at the top within the triple quotes is a docstring in what is known as *numpydoc* format. It is pretty verbose but easy for humans to read. Learn more at https://numpydoc.readthedocs.io/en/latest/format.html. This type of block docstring is appropriate for documenting functions and classes.
* Code comments start with a '#', are on their own line, and the line should be less than 72 chars wide.
* The code above is a little over-commented. This is intentional as it's part of a learning tutorial.
* By including docstrings, we get to do this (and use standard tools like `help()`, shown right after the next cell)...
```
data_table?
```
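The question-mark syntax is IPython-specific, but the docstring is just an attribute of the function object, so standard Python tools can read it too; for example:
```
# Works in any Python session, not just IPython
help(data_table)           # formatted help built from the docstring
print(data_table.__doc__)  # the raw docstring text
```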
Now let's look at our `BookstoreModel` class with respect to comments and docstrings. Notice that:
* the individual methods have short concise docstrings - many are one line. This is ok if the meaning of the method and the way it was implemented is pretty straightforward.
* the first line of a multi-line docstring should be a self-contained short description and be followed by a blank line.
```
class BookstoreModel(Model):
"""Bookstore model
This example is based on the "Walton Bookstore" problem in *Business Analytics: Data Analysis and Decision Making* (Albright and Winston) in the chapter on Monte-Carlo simulation. Here's the basic problem (with a few modifications):
* we have to place an order for a perishable product (e.g. a calendar),
* there's a known unit cost for each one ordered,
* we have a known selling price,
* demand is uncertain but we can model it with some simple probability distribution,
* for each unsold item, we can get a partial refund of our unit cost,
* we need to select the order quantity for our one order for the year; orders can only be in multiples of 25.
Attributes
----------
unit_cost: float or array-like of float, optional
Cost for each item ordered (default 7.50)
selling_price : float or array-like of float, optional
Selling price for each item (default 10.00)
unit_refund : float or array-like of float, optional
For each unsold item we receive a refund in this amount (default 2.50)
order_quantity : float or array-like of float, optional
Number of items ordered in the one time we get to order (default 200)
demand : float or array-like of float, optional
Number of items demanded by customers (default 193)
"""
def __init__(self, unit_cost=7.50, selling_price=10.00, unit_refund=2.50,
order_quantity=200, demand=193):
self.unit_cost = unit_cost
self.selling_price = selling_price
self.unit_refund = unit_refund
self.order_quantity = order_quantity
self.demand = demand
def order_cost(self):
"""Compute total order cost"""
return self.unit_cost * self.order_quantity
def num_sold(self):
"""Compute number of items sold
Assumes demand in excess of order quantity is lost.
"""
return np.minimum(self.order_quantity, self.demand)
def sales_revenue(self):
"""Compute total sales revenue based on number sold and selling price"""
return self.num_sold() * self.selling_price
def num_unsold(self):
"""Compute number of items ordered but not sold
Demand was less than order quantity
"""
return np.maximum(0, self.order_quantity - self.demand)
def refund_revenue(self):
"""Compute total sales revenue based on number unsold and unit refund"""
return self.num_unsold() * self.unit_refund
def total_revenue(self):
"""Compute total revenue from sales and refunds"""
return self.sales_revenue() + self.refund_revenue()
def profit(self):
"""Compute profit based on revenue and cost"""
profit = self.sales_revenue() + self.refund_revenue() - self.order_cost()
return profit
```
## The readme file
Every project should have a readme file at the very least. Usually it will contain a high level description of the project and instructions for installing it and obtaining the source code. It may also contain contact info, instructions for contributing, and licensing info, among other things. Write your readme file using markdown so that it will automatically be rendered as html in your GitHub repo and serve as a type of "home page" for your repo. Here's a sample readme file from my whatif project:
# whatif - Do Excel style what if? analysis in Python
The whatif package helps you build business analysis oriented models in Python that you might normally build in Excel.
Specifically, whatif includes functions that are similar to Excel's Data Tables and Goal Seek for doing
sensitivity analysis and "backsolving" (e.g. finding a breakeven point). It also includes functions
for facilitating Monte-Carlo simulation using these models.
Related blog posts
* [Part 1: Models and Data Tables](http://hselab.org/excel-to-python-1-models-datatables.html)
* [Part 2: Goal Seek](http://hselab.org/excel-to-python-2-goalseek.html)
* [Part 3: Monte-carlo simulation](http://hselab.org/excel-to-python-3-simulation.html)
## Features
The whatif package is new and quite small. It contains:
* a base ``Model`` class that can be subclassed to create new models
* Functions for doing data tables (``data_table``) and goal seek (``goal_seek``) on a model
* Functions for doing Monte-Carlo simulation with a model (``simulate``)
* Some Jupyter notebook based example models
## Installation
Clone the whatif project from GitHub:
.. code::
git clone https://github.com/misken/whatif.git
and then you can install it locally by running
the following from the project directory.
.. code::
cd whatif
pip install .
Getting started
---------------
See the [Getting started with whatif](TODO) page in the docs.
License
-------
The project is licensed under the MIT license.
<div class="alert alert-warning">
<b>Even if you write no other documentation, your project should have a readme file, be well commented and include docstrings.</b>
</div>
## Creating documentation with Sphinx and restructured text
[Sphinx](https://www.sphinx-doc.org/en/master/) is a widely used tool for creating Python documentation (and other things) from plain text files written in something known as [reStructuredText](https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html), or reST for short. You'll see that reST is similar to markdown but way more powerful.
It's easy to create a new documentation project using the **sphinx-quickstart** script described on the [Getting Started](https://www.sphinx-doc.org/en/master/usage/quickstart.html) page. For our `cookiecutter-datascience-aap` template, I've already run the quick start script and the docs folder contains the base files for the documentation - the most important being `conf.py`, `index.rst`, and `getting_started.rst`. We'll discuss these shortly, but let's start by exploring a finished reST based site - one of my coursewebs.
Yes, all of my public coursewebs are written in reST and the html is generated from it by Sphinx. I've included my MIS 4460/5460 course website in the `mis5460_w21` folder within the downloads folder. Let's go take a look.
### Exploring a reST based site
A few things to note:
* all of the pages have a `.rst` extension, indicating that they are written in reST. They are JUST PLAIN TEXT files.
* Look at `index.rst` to see a table of contents directive.
* There is some flexibility in how sectioning is done. See [this section of the reST primer](https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html#sections).
* **bold** and *italics* are the same as they are in markdown.
* hyperlinks are written differently than in markdown (see the small reST example after the note snippet below).
* Sphinx uses the `toctree` along with section headings to automatically generate a table of contents and navigation. Very convenient.
* Sphinx gets much of its power from something known as [directives](https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html#directives). Yes, it's easy to make mistakes related to spacing or blank lines or missing colons when using directives and I'm frequently referring to the reST documentation when things aren't working quite right.
- a good example of the power of directives is the yellow warning block above in this document. If you double click it to get into edit mode, you see that it's just raw HTML. Markdown doesn't have a way to easily apply custom styling like this just by indicating that it's a note or a warning. In reST, we just do:
```
.. note:: This is a note admonition.
This is the second line of the first paragraph.
- The note contains all indented body elements
following.
- It includes this bullet list.
```
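To make the bullets about hyperlinks and the `toctree` concrete, here is a small illustrative reST fragment (written for this notebook, not copied from the course site); `getting_started` is the document name mentioned later for the whatif docs:
```
.. an external hyperlink; markdown would write [Sphinx](https://www.sphinx-doc.org/)

See the `Sphinx documentation <https://www.sphinx-doc.org/>`_ for details.

.. a minimal toctree directive of the kind found in index.rst

.. toctree::
   :maxdepth: 2

   getting_started
```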
### Generating documentation for whatif
Now let's go look at the `docs` folder for the whatif project that I included in the downloads folder. Actually, even though I've included my whatif folder, I'm going to clone it from GitHub to show how easy it is to clone a repo.
The URL is https://github.com/misken/whatif.git. Open a git bash shell and navigate to some folder into which you'll clone my whatif repo. Make sure you don't already have your whatif repo in the same folder.
```
git clone https://github.com/misken/whatif.git
```
We'll explore the various files and show how to generate html based documentation. A couple of important things to be alert to:
* In order to be able to automatically generate documentation from our code's docstrings, we need to tell Sphinx to enable a few key extensions - `sphinx.ext.napoleon` and `sphinx.ext.autodoc`. We will do this in the `conf.py` file (see the sketch after this list).
* It still feels a little magical when we can turn text files and code into documentation by typing `make html`.
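To make the first point concrete, here is a sketch of the relevant lines of `docs/conf.py` (the actual file in the repo may differ slightly):
```
# docs/conf.py (sketch) -- enable the extensions needed to build docs from docstrings
extensions = [
    'sphinx.ext.autodoc',   # pull API documentation straight from docstrings
    'sphinx.ext.napoleon',  # parse numpydoc-style docstrings into reST
]
```
An `.rst` page can then pull in the docstrings with a standard autodoc directive such as `.. automodule:: whatif` followed by an indented `:members:` option on the next line.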
Some resources:
* [Documenting Python Code: A Complete Guide](https://realpython.com/documenting-python-code/)
* [Hitchhikers Guide to Python - Documentation](https://docs.python-guide.org/writing/documentation/)
* [PEP 8 - Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/)
* [PEP 257 - Docstring Conventions](https://www.python.org/dev/peps/pep-0257/)
* [Sphinx](https://www.sphinx-doc.org/en/master/)
* [reStructuredText primer](https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html)
* [autodoc extension](https://www.sphinx-doc.org/en/master/usage/quickstart.html#autodoc) - generate docs from docstrings in code
* [napoleon extension](https://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html#module-sphinx.ext.napoleon) - converts numpydoc to reST
* [Read the Docs](https://readthedocs.org/)
* [More doc tips](https://github.com/choderalab/software-development/blob/master/DOCUMENTATION.md)
|
github_jupyter
|
import copy  # needed by the data_table example shown below
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import ParameterGrid  # needed by the data_table example shown below
from whatif import Model
def data_table(model, scenario_inputs, outputs):
"""Create n-inputs by m-outputs data table.
Parameters
----------
model : object
User defined model object
scenario_inputs : dict of str to sequence
Keys are input variable names and values are sequence of values for this variable.
outputs : list of str
List of output variable names
Returns
-------
results_df : pandas DataFrame
Contains values of all outputs for every combination of scenario inputs
"""
# Clone the model using deepcopy
model_clone = copy.deepcopy(model)
# Create parameter grid
dt_param_grid = list(ParameterGrid(scenario_inputs))
# Create the table as a list of dictionaries
results = []
# Loop over the scenarios
for params in dt_param_grid:
# Update the model clone with scenario specific values
model_clone.update(params)
# Create a result dictionary based on a copy of the scenario inputs
result = copy.copy(params)
# Loop over the list of requested outputs
for output in outputs:
# Compute the output.
out_val = getattr(model_clone, output)()
# Add the output to the result dictionary
result[output] = out_val
# Append the result dictionary to the results list
results.append(result)
# Convert the results list (of dictionaries) to a pandas DataFrame and return it
results_df = pd.DataFrame(results)
return results_df
data_table?
class BookstoreModel(Model):
"""Bookstore model
This example is based on the "Walton Bookstore" problem in *Business Analytics: Data Analysis and Decision Making* (Albright and Winston) in the chapter on Monte-Carlo simulation. Here's the basic problem (with a few modifications):
* we have to place an order for a perishable product (e.g. a calendar),
* there's a known unit cost for each one ordered,
* we have a known selling price,
* demand is uncertain but we can model it with some simple probability distribution,
* for each unsold item, we can get a partial refund of our unit cost,
* we need to select the order quantity for our one order for the year; orders can only be in multiples of 25.
Attributes
----------
unit_cost: float or array-like of float, optional
Cost for each item ordered (default 7.50)
selling_price : float or array-like of float, optional
Selling price for each item (default 10.00)
unit_refund : float or array-like of float, optional
For each unsold item we receive a refund in this amount (default 2.50)
order_quantity : float or array-like of float, optional
Number of items ordered in the one time we get to order (default 200)
demand : float or array-like of float, optional
Number of items demanded by customers (default 193)
"""
def __init__(self, unit_cost=7.50, selling_price=10.00, unit_refund=2.50,
order_quantity=200, demand=193):
self.unit_cost = unit_cost
self.selling_price = selling_price
self.unit_refund = unit_refund
self.order_quantity = order_quantity
self.demand = demand
def order_cost(self):
"""Compute total order cost"""
return self.unit_cost * self.order_quantity
def num_sold(self):
"""Compute number of items sold
Assumes demand in excess of order quantity is lost.
"""
return np.minimum(self.order_quantity, self.demand)
def sales_revenue(self):
"""Compute total sales revenue based on number sold and selling price"""
return self.num_sold() * self.selling_price
def num_unsold(self):
"""Compute number of items ordered but not sold
Demand was less than order quantity
"""
return np.maximum(0, self.order_quantity - self.demand)
def refund_revenue(self):
"""Compute total sales revenue based on number unsold and unit refund"""
return self.num_unsold() * self.unit_refund
def total_revenue(self):
"""Compute total revenue from sales and refunds"""
return self.sales_revenue() + self.refund_revenue()
def profit(self):
"""Compute profit based on revenue and cost"""
profit = self.sales_revenue() + self.refund_revenue() - self.order_cost()
return profit
.. note:: This is a note admonition.
This is the second line of the first paragraph.
- The note contains all indented body elements
following.
- It includes this bullet list.
git clone https://github.com/misken/whatif.git
| 0.821116 | 0.916596 |
```
import json
import logging
import os
from bs4 import BeautifulSoup
import requests
import ricecooker
from engageny_chef import get_text, get_parsed_html_from_url, make_fully_qualified_url
from engageny_chef import ENGAGENY_CC_START_URL
from engageny_chef import DATA_DIR, TREES_DATA_DIR, CRAWLING_STAGE_OUTPUT
from engageny_chef import LOGGER
LOGGER.addHandler(logging.StreamHandler()) # needed for logging in to work in notebook
from pprint import pprint as pp
from re import compile
# basic reconnaissance ...
doc = get_parsed_html_from_url(ENGAGENY_CC_START_URL)
dual_toc_div = doc.find('div', id='mini-panel-common_core_curriculum')
ELA_toc = dual_toc_div.find('div', class_='panel-col-first')
MATH_toc = dual_toc_div.find('div', class_='panel-col-last')
MATH_grades_lis = MATH_toc.find_all('li')
ELA_grades_lis = ELA_toc.find_all('li')
MATH_grades = []
CONTENT_OR_RESOURCE_URL_RE = compile(r'/(content|resource)/*')
pp(MATH_toc.find_all('a', attrs={'href': CONTENT_OR_RESOURCE_URL_RE }))
for grade_li in MATH_grades_lis:
grade_path = grade_li.find('a')['href']
grade_url = make_fully_qualified_url(grade_path)
MATH_grades.append({
'type': 'grade',
'url': grade_url,
'title': get_text(grade_li)
})
pp(MATH_grades)
grade_page = get_parsed_html_from_url(MATH_grades[0]['url'])
pp(grade_page)
```
```
pp(grade_curriculum_toc)
grade_url_re = compile(r'^/resource/(\w)+-(\w)+-module-(\d)+$')
pp(grade_url_re.match('/resource/prekindergarten-mathematics-module-1'))
grade_children = []  # collects the module entries referenced below
for grade in MATH_grades:
grade_page = get_parsed_html_from_url(grade['url'])
grade_curriculum_toc = grade_page.find('div', class_='nysed-book-outline curriculum-map')
for module_li in grade_curriculum_toc.find_all('li', class_='module'):
details = module_li.find('div', class_='details').find('a', attrs={'href': grade_url_re })
topics = []
for topic in module_li.find('div', class_='tree').find_all('li', class_='topic'):
pass
grade_children.append({
'title': get_text(details),
'url': make_fully_qualified_url(details['href']),
'topics': topics,
})
## Debugging web_resource_tree.json
def print_web_resource_tree(web_resource_tree):
print('----')
print('WEB RESOURCE TREE:', 'title:', web_resource_tree['title'], ' len(children) =', len(web_resource_tree['children']))
# print(' description:', web_resource_tree['description'][0:60]+'..')
for category in web_resource_tree['children']:
print(' - Category title:', category['title']) # len(category['title']))
# print(' desciption:', category['description'][0:60]+'..') # len(category['description']))
for resource in category['children']:
# print(resource)
if 'kind' not in resource:
resource['kind'] = resource['type']
print(' - Resource (%s):' % resource['kind'], resource['title'])
for child in resource['children']:
# print(child)
print(' - Child (%s):' % child['kind'], child['title'])
print('\n\n')
with open(os.path.join(TREES_DATA_DIR,CRAWLING_STAGE_OUTPUT)) as json_file:
web_resource_tree = json.load(json_file)
print_web_resource_tree(web_resource_tree)
```
## Calling chef method for debugging...
```
from engageny_chef import EngageNYChef
chef = EngageNYChef()
chef_args = None
chef_options = {}
chef.crawl(chef_args, chef_options)
```
|
github_jupyter
|
import json
import logging
import os
from bs4 import BeautifulSoup
import requests
import ricecooker
from engageny_chef import get_text, get_parsed_html_from_url, make_fully_qualified_url
from engageny_chef import ENGAGENY_CC_START_URL
from engageny_chef import DATA_DIR, TREES_DATA_DIR, CRAWLING_STAGE_OUTPUT
from engageny_chef import LOGGER
LOGGER.addHandler(logging.StreamHandler()) # needed for logging in to work in notebook
from pprint import pprint as pp
from re import compile
# basic reconnaissance ...
doc = get_parsed_html_from_url(ENGAGENY_CC_START_URL)
dual_toc_div = doc.find('div', id='mini-panel-common_core_curriculum')
ELA_toc = dual_toc_div.find('div', class_='panel-col-first')
MATH_toc = dual_toc_div.find('div', class_='panel-col-last')
MATH_grades_lis = MATH_toc.find_all('li')
ELA_grades_lis = ELA_toc.find_all('li')
MATH_grades = []
CONTENT_OR_RESOURCE_URL_RE = compile(r'/(content|resource)/*')
pp(MATH_toc.find_all('a', attrs={'href': CONTENT_OR_RESOURCE_URL_RE }))
for grade_li in MATH_grades_lis:
grade_path = grade_li.find('a')['href']
grade_url = make_fully_qualified_url(grade_path)
MATH_grades.append({
'type': 'grade',
'url': grade_url,
'title': get_text(grade_li)
})
pp(MATH_grades)
grade_page = get_parsed_html_from_url(MATH_grades[0]['url'])
pp(grade_page)
pp(grade_curriculum_toc)
grade_url_re = compile(r'^/resource/(\w)+-(\w)+-module-(\d)+$')
pp(grade_url_re.match('/resource/prekindergarten-mathematics-module-1'))
grade_children = []  # collects the module entries referenced below
for grade in MATH_grades:
grade_page = get_parsed_html_from_url(grade['url'])
grade_curriculum_toc = grade_page.find('div', class_='nysed-book-outline curriculum-map')
for module_li in grade_curriculum_toc.find_all('li', class_='module'):
details = module_li.find('div', class_='details').find('a', attrs={'href': grade_url_re })
topics = []
for topic in module_li.find('div', class_='tree').find_all('li', class_='topic'):
pass
grade_children.append({
'title': get_text(details),
'url': make_fully_qualified_url(details['href']),
'topics': topics,
})
## Debugging web_resource_tree.json
def print_web_resource_tree(web_resource_tree):
print('----')
print('WEB RESOURCE TREE:', 'title:', web_resource_tree['title'], ' len(children) =', len(web_resource_tree['children']))
# print(' description:', web_resource_tree['description'][0:60]+'..')
for category in web_resource_tree['children']:
print(' - Category title:', category['title']) # len(category['title']))
# print(' desciption:', category['description'][0:60]+'..') # len(category['description']))
for resource in category['children']:
# print(resource)
if 'kind' not in resource:
resource['kind'] = resource['type']
print(' - Resource (%s):' % resource['kind'], resource['title'])
for child in resource['children']:
# print(child)
print(' - Child (%s):' % child['kind'], child['title'])
print('\n\n')
with open(os.path.join(TREES_DATA_DIR,CRAWLING_STAGE_OUTPUT)) as json_file:
web_resource_tree = json.load(json_file)
print_web_resource_tree(web_resource_tree)
from engageny_chef import EngageNYChef
chef = EngageNYChef()
chef_args = None
chef_options = {}
chef.crawl(chef_args, chef_options)
| 0.08253 | 0.131954 |
<a href="https://colab.research.google.com/github/ceos-seo/odc-colab/blob/master/notebooks/02.10.Colab_VIIRS_Night_Lights.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Downloads the odc-colab Python module and runs it to set up ODC.
```
!wget -nc https://raw.githubusercontent.com/ceos-seo/odc-colab/master/odc_colab.py
from odc_colab import odc_colab_init
odc_colab_init(install_odc_gee=True)
```
Downloads an existing index and populates the new ODC environment with it.
```
from odc_colab import populate_db
populate_db()
```
# Nightlight Radiance from VIIRS
This notebook demonstrates nightlight radiance measurements from VIIRS. These measurements can be used to study urban growth and loss of power from storms. The data is available as monthly mean radiance from April-2012 through December-2020 at a resolution of 15 arc-seconds (approximately 500 meters). More information about this dataset can be found [HERE](https://developers.google.com/earth-engine/datasets/catalog/NOAA_VIIRS_DNB_MONTHLY_V1_VCMCFG).
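For reference, one arc-second at the equator spans roughly 40,075 km / 360 / 3600 ≈ 30.9 m, so 15 arc-seconds is about 15 × 30.9 ≈ 464 m, which is where the "approximately 500 meters" figure comes from.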
## Load Data Cube Configuration and Import Utilities
```
# Load Data Cube Configuration
from odc_gee import earthengine
dc = earthengine.Datacube(app='Nightlights')
# Import Utilities
import matplotlib.pyplot as plt
import pandas as pd
import sys
products = dc.list_products()
display_columns = ["name",
"description",
"platform",
"instrument",
"crs",
"resolution"]
products[display_columns].sort_index()
# Select the Product
product = 'viirs_google'
```
## <span id="define_extents">Define the Extents of the Analysis [▴](#top)</span>
```
# MODIFY HERE
# Select the center of an analysis region (lat_long)
# Adjust the surrounding box size (box_size) around the center (in degrees)
# Remove the comment tags (#) below to change the sample location
# Kumasi, Ghana, Africa
lat_long = (6.69, -1.63)
box_size_deg = 0.75
# Calculate the latitude and longitude bounds of the analysis box
latitude = (lat_long[0]-box_size_deg/2, lat_long[0]+box_size_deg/2)
longitude = (lat_long[1]-box_size_deg/2, lat_long[1]+box_size_deg/2)
# Define time window (START, END)
# Time format is (YEAR-MM)
# Range of data is 2012-04 to 2020-12
time = ('2012-04', '2020-12')
# The code below renders a map that can be used to view the region.
from utils.data_cube_utilities.dc_display_map import display_map
display_map(latitude,longitude)
```
## Load the data
```
# The loaded data is monthly mean radiance
dataset = dc.load(product=product,latitude=latitude,longitude=longitude,time=time,measurements=['avg_rad'])
# Plot the monthly time slice data in a table
# If there are more than 10 values, only the beginning and end will be listed
pd.DataFrame({'time': dataset.time.values})
```
## View the Nightlight Radiance for a selected time slice
```
# MODIFY HERE
# Select one of the time slices and create an output image.
# Time slices are numbered from 0 to x and shown in the table above
slice = 0 # select the time slice number here
# Plot the mean radiance for a given time slice
# Adjust the MIN and MAX range for the data to enhance colors
# Adjust the size to fill the plotting window
dataset.isel(time=slice).avg_rad.plot.imshow(cmap=plt.cm.nipy_spectral, vmin=0, vmax=30, size=8,
aspect=dataset.dims['longitude']/dataset.dims['latitude']);
# Plot mean monthly radiance
# Some data may show "zero" radiance which means there are no datasets
# Some data may be skewed high or low due to cloud contamination of part of the scene
img = dataset['avg_rad'].mean(dim=['longitude','latitude']).plot(figsize=(12,4)
,marker='o',markersize=6,linewidth=1)
img[0].axes.set_title("Monthly Mean Average Radiance");
```
|
github_jupyter
|
!wget -nc https://raw.githubusercontent.com/ceos-seo/odc-colab/master/odc_colab.py
from odc_colab import odc_colab_init
odc_colab_init(install_odc_gee=True)
from odc_colab import populate_db
populate_db()
# Load Data Cube Configuration
from odc_gee import earthengine
dc = earthengine.Datacube(app='Nightlights')
# Import Utilities
import matplotlib.pyplot as plt
import pandas as pd
import sys
products = dc.list_products()
display_columns = ["name",
"description",
"platform",
"instrument",
"crs",
"resolution"]
products[display_columns].sort_index()
# Select the Product
product = 'viirs_google'
# MODIFY HERE
# Select the center of an analysis region (lat_long)
# Adjust the surrounding box size (box_size) around the center (in degrees)
# Remove the comment tags (#) below to change the sample location
# Kumasi, Ghana, Africa
lat_long = (6.69, -1.63)
box_size_deg = 0.75
# Calculate the latitude and longitude bounds of the analysis box
latitude = (lat_long[0]-box_size_deg/2, lat_long[0]+box_size_deg/2)
longitude = (lat_long[1]-box_size_deg/2, lat_long[1]+box_size_deg/2)
# Define time window (START, END)
# Time format is (YEAR-MM)
# Range of data is 2012-04 to 2020-12
time = ('2012-04', '2020-12')
# The code below renders a map that can be used to view the region.
from utils.data_cube_utilities.dc_display_map import display_map
display_map(latitude,longitude)
# The loaded data is monthly mean radiance
dataset = dc.load(product=product,latitude=latitude,longitude=longitude,time=time,measurements=['avg_rad'])
# Plot the monthly time slice data in a table
# If there are more than 10 values, only the beginning and end will be listed
pd.DataFrame({'time': dataset.time.values})
# MODIFY HERE
# Select one of the time slices and create an output image.
# Time slices are numbered from 0 to x and shown in the table above
slice = 0 # select the time slice number here
# Plot the mean radiance for a given time slice
# Adjust the MIN and MAX range for the data to enhance colors
# Adjust the size to fill the plotting window
dataset.isel(time=slice).avg_rad.plot.imshow(cmap=plt.cm.nipy_spectral, vmin=0, vmax=30, size=8,
aspect=dataset.dims['longitude']/dataset.dims['latitude']);
# Plot mean monthly radiance
# Some data may show "zero" radiance which means there are no datasets
# Some data may be skewed high or low due to cloud contamination of part of the scene
img = dataset['avg_rad'].mean(dim=['longitude','latitude']).plot(figsize=(12,4)
,marker='o',markersize=6,linewidth=1)
img[0].axes.set_title("Monthly Mean Average Radiance");
| 0.712432 | 0.959535 |
SIRGame aims to simulate an infectious virus epidemic following a classic SIR-type model.
To that end, we simulate a population (whose individuals are called "agents") evolving on a toric grid world. The dimensions of the grid, the number of agents, and the parameters of the epidemic model are all customizable in a user-friendly way.
To learn more about the context of the project, we invite you to read the "readme.md" file and the instructions we were given, which are described in the "tp-SIR.pdf" file.
This notebook aims to teach you how to use SIRGame and show you all its functionalities.
To use SIRGame, you first need to have tkinter installed; it is imported as follows:
```
import tkinter as tk
```
You can then start a simulation by executing the "gui.py" file.
```
%run -i "../gui.py"
```
You should then see the following window pop up:
<img src="./Capture1.jpg">
Our first step is to create the grid. To that end, we have to select a grid dimension with the "grid size" panel and then click the "create grid" button.
<img src="./Capture2.jpg">
The next step consists in creating the agents of the population. The "number of agents" panel lets you set its size, and the "initial fraction of infected" controls the proportion of the population that is infected at t=0.
The other three panels let you set the probability that a susceptible agent becomes infected when it comes into contact with an infected one, and the probabilities of recovery and death of an infected agent at each time step.
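To make the role of these three probabilities concrete, here is a minimal illustrative sketch of a per-time-step state update; the function and parameter names are hypothetical and are not SIRGame's actual code:
```
import random

def step_agent(state, touches_infected, p_infect, p_recover, p_death):
    """Hypothetical per-step transition for one agent (illustration only)."""
    if state == "susceptible" and touches_infected and random.random() < p_infect:
        return "infected"
    if state == "infected":
        if random.random() < p_death:
            return "dead"
        if random.random() < p_recover:
            return "recovered"
    return state
```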
Once the parameters are set, we can create the population by clicking the "Create agents" button.
<img src="./Capture3.jpg">
You can then start the simulation by clicking the "Play/Pause" button on the bottom right of the window.
The agents are represented by dots (green for susceptible, red for infected, blue for recovered and gray for dead).
You can speed up or slow down the simulation using the "Speed" panel, and pause it at any time using the "Play/Pause" button.
The stats and time step of the simulation are displayed at the bottom right.
This software is still in its early phase of development, and some functionalities are yet to be implemented. Among those:
- The "Set Obstacles" button aims to create obstacles that agents cannot cross. While it works properly, the wall it creates is not displayed in the current version of the build.
- The "Select output file" button aims to let the user export the data of a simulation to a .txt file. It has not been implemented yet.
SIRGame
Copyright © 2020 Nathan GAUTHIER
Permission is hereby granted, free of charge, to any person obtaining a copy of this software
and associated documentation files (the "Software"), to deal in the Software without restriction,
including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or
substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE
FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
github_jupyter
|
import tkinter as tk
%run -i "../gui.py"
| 0.097837 | 0.977328 |
Bayesian Statistics Made Simple
===
Code and exercises from my workshop on Bayesian statistics in Python.
Copyright 2016 Allen Downey
MIT License: https://opensource.org/licenses/MIT
```
from __future__ import print_function, division
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
from thinkbayes2 import Pmf, Suite
import thinkplot
```
Working with Pmfs
---
Create a Pmf object to represent a six-sided die.
```
d6 = Pmf()
```
A Pmf is a map from possible outcomes to their probabilities.
```
for x in [1,2,3,4,5,6]:
d6[x] = 1
```
Initially the probabilities don't add up to 1.
```
d6.Print()
```
`Normalize` adds up the probabilities and divides through. The return value is the total probability before normalizing.
```
d6.Normalize()
```
Now the Pmf is normalized.
```
d6.Print()
```
And we can compute its mean (which only works if it's normalized).
```
d6.Mean()
```
`Random` chooses a random value from the Pmf.
```
d6.Random()
```
`thinkplot` provides methods for plotting Pmfs in a few different styles.
```
thinkplot.Hist(d6)
```
**Exercise 1:** The Pmf object provides `__add__`, so you can use the `+` operator to compute the Pmf of the sum of two dice.
Compute and plot the Pmf of the sum of two 6-sided dice.
```
# Solution goes here
```
**Exercise 2:** Suppose I roll two dice and tell you the result is greater than 3.
Plot the Pmf of the remaining possible outcomes and compute its mean.
```
# Solution goes here
```
The cookie problem
---
Create a Pmf with two equally likely hypotheses.
```
cookie = Pmf(['Bowl 1', 'Bowl 2'])
cookie.Print()
```
Update each hypothesis with the likelihood of the data (a vanilla cookie).
```
cookie['Bowl 1'] *= 0.75
cookie['Bowl 2'] *= 0.5
cookie.Normalize()
```
Print the posterior probabilities.
```
cookie.Print()
```
**Exercise 3:** Suppose we put the first cookie back, stir, choose again from the same bowl, and get a chocolate cookie.
Hint: The posterior (after the first cookie) becomes the prior (before the second cookie).
```
# Solution goes here
```
**Exercise 4:** Instead of doing two updates, what if we collapse the two pieces of data into one update?
Re-initialize `Pmf` with two equally likely hypotheses and perform one update based on two pieces of data, a vanilla cookie and a chocolate cookie.
The result should be the same regardless of how many updates you do (or the order of updates).
```
# Solution goes here
```
The dice problem
---
Create a Suite to represent dice with different numbers of sides.
```
pmf = Pmf([4, 6, 8, 12])
pmf.Print()
```
**Exercise 5:** We'll solve this problem two ways. First we'll do it "by hand", as we did with the cookie problem; that is, we'll multiply each hypothesis by the likelihood of the data, and then renormalize.
In the space below, update `suite` based on the likelihood of the data (rolling a 6), then normalize and print the results.
```
# Solution goes here
```
**Exercise 6:** Now let's do the same calculation using `Suite.Update`.
Write a definition for a new class called `Dice` that extends `Suite`. Then define a method called `Likelihood` that takes `data` and `hypo` and returns the probability of the data (the outcome of rolling the die) for a given hypothesis (number of sides on the die).
Hint: What should you do if the outcome exceeds the hypothetical number of sides on the die?
Here's an outline to get you started:
```
class Dice(Suite):
# hypo is the number of sides on the die
# data is the outcome
def Likelihood(self, data, hypo):
return 1
# Solution goes here
```
Now we can create a `Dice` object and update it.
```
dice = Dice([4, 6, 8, 12])
dice.Update(6)
dice.Print()
```
If we get more data, we can perform more updates.
```
for roll in [8, 7, 7, 5, 4]:
dice.Update(roll)
```
Here are the results.
```
dice.Print()
```
The German tank problem
---
The German tank problem is actually identical to the dice problem.
```
class Tank(Suite):
# hypo is the number of tanks
# data is an observed serial number
def Likelihood(self, data, hypo):
if data > hypo:
return 0
else:
return 1 / hypo
```
Here are the posterior probabilities after seeing Tank #37.
```
tank = Tank(range(100))
tank.Update(37)
thinkplot.Pdf(tank)
tank.Mean()
```
**Exercise 7:** Suppose we see another tank with serial number 17. What effect does this have on the posterior probabilities?
Update the suite again with the new data and plot the results.
```
# Solution goes here
```
The Euro problem
---
**Exercise 8:** Write a class definition for `Euro`, which extends `Suite` and defines a likelihood function that computes the probability of the data (heads or tails) for a given value of `x` (the probability of heads).
Note that `hypo` is in the range 0 to 100. Here's an outline to get you started.
```
class Euro(Suite):
def Likelihood(self, data, hypo):
"""
hypo is the prob of heads (0-100)
data is a string, either 'H' or 'T'
"""
return 1
# Solution goes here
```
We'll start with a uniform distribution from 0 to 100.
```
euro = Euro(range(101))
thinkplot.Pdf(euro)
```
Now we can update with a single heads:
```
euro.Update('H')
thinkplot.Pdf(euro)
```
Another heads:
```
euro.Update('H')
thinkplot.Pdf(euro)
```
And a tails:
```
euro.Update('T')
thinkplot.Pdf(euro)
```
Starting over, here's what it looks like after 7 heads and 3 tails.
```
euro = Euro(range(101))
for outcome in 'HHHHHHHTTT':
euro.Update(outcome)
thinkplot.Pdf(euro)
euro.MaximumLikelihood()
```
The value with the maximum posterior probability is 70%, which matches the observed proportion of heads (7 out of 10).
Here are the posterior probabilities after 140 heads and 110 tails.
```
euro = Euro(range(101))
evidence = 'H' * 140 + 'T' * 110
for outcome in evidence:
euro.Update(outcome)
thinkplot.Pdf(euro)
```
The posterior mean is about 56%.
```
euro.Mean()
```
So is the value with maximum a posteriori probability (MAP).
```
euro.MAP()
```
The posterior credible interval has a 90% chance of containing the true value (provided that the prior distribution truly represents our background knowledge).
```
euro.CredibleInterval(90)
```
## Swamping the prior
The following function makes a Euro object with a triangle prior.
```
def TrianglePrior():
"""Makes a Suite with a triangular prior."""
suite = Euro(label='triangle')
for x in range(0, 51):
suite[x] = x
for x in range(51, 101):
suite[x] = 100-x
suite.Normalize()
return suite
```
And here's what it looks like:
```
euro1 = Euro(range(101), label='uniform')
euro2 = TrianglePrior()
thinkplot.Pdfs([euro1, euro2])
thinkplot.Config(title='Priors')
```
**Exercise 9:** Update euro1 and euro2 with the same data we used before (140 heads and 110 tails) and plot the posteriors. How big is the difference in the means?
```
# Solution goes here
```
|
github_jupyter
|
from __future__ import print_function, division
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
from thinkbayes2 import Pmf, Suite
import thinkplot
d6 = Pmf()
for x in [1,2,3,4,5,6]:
d6[x] = 1
d6.Print()
d6.Normalize()
d6.Print()
d6.Mean()
d6.Random()
thinkplot.Hist(d6)
# Solution goes here
# Solution goes here
cookie = Pmf(['Bowl 1', 'Bowl 2'])
cookie.Print()
cookie['Bowl 1'] *= 0.75
cookie['Bowl 2'] *= 0.5
cookie.Normalize()
cookie.Print()
# Solution goes here
# Solution goes here
pmf = Pmf([4, 6, 8, 12])
pmf.Print()
# Solution goes here
class Dice(Suite):
# hypo is the number of sides on the die
# data is the outcome
def Likelihood(self, data, hypo):
return 1
# Solution goes here
dice = Dice([4, 6, 8, 12])
dice.Update(6)
dice.Print()
for roll in [8, 7, 7, 5, 4]:
dice.Update(roll)
dice.Print()
class Tank(Suite):
# hypo is the number of tanks
# data is an observed serial number
def Likelihood(self, data, hypo):
if data > hypo:
return 0
else:
return 1 / hypo
tank = Tank(range(100))
tank.Update(37)
thinkplot.Pdf(tank)
tank.Mean()
# Solution goes here
class Euro(Suite):
def Likelihood(self, data, hypo):
"""
hypo is the prob of heads (0-100)
data is a string, either 'H' or 'T'
"""
return 1
# Solution goes here
euro = Euro(range(101))
thinkplot.Pdf(euro)
euro.Update('H')
thinkplot.Pdf(euro)
euro.Update('H')
thinkplot.Pdf(euro)
euro.Update('T')
thinkplot.Pdf(euro)
euro = Euro(range(101))
for outcome in 'HHHHHHHTTT':
euro.Update(outcome)
thinkplot.Pdf(euro)
euro.MaximumLikelihood()
euro = Euro(range(101))
evidence = 'H' * 140 + 'T' * 110
for outcome in evidence:
euro.Update(outcome)
thinkplot.Pdf(euro)
euro.Mean()
euro.MAP()
euro.CredibleInterval(90)
def TrianglePrior():
"""Makes a Suite with a triangular prior."""
suite = Euro(label='triangle')
for x in range(0, 51):
suite[x] = x
for x in range(51, 101):
suite[x] = 100-x
suite.Normalize()
return suite
euro1 = Euro(range(101), label='uniform')
euro2 = TrianglePrior()
thinkplot.Pdfs([euro1, euro2])
thinkplot.Config(title='Priors')
# Solution goes here
| 0.712832 | 0.992981 |
<img style="float: right;" src="https://docs.expert.ai/logo.png" width="150px">
# Detect Personally Identifiable Information (PII) in Italian documents
In this notebook you will learn how to detect [PII](https://en.wikipedia.org/wiki/Personal_data) in Italian documents using the expert.ai [Natural Language API](https://docs.expert.ai/nlapi).
Detecting PII allows you to determine if a document contains sensitive data and helps you create a new version of the document in which that data is [de-identified](https://en.wikipedia.org/wiki/De-identification).
## Requisites
This notebook uses [expertai-nlapi](https://pypi.org/project/expertai-nlapi/) to access the Natural Language API and [pandas](https://pypi.org/project/pandas/) to present results, so install both packages:
```
!pip install expertai-nlapi
!pip install pandas
```
To access the API you need to set two environment variables with your expert.ai developer account credentials.
If you don't have an account already, get one for free by signing up on [developer.expert.ai](https://developer.expert.ai).
Replace `YOUR USERNAME` and `YOUR PASSWORD` with your credentials:
```
import os
os.environ["EAI_USERNAME"] = 'YOUR USERNAME'
os.environ["EAI_PASSWORD"] = 'YOUR PASSWORD'
```
## Instantiate the Natural Language API client
```
from expertai.nlapi.cloud.client import ExpertAiClient
import json, os
client = ExpertAiClient()
```
## Load the documents from the `documents_it` folder
The `documents_it` folder is located in the folder of the [GitHub repository](https://github.com/therealexpertai/) that contains this notebook.
```
filesTexts=[]
for fileName in os.listdir("documents_it"):
with open('documents_it/' + fileName) as file:
filesTexts.append({'text':file.read(), 'fileName':fileName})
```
## Detect PII in all the documents
```
filesResults=[]
for fileText in filesTexts:
filesResults.append({
'fileName': fileText['fileName'],
'results': client.detection(body={"document": {"text": fileText['text']}}, params={'language': 'it','detector':'pii'})
})
```
## Present detected information with a pandas DataFrame
```
import pandas as pandas
import json
from IPython.core.display import display, HTML
pandas.set_option('display.max_rows', None)
mapColoredCell = set()
def coloredCell(s):
key = '-'.join(s.name[0:3])
if(key not in mapColoredCell):
mapColoredCell.add(key)
return ['border-top: 1px solid !important']
return['']
dataToShow = []
for fileResults in filesResults:
mapInstances = {}
fieldName=""
for extraction in fileResults['results'].extractions:
if extraction.template in mapInstances:
mapInstances[extraction.template] += 1
else:
mapInstances[extraction.template] = 1
dateCount=0;
for field in extraction.fields:
fieldName = field.name
if field.name == "dateTime":
dateCount+=1
fieldName+=" #" + str(dateCount)
row = {
"file": fileResults['fileName'],
"template": extraction.template,
"instance": '#' + str(mapInstances[extraction.template]),
'field': fieldName,
'value': field.value
}
dataToShow.append(row)
dataFrame = pandas.DataFrame(dataToShow)
dataFrame.set_index(['file', 'template', 'instance', 'field'], inplace=True)
leftAlignedDataFrame = dataFrame.style.set_properties(**{'text-align': 'left', 'padding-left': '30px'})
leftAlignedDataFrame.apply(coloredCell,axis=1)
display(leftAlignedDataFrame)
```
## Print the JSON-LD object
The PII detector output includes a [JSON-LD](https://json-ld.org/) object. It contains exactly the same detected information, but in JSON-LD format, with the data types linked to [schema.org](https://schema.org/) types.
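If you have not seen JSON-LD before, the fragment below is a purely illustrative sketch of its general shape (an `@context` pointing at schema.org plus typed objects); it is not the actual output of the detector:
```
# Illustrative only -- not the Natural Language API's real response
example_json_ld = {
    "@context": "https://schema.org/",
    "@type": "Person",
    "name": "Mario Rossi",            # hypothetical value
    "email": "[email protected]"   # hypothetical value
}
```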
```
for fileResults in filesResults:
print("************************")
print (fileResults['fileName']+": ")
print(json.dumps(fileResults['results'].extra_data, indent=2, sort_keys=True))
print("************************")
```
Congratulations, you're done, it's that simple!
Read the [documentation](https://docs.expert.ai/nlapi/latest/guide/detectors/#pii-detector) to know more about the capabilities of the PII detector.
|
github_jupyter
|
!pip install expertai-nlapi
!pip install pandas
import os
os.environ["EAI_USERNAME"] = 'YOUR USERNAME'
os.environ["EAI_PASSWORD"] = 'YOUR PASSWORD'
from expertai.nlapi.cloud.client import ExpertAiClient
import json, os
client = ExpertAiClient()
filesTexts=[]
for fileName in os.listdir("documents_it"):
with open('documents_it/' + fileName) as file:
filesTexts.append({'text':file.read(), 'fileName':fileName})
filesResults=[]
for fileText in filesTexts:
filesResults.append({
'fileName': fileText['fileName'],
'results': client.detection(body={"document": {"text": fileText['text']}}, params={'language': 'it','detector':'pii'})
})
import pandas as pandas
import json
from IPython.core.display import display, HTML
pandas.set_option('display.max_rows', None)
mapColoredCell = set()
def coloredCell(s):
key = '-'.join(s.name[0:3])
if(key not in mapColoredCell):
mapColoredCell.add(key)
return ['border-top: 1px solid !important']
return['']
dataToShow = []
for fileResults in filesResults:
mapInstances = {}
fieldName=""
for extraction in fileResults['results'].extractions:
if extraction.template in mapInstances:
mapInstances[extraction.template] += 1
else:
mapInstances[extraction.template] = 1
dateCount=0;
for field in extraction.fields:
fieldName = field.name
if field.name == "dateTime":
dateCount+=1
fieldName+=" #" + str(dateCount)
row = {
"file": fileResults['fileName'],
"template": extraction.template,
"instance": '#' + str(mapInstances[extraction.template]),
'field': fieldName,
'value': field.value
}
dataToShow.append(row)
dataFrame = pandas.DataFrame(dataToShow)
dataFrame.set_index(['file', 'template', 'instance', 'field'], inplace=True)
leftAlignedDataFrame = dataFrame.style.set_properties(**{'text-align': 'left', 'padding-left': '30px'})
leftAlignedDataFrame.apply(coloredCell,axis=1)
display(leftAlignedDataFrame)
for fileResults in filesResults:
print("************************")
print (fileResults['fileName']+": ")
print(json.dumps(fileResults['results'].extra_data, indent=2, sort_keys=True))
print("************************")
| 0.219003 | 0.973869 |
# Training and Deploying a PyTorch model with Amazon SageMaker Local Mode
<img align="left" width="130" src="https://raw.githubusercontent.com/PacktPublishing/Amazon-SageMaker-Cookbook/master/Extra/cover-small-padded.png"/>
This notebook contains the code to help readers work through one of the recipes of the book [Machine Learning with Amazon SageMaker Cookbook: 80 proven recipes for data scientists and developers to perform ML experiments and deployments](https://www.amazon.com/Machine-Learning-Amazon-SageMaker-Cookbook/dp/1800567030)
### How to do it...
```
!pip install 'sagemaker[local]' --upgrade
!sudo service docker restart
!docker rmi -f $(docker images -a -q)
s3_bucket = '<insert s3 bucket name here>'
prefix = 'chapter03'
train_s3 = \
f"s3://{s3_bucket}/{prefix}/synthetic/training_data.csv"
from sagemaker.inputs import TrainingInput
train_input = TrainingInput(train_s3, content_type="text/csv")
import os
import sagemaker
from sagemaker import get_execution_role
from sagemaker.local import LocalSession
sagemaker_session = LocalSession()
sagemaker_session.config = {'local': {'local_code': True}}
role = get_execution_role()
from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point='pytorch_training.py',
session=sagemaker_session,
role=role,
instance_count=1,
instance_type='local',
framework_version='1.5.0',
py_version='py3')
estimator.fit({'train': train_input})
from sagemaker.pytorch.model import PyTorchModel
pytorch_model = PyTorchModel(model_data=estimator.model_data,
role=role,
entry_point='pytorch_inference.py',
framework_version='1.5.0',
py_version="py3")
predictor = pytorch_model.deploy(instance_type='local',
initial_instance_count=1)
import numpy as np
predictor.predict(np.array([[100], [200]], dtype=np.float32))
!mkdir -p tmp
all_s3 = f"s3://{s3_bucket}/{prefix}/synthetic/all_data.csv"
!aws s3 cp {all_s3} tmp/all_data.csv
import pandas as pd
all_data = pd.read_csv("tmp/all_data.csv", header=None)
x = all_data[[1]].values
y = all_data[[0]].values
from numpy import arange
line_x = arange(-5000, 5000, 10)
line_x
input_data = np.array(line_x.reshape(-1, 1), dtype=np.float32)
result = predictor.predict(input_data)
result
line_y = result
from matplotlib import pyplot
pyplot.plot(line_x, line_y, 'r')
pyplot.scatter(x,y,s=1)
pyplot.show()
predictor.delete_endpoint()
```
|
github_jupyter
|
!pip install 'sagemaker[local]' --upgrade
!sudo service docker restart
!docker rmi -f $(docker images -a -q)
s3_bucket = '<insert s3 bucket name here>'
prefix = 'chapter03'
train_s3 = \
f"s3://{s3_bucket}/{prefix}/synthetic/training_data.csv"
from sagemaker.inputs import TrainingInput
train_input = TrainingInput(train_s3, content_type="text/csv")
import os
import sagemaker
from sagemaker import get_execution_role
from sagemaker.local import LocalSession
sagemaker_session = LocalSession()
sagemaker_session.config = {'local': {'local_code': True}}
role = get_execution_role()
from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point='pytorch_training.py',
session=sagemaker_session,
role=role,
instance_count=1,
instance_type='local',
framework_version='1.5.0',
py_version='py3')
estimator.fit({'train': train_input})
from sagemaker.pytorch.model import PyTorchModel
pytorch_model = PyTorchModel(model_data=estimator.model_data,
role=role,
entry_point='pytorch_inference.py',
framework_version='1.5.0',
py_version="py3")
predictor = pytorch_model.deploy(instance_type='local',
initial_instance_count=1)
import numpy as np
predictor.predict(np.array([[100], [200]], dtype=np.float32))
!mkdir -p tmp
all_s3 = f"s3://{s3_bucket}/{prefix}/synthetic/all_data.csv"
!aws s3 cp {all_s3} tmp/all_data.csv
import pandas as pd
all_data = pd.read_csv("tmp/all_data.csv", header=None)
x = all_data[[1]].values
y = all_data[[0]].values
from numpy import arange
line_x = arange(-5000, 5000, 10)
line_x
input_data = np.array(line_x.reshape(-1, 1), dtype=np.float32)
result = predictor.predict(input_data)
result
line_y = result
from matplotlib import pyplot
pyplot.plot(line_x, line_y, 'r')
pyplot.scatter(x,y,s=1)
pyplot.show()
predictor.delete_endpoint()
| 0.283087 | 0.851768 |
## Imports
Import things needed for Tensorflow and CoreML
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __builtin__ import any as b_any
import math
import os
os.environ["CUDA_VISIBLE_DEVICES"]=""
import numpy as np
from PIL import Image
import tensorflow as tf
import configuration
import inference_wrapper
import sys
sys.path.insert(0, 'im2txt/inference_utils')
sys.path.insert(0, 'im2txt/ops')
import caption_generator
import image_processing
import vocabulary
import urllib, os, sys, zipfile
from os.path import dirname
from tensorflow.core.framework import graph_pb2
from tensorflow.python.tools.freeze_graph import freeze_graph
from tensorflow.python.tools import strip_unused_lib
from tensorflow.python.framework import dtypes
from tensorflow.python.platform import gfile
import tfcoreml
import configuration
from coremltools.proto import NeuralNetwork_pb2
# Turn on debugging on error
%pdb on
```
## Create the models
Create the Tensorflow model and strip all unused nodes
```
checkpoint_file = './trainlogIncNEW/model.ckpt'
pre_frozen_model_file = './frozen_model_textgenNEW.pb'
frozen_model_file = './frozen_model_textgenNEW.pb'
# Which nodes we want to input for the network
# Use ['image_feed'] for just Memeception
input_node_names = ['seq_embeddings','lstm/state_feed']
# Which nodes we want to output from the network
# Use ['lstm/initial_state'] for just Memeception
output_node_names = ['softmax_T','lstm/state']
# Set the depth of the beam search
beam_size = 2
# Build the inference graph.
g = tf.Graph()
with g.as_default():
model = inference_wrapper.InferenceWrapper()
restore_fn = model.build_graph_from_config(configuration.ModelConfig(),
checkpoint_file)
g.finalize()
# Write the graph
tf_model_path = './log/pre_graph_textgenNEW.pb'
tf.train.write_graph(
g,
'./log',
'pre_graph_textgenNEW.pb',
as_text=False,
)
with open(tf_model_path, 'rb') as f:
serialized = f.read()
tf.reset_default_graph()
original_gdef = tf.GraphDef()
original_gdef.ParseFromString(serialized)
# Strip unused graph elements and serialize the output to file
gdef = strip_unused_lib.strip_unused(
input_graph_def = original_gdef,
input_node_names = input_node_names,
output_node_names = output_node_names,
placeholder_type_enum = dtypes.float32.as_datatype_enum)
# Save it to an output file
with gfile.GFile(pre_frozen_model_file, 'wb') as f:
f.write(gdef.SerializeToString())
# Freeze the graph with checkpoint data inside
freeze_graph(input_graph=pre_frozen_model_file,
input_saver='',
input_binary=True,
input_checkpoint=checkpoint_file,
output_node_names=','.join(output_node_names),
restore_op_name='save/restore_all',
filename_tensor_name='save/Const:0',
output_graph=frozen_model_file,
clear_devices=True,
initializer_nodes='')
```
## Verify the model
Check that it is producing legit captions for *One does not simply*
```
# Configure the model and load the vocab
config = configuration.ModelConfig()
vocab_file ='vocab4.txt'
vocab = vocabulary.Vocabulary(vocab_file)
# Generate captions on a hard-coded image
with tf.Session(graph=g) as sess:
restore_fn(sess)
generator = caption_generator.CaptionGenerator(
model, vocab, beam_size=beam_size)
for i,filename in enumerate(['memes/advice-god.jpg']):
with tf.gfile.GFile(filename, "rb") as f:
image = Image.open(f)
image = ((np.array(image.resize((299,299)))/255.0)-0.5)*2.0
for k in range(50):
captions = generator.beam_search(sess, image)
for i, caption in enumerate(captions):
sentence = [vocab.id_to_word(w) for w in caption.sentence[1:-1]]
sentence = " ".join(sentence)
print(sentence)
```
## Convert the model to CoreML
Specify output variables from the graph to be used
```
# Define basic shapes
# If using Memeception, add 'image_feed:0': [299, 299, 3]
input_tensor_shapes = {
'seq_embeddings:0': [1, beam_size, 300],
'lstm/state_feed:0': [1, beam_size, 1024],
}
coreml_model_file = './Textgen_NEW.mlmodel'
output_tensor_names = [node + ':0' for node in output_node_names]
coreml_model = tfcoreml.convert(
tf_model_path=frozen_model_file,
mlmodel_path=coreml_model_file,
input_name_shape_dict=input_tensor_shapes,
output_feature_names=output_tensor_names,
add_custom_layers=True,
)
```
## Test the model
Run the same randomly generated inputs through both models and see where the disparities are
```
seq_rand = np.random.rand(300)
seq_embeddings_tf = np.array([[seq_rand, seq_rand]])
seq_embeddings_ml = np.array([[[sr, sr]] for sr in seq_rand])
state_rand = np.random.rand(1024)
state_feed_tf = np.array([[state_rand, state_rand]])
state_feed_ml = np.array([[[sr, sr]] for sr in state_rand])
coreml_inputs = {
'seq_embeddings__0': seq_embeddings_ml,
'lstm__state_feed__0': state_feed_ml,
}
coreml_output = coreml_model.predict(coreml_inputs, useCPUOnly=True)
# print(coreml_output['lstm__state__0'].shape)
# print(coreml_output['softmax__0'].shape)
# print(coreml_output['softmax__0'].reshape(38521, 1, 2))
# print(coreml_output)
def print_ml(ml):
for key in sorted(ml.keys()):
print(key)
print(ml[key].shape)
print(ml[key])
print_ml(coreml_output)
with tf.Session(graph=g) as sess:
# Load the model from checkpoint.
restore_fn(sess)
input_names = ['lstm/state:0', 'softmax:0']
output_values = sess.run(
fetches=input_names,
feed_dict={
#"input_feed:0": input_feed,
"lstm/state_feed:0": state_feed_tf,
"seq_embeddings:0": seq_embeddings_tf,
#"seq_embedding/embedding_map:0": self.embedding_map
})
for (index, value) in sorted(enumerate(input_names), key=lambda x: x[1]):
print(value)
print(output_values[index].shape)
print(output_values[index])
np.matmul(np.random.rand(1, 20), np.random.rand(20, 45)).shape
np.random.rand(1, 2, 812)[0,:].shape
```
### The MLDA sampler
This notebook is a good starting point to understand the basic usage of the Multi-Level Delayed Acceptance MCMC algorithm (MLDA) proposed in [1], as implemented within PyMC3.
It uses a simple linear regression model (and a toy coarse model counterpart) to show the basic workflow when using MLDA. The model is similar to the one used in https://docs.pymc.io/notebooks/GLM-linear.html.
The MLDA sampler is designed to deal with computationally intensive problems where we have access not only to the desired (fine) posterior distribution but also to a set of approximate (coarse) posteriors of decreasing accuracy and decreasing computational cost (we need at least one of those). Its main idea is that coarser chains' samples are used as proposals for the finer chains. This has been shown to improve the effective sample size of the finest chain and this allows us to reduce the number of expensive fine-chain likelihood evaluations.
The PyMC3 implementation supports any number of levels, tuning parameterization for the bottom-level sampler, separate subsampling rates for each level, and a choice between blocked and compound sampling for the bottom-level sampler. More features, such as support for two types of bottom-level samplers (Metropolis, DEMetropolisZ), adaptive error correction and variance reduction, are currently under development.
For more details about the MLDA sampler and the way it should be used and parameterised, the user can refer to the docstrings in the code and to the other example notebooks which deal with more complex problem settings and more advanced MLDA features.
Please note that the MLDA sampler is new in PyMC3. The user should be extra critical about the results and report any problems as issues in PyMC3's GitHub repository.
[1] Dodwell, Tim & Ketelsen, Chris & Scheichl, Robert & Teckentrup, Aretha. (2019). Multilevel Markov Chain Monte Carlo. SIAM Review. 61. 509-545. https://doi.org/10.1137/19M126966X
### Work flow
MLDA is used in a similar way to most step methods in PyMC3. It has the special requirement that the user needs to provide at least one coarse model to allow it to work.
The basic flow to use MLDA consists of four steps, which we demonstrate here using a simple linear regression model with a toy coarse model counterpart.
##### Step 1: Generate some data
Here, we generate a vector `x` of 200 points equally spaced between 0.0 and 1.0. Then we project those onto a straight line with intercept 1.0 and slope 2.0, adding some random noise, resulting in a vector `y`. The goal is to infer the intercept and slope from `x` and `y`, i.e. a very simple linear regression problem.
```
# Import libraries
import time as time
import arviz as az
import numpy as np
import pymc3 as pm
az.style.use("arviz-darkgrid")
# Generate data
RANDOM_SEED = 915623497
np.random.seed(RANDOM_SEED)
true_intercept = 1
true_slope = 2
sigma = 1
size = 200
x = np.linspace(0, 1, size)
y = true_intercept + true_slope * x + np.random.normal(0, sigma ** 2, size)
```
##### Step 2: Define the fine model
In this step we use the PyMC3 model definition language to define the priors and the likelihood. We choose non-informative Normal priors for both intercept and slope and a Normal likelihood, where we feed in `x` and `y`.
```
# Constructing the fine model
with pm.Model() as fine_model:
# Define priors
intercept = pm.Normal("intercept", 0, sigma=20)
slope = pm.Normal("slope", 0, sigma=20)
# Define likelihood
likelihood = pm.Normal("y", mu=intercept + slope * x, sigma=sigma, observed=y)
```
##### Step 3: Define a coarse model
Here, we define a toy coarse model where coarseness is introduced by using fewer data in the likelihood compared to the fine model, i.e. we only use every 2nd data point from the original data set.
```
# Thinning the data set
x_coarse = x[::2]
y_coarse = y[::2]
# Constructing the coarse model
with pm.Model() as coarse_model:
# Define priors
intercept = pm.Normal("intercept", 0, sigma=20)
slope = pm.Normal("slope", 0, sigma=20)
# Define likelihood
likelihood = pm.Normal(
"y", mu=intercept + slope * x_coarse, sigma=sigma, observed=y_coarse
)
```
##### Step 4: Draw MCMC samples from the posterior using MLDA
We feed `coarse_model` to the MLDA instance and we also set `subsampling_rate` to 10. The subsampling rate is the number of samples drawn in the coarse chain to construct a proposal for the fine chain. In this case, MLDA draws 10 samples in the coarse chain and uses the last one as a proposal for the fine chain. This is accepted or rejected by the fine chain and then control goes back to the coarse chain which generates another 10 samples, etc. Note that `pm.MLDA` has many other tuning arguments which can be found in the documentation.
Next, we use the universal `pm.sample` method, passing the MLDA instance to it. This runs MLDA and returns a `trace`, containing all MCMC samples and various by-products. Here, we also run a standard Metropolis sampler for comparison which returns a separate trace. We time the runs to compare later.
Finally, PyMC3 provides various functions to visualise the trace and print summary statistics (two of them are shown below).
```
with fine_model:
# Initialise step methods
step = pm.MLDA(coarse_models=[coarse_model], subsampling_rates=[10])
step_2 = pm.Metropolis()
# Sample using MLDA
t_start = time.time()
trace = pm.sample(
draws=6000, chains=4, tune=2000, step=step, random_seed=RANDOM_SEED
)
runtime = time.time() - t_start
# Sample using Metropolis
t_start = time.time()
trace_2 = pm.sample(
draws=6000, chains=4, tune=2000, step=step_2, random_seed=RANDOM_SEED
)
runtime_2 = time.time() - t_start
# Trace plots
pm.plots.traceplot(trace)
pm.plots.traceplot(trace_2)
# Summary statistics for MLDA
pm.stats.summary(trace)
# Summary statistics for Metropolis
pm.stats.summary(trace_2)
# Make sure samplers have converged
assert all(az.rhat(trace) < 1.03)
assert all(az.rhat(trace_2) < 1.03)
# Display runtimes
print(f"Runtimes: MLDA: {runtime}, Metropolis: {runtime_2}")
```
##### Comments
**Performance:**
You can see from the summary statistics above that MLDA's ESS is ~8x higher than Metropolis', while its runtime is ~6x larger. Therefore in this toy example MLDA is almost overkill. For more complex problems, where the difference in computational cost between the coarse and fine models/likelihoods is orders of magnitude, MLDA is expected to outperform Metropolis, as long as the coarse model is reasonably close to the fine one. This case is often encountered in inverse problems in engineering, ecology, imaging, etc., where a forward model can be defined with varying coarseness in space and/or time (e.g. subsurface water flow, predator-prey models, etc.).
**Subsampling rate:**
The MLDA sampler is based on the assumption that the coarse proposal samples (i.e. the samples proposed from the coarse chain to the fine one) are independent from each other. In order to generate independent samples, it is necessary to run the coarse chain for at least an adequate number of iterations to get rid of autocorrelation. Therefore, the higher the autocorrelation in the coarse chain, the more iterations are needed and the larger the subsampling rate should be.
Values larger than the minimum needed to beat autocorrelation can further improve the proposal (as the distribution is explored better and the proposals are improved), and thus the ESS. But at the same time more steps cost more computationally. Users are encouraged to do test runs with different subsampling rates to understand which gives the best ESS/sec.
Note that in cases where you have more than one coarse model/level, MLDA allows you to choose a different subsampling rate for each coarse level (as a list of integers when you instantiate the stepper).
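As a minimal sketch (assuming two additional coarse models, here called `coarse_model_0` and `coarse_model_1`, have been built in the same way as `coarse_model` above; these names are invented for the example), per-level subsampling rates could be passed like this:
```
with fine_model:
    # Hypothetical three-level setup: two coarse levels plus the fine model.
    # Each entry of subsampling_rates corresponds to one coarse level
    # (see the MLDA docstring for the expected ordering of the coarse models).
    step_multilevel = pm.MLDA(
        coarse_models=[coarse_model_0, coarse_model_1],
        subsampling_rates=[20, 5],
    )
    trace_multilevel = pm.sample(
        draws=2000, chains=2, tune=1000, step=step_multilevel, random_seed=RANDOM_SEED
    )
```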
```
# Show packages' and Python's versions
%load_ext watermark
%watermark -n -u -v -iv -w
```
# Instability Detection and Characterization
This tutorial shows how to implement instability ("drift") detection and characterization on time-stamped data. This data can be from *any* quantum circuits, on *any* number of qubits, but we require around 100+ time-stamps per circuit (perhaps fewer if there are multiple measurement outcomes per time-stamp). If you only have data that is binned into a few different time periods then consider instead using the `DataComparator` object demonstrated in the [DataSetComparison](../algorithms/DatasetComparison.ipynb) tutorial.
Currently the gap between data collection times for each circuit is required to be approximately constant, both across the data collection times for each circuit and across circuits. If this is not the case the code should still work, but the analysis it performs may be significantly sub-optimal, and interpreting the results is more complicated. There are beta-level capabilities within the functions used below to properly analyze unequally-spaced data, but they are untested and will not be used with the default options in the analysis code. This limitation will be addressed in a future release of pyGSTi.
This notebook is an introduction to these tools, and it will be augmented with further notebooks at a later date.
```
# Importing the drift module is essential
from pygsti.extras import drift
# Importing all of pyGSTi is optional, but often useful.
import pygsti
```
## Quick and Easy Analysis
First we import some *time-stamped* data. For more information on the mechanics of using time-stamped `DataSets` see the [TimestampedDataSets](../objects/advanced/TimestampedDataSets.ipynb) tutorial. The data we are importing is from long-sequence GST on $G_i$, $G_x$, and $G_y$ with time-dependent coherent errors on the gates.
We load the time-dependent data from the `timestamped_dataset.txt` file included with pyGSTi, and then build a `ProtocolData` object out of it so it can be used as input for `Protocol` objects. We can pass `None` as the experiment design when constructing `data` because the stability analysis doesn't require any special structure to the circuits - it just requires the data to have timestamps.
```
# Initialize the circuit structure details of the imported data.
# Construct a basic ExplicitModel for the experiment design
model = pygsti.construction.create_explicit_model( ['Q0'], ['Gi','Gx','Gy'], [ "I(Q0)","X(pi/2,Q0)", "Y(pi/2,Q0)"] )
# This manually specifies the germ and fiducial structure for the imported data.
fiducial_strs = ['{}','Gx','Gy','GxGx','GxGxGx','GyGyGy']
germ_strs = ['Gi','Gx','Gy','GxGy','GxGyGi','GxGiGy','GxGiGi','GyGiGi','GxGxGiGy','GxGyGyGi','GxGxGyGxGyGy']
log2maxL = 9 # log2 of the maximum germ power
# Below we use the maxlength, germ and fiducial lists to create the GST structures needed for box plots.
fiducials = [pygsti.objects.Circuit(fs) for fs in fiducial_strs]
germs = [pygsti.objects.Circuit(g) for g in germ_strs]
max_lengths = [2**i for i in range(0,log2maxL)]
exp_design = pygsti.protocols.StandardGSTDesign(model, fiducials, fiducials, germs, max_lengths)
ds = pygsti.io.load_dataset("../tutorial_files/timestamped_dataset.txt") # a DataSet
data = pygsti.protocols.ProtocolData(exp_design, ds)
```
We then simply create a `StabilityAnalysis` protocol, and run it on the data.
```
protocol = pygsti.protocols.StabilityAnalysis()
results = protocol.run(data)
```
**Note** that the `StabilityAnalysis` protocol has a variety of optional arguments that can be used to adapt the tool to different circumstances. In this tutorial we won't discuss the full range of analyses that can be performed using the `drift` module.
## Inspecting the results
Everything has been calculated, and we can now look at the results. If we print the returned results object (currently, essentially a container `StabilityAnalyzer` object), it will tell us whether instability was detected. If no instability is detected, then there is little else to do: the circuits are, as far as we can tell, stable, and most of the other results contained in a `StabilityAnalyzer` will not be very interesting. However, here instability is detected:
```
print(results.stabilityanalyzer)
# Create a workspace to show plots
w = pygsti.report.Workspace()
w.init_notebook_mode(connected=False, autodisplay=True)
```
### 1. Instability Detection Results : Power Spectra and the Frequencies of Instabilities
The analysis is based on generating power spectra from the data. If the data is sampled from a time-invariant probability distribution, the powers in these spectra have an expected value of 1 and a known distribution (it is $\frac{1}{k}\chi^2_k$, with $k$ depending on exactly which spectrum we're looking at). So we can test for violations of this by looking for powers that are too high to be consistent with the time-invariant probability distribution hypothesis.
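To make these statistics concrete, here is a rough, self-contained NumPy/SciPy sketch of this kind of test on one simulated circuit's 0/1 outcome record. It only illustrates the idea described above; it is not pyGSTi's internal implementation, and all of the names and numbers in it are invented for the example.
```
import numpy as np
from scipy.fft import dct
from scipy.stats import chi2

np.random.seed(0)

# Simulated 0/1 outcomes of a single circuit at T equally-spaced times,
# with a slow sinusoidal drift in the outcome probability.
T = 500
t = np.arange(T)
p_t = 0.5 + 0.2 * np.sin(2 * np.pi * t / 250)
outcomes = (np.random.rand(T) < p_t).astype(float)

# Normalized power spectrum: if the underlying probability were constant,
# each power would be approximately chi^2_1 distributed, with expected value 1.
p_hat = outcomes.mean()
powers = dct(outcomes - p_hat, norm='ortho') ** 2 / (p_hat * (1 - p_hat))

# Flag frequencies whose power is too large to be consistent with a constant
# probability, correcting for the fact that we test T frequencies at once.
threshold = chi2.isf(0.05 / T, df=1)
print("frequency indices above the threshold:", np.nonzero(powers > threshold)[0])
```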
A power spectrum is obtained for each circuit, and these can be averaged to obtain a single "global" power spectrum. This is plotted below:
```
w.PowerSpectraPlot(results)
```
Frequencies with power above the threshold in a spectrum are almost certainly components in the underlying (perhaps) time-varying probability for the corresponding circuit. We can access the frequencies above the threshold in the global power spectrum (which can't be assigned to a particular circuit, but *are* components in the probabilities for one or more circuits) as follows:
```
print(results.get_instability_frequencies())
```
To get the power spectrum, and detected significant frequencies, for a particular circuit we add an optional argument to the above functions:
```
spectrumlabel = {'circuit':pygsti.obj.Circuit('Gx(Gi)^128')}
print("significant frequencies: ", results.get_instability_frequencies(spectrumlabel))
w.PowerSpectraPlot(results, spectrumlabel)
```
Note that all frequencies are in 1/units, where "units" are the units of the time stamps in the `DataSet`. Some of the plotting functions display frequencies as Hertz, which is based on the assumption that these time stamps are in seconds. (In the future we will allow the user to specify the units of the time stamps.)
We can access a dictionary of all the circuits that we have detected as being unstable, with the values the detected frequencies of the instabilities.
```
unstablecircuits = results.get_unstable_circuits()
# We only display the first 10 circuits and frequencies, as there are a lot of them!
for ind, (circuit, freqs) in enumerate(unstablecircuits.items()):
if ind < 10: print(circuit.str, freqs)
```
We can jointly plot the power spectra for any set of circuits, by handing a dictionary of circuits to the `PowerSpectraPlot` function.
```
circuits = {L: pygsti.obj.Circuit(None,stringrep='Gx(Gi)^'+str(L)+'Gx') for L in [1,2,4,16,64,128,256]}
w.PowerSpectraPlot(results, {'circuit':circuits}, showlegend=True)
```
### 2. Instability Characterization Results : Probability Trajectories
The tools also estimate the probability trajectories for each circuit, i.e., the probabilities to obtain each possible circuit outcome as a function of time. We can plot the estimated probability trajectory for any circuit of interest (or a selection of circuits, if we hand the plotting function a dictionary or list of circuits instead of a single circuit).
```
circuit = pygsti.obj.Circuit(None, stringrep= 'Gx(Gi)^256GxGxGx')
w.ProbTrajectoriesPlot(results.stabilityanalyzer, circuit, ('1',))
```
If you simply want to access the time-varying distribution, use the `get_probability_trajectory()` method.
The size of the instability in a circuit can be summarized by the amplitudes in front of the non-constant basis functions in our estimate of the probability trajectories. By summing these all up (and dividing by 2), we can get an upper bound on the maximum TVD between the instantaneous probability distribution (over circuit outcomes) and the mean of this time-varying probability distribution, with this maximization over all times.
```
results.get_max_tvd_bound(circuit)
```
If you want to access this quantity for all unstable circuits, you can set `getmaxtvd = True` in the `get_unstable_circuits()` method. We can also access its maximum over all the circuits as
```
results.get_maxmax_tvd_bound()
```
### 3. Further plotting for data from structured circuits (e.g., GST circuits)
If the data is from GST experiments, or anything with a GST-like structure of germs and fiducials (such as Ramsey circuits), we can create some extra plots and a drift report.
We can plot all of the power spectra and probability trajectories for any (preparation fiducial, germ, measurement fiducial) triple. This shows how any instability changes, for this triple, as the germ power is increased.
```
w.GermFiducialPowerSpectraPlot(results, 'Gy', 'Gi', 'Gx', showlegend=True)
w.GermFiducialProbTrajectoriesPlot(results, 'Gy', 'Gi', 'Gx', ('0',), showlegend=True)
```
We can make a boxplot that shows $\lambda = -\log_{10}(p)$ for each circuit, where $p$ is the p-value of the maximum power in the spectrum for that circuit. This statistic is a good summary for the evidence for instability in the circuit. Note that, for technical reasons, $\lambda$ is truncated above at 16.
```
circuits256 = exp_design.circuit_lists[-1] # Pull out circuits up to max L (256)
w.ColorBoxPlot('driftdetector', circuits256, None, None, stabilityanalyzer=results.stabilityanalyzer)
```
The $\lambda = -\log_{10}(p)$ values do not *directly* tell us anything about the size of any detected instabilities. The $\lambda$ statistic summarizes how certain we are that there is some instability, but if enough data is taken then even tiny instabilities will become obvious (and so we would have $\lambda \to \infty$, except that the code truncates it to 16).
The boxplot below summarizes the **size** of any detected instability with the bound on the maximal instantaneous TVD for each circuit, introduced above. Here this is zero for most circuits, as we did not detect any instability in those circuits - so our estimate for the probability trajectory is constant. The gradients of the colored boxes in this plot fairly closely mirror those in the plot above, but this is not always guaranteed to happen (e.g., if there is a lot more data for some of the circuits than others).
```
# Create a boxplot of the maximum power in the power spectra for each sequence.
w.ColorBoxPlot('driftsize', circuits256, None, None, stabilityanalyzer=results.stabilityanalyzer)
```
We can also create a report that contains all of these plots, as well as a few other things. But note that creating this report is currently fairly slow. Moreover, all the plots it contains have been demonstrated above, and everything else it contains can be accessed directly from the `StabilityAnalyzer` object. To explore all the things that are recorded in the `StabilityAnalyzer` object, take a look at its `get` methods.
```
report = pygsti.report.create_drift_report(results, title='Example Drift Report')
report.write_html('../tutorial_files/DriftReport')
```
You can now open the file [../tutorial_files/DriftReport/main.html](../tutorial_files/DriftReport/main.html) in your browser (Firefox works best) to view the report.
|
github_jupyter
|
# Importing the drift module is essential
from pygsti.extras import drift
# Importing all of pyGSTi is optional, but often useful.
import pygsti
# Initialize the circuit structure details of the imported data.
# Construct a basic ExplicitModel for the experiment design
model = pygsti.construction.create_explicit_model( ['Q0'], ['Gi','Gx','Gy'], [ "I(Q0)","X(pi/2,Q0)", "Y(pi/2,Q0)"] )
# This manually specifies the germ and fiducial structure for the imported data.
fiducial_strs = ['{}','Gx','Gy','GxGx','GxGxGx','GyGyGy']
germ_strs = ['Gi','Gx','Gy','GxGy','GxGyGi','GxGiGy','GxGiGi','GyGiGi','GxGxGiGy','GxGyGyGi','GxGxGyGxGyGy']
log2maxL = 9 # log2 of the maximum germ power
# Below we use the maxlength, germ and fiducial lists to create the GST structures needed for box plots.
fiducials = [pygsti.objects.Circuit(fs) for fs in fiducial_strs]
germs = [pygsti.objects.Circuit(g) for g in germ_strs]
max_lengths = [2**i for i in range(0,log2maxL)]
exp_design = pygsti.protocols.StandardGSTDesign(model, fiducials, fiducials, germs, max_lengths)
ds = pygsti.io.load_dataset("../tutorial_files/timestamped_dataset.txt") # a DataSet
data = pygsti.protocols.ProtocolData(exp_design, ds)
protocol = pygsti.protocols.StabilityAnalysis()
results = protocol.run(data)
print(results.stabilityanalyzer)
# Create a workspace to show plots
w = pygsti.report.Workspace()
w.init_notebook_mode(connected=False, autodisplay=True)
w.PowerSpectraPlot(results)
print(results.get_instability_frequencies())
spectrumlabel = {'circuit':pygsti.obj.Circuit('Gx(Gi)^128')}
print("significant frequencies: ", results.get_instability_frequencies(spectrumlabel))
w.PowerSpectraPlot(results, spectrumlabel)
unstablecircuits = results.get_unstable_circuits()
# We only display the first 10 circuits and frequencies, as there are a lot of them!
for ind, (circuit, freqs) in enumerate(unstablecircuits.items()):
if ind < 10: print(circuit.str, freqs)
circuits = {L: pygsti.obj.Circuit(None,stringrep='Gx(Gi)^'+str(L)+'Gx') for L in [1,2,4,16,64,128,256]}
w.PowerSpectraPlot(results, {'circuit':circuits}, showlegend=True)
circuit = pygsti.obj.Circuit(None, stringrep= 'Gx(Gi)^256GxGxGx')
w.ProbTrajectoriesPlot(results.stabilityanalyzer, circuit, ('1',))
results.get_max_tvd_bound(circuit)
results.get_maxmax_tvd_bound()
w.GermFiducialPowerSpectraPlot(results, 'Gy', 'Gi', 'Gx', showlegend=True)
w.GermFiducialProbTrajectoriesPlot(results, 'Gy', 'Gi', 'Gx', ('0',), showlegend=True)
circuits256 = exp_design.circuit_lists[-1] # Pull out circuits up to max L (256)
w.ColorBoxPlot('driftdetector', circuits256, None, None, stabilityanalyzer=results.stabilityanalyzer)
# Create a boxplot of the maximum power in the power spectra for each sequence.
w.ColorBoxPlot('driftsize', circuits256, None, None, stabilityanalyzer=results.stabilityanalyzer)
report = pygsti.report.create_drift_report(results, title='Example Drift Report')
report.write_html('../tutorial_files/DriftReport')
| 0.725649 | 0.993686 |
```
import re
with open('annot.opcorpora.xml', 'r') as f:
raw_data = f.read()
split = raw_data.split("source")
split_len = len(split)
text = " ".join([x[1:-2] for x in split[1::2]])
text[:1000]
```
The number of characters in the texts:
```
len(text)
```
The number of words in the texts:
```
len(text.split())
low = text.lower()
```
A sample of the text:
```
low[:1000]
def is_alpha_or_space(x):
return str.isalpha(x) or (x == ' ')
low_filtered = "".join(filter(is_alpha_or_space, low))
del low
del raw_data
words = low_filtered.split()
for i in range(len(words)):
if words[i] in ["чёрный", "чёрная", "чёрное","черный", "черная", "черное" ]:
print(" ".join(words[i-2:i+3]))
for i in range(len(words)):
if words[i] in ["бел" + x for x in ["ый", "ая", "ое"] ]:
print(" ".join(words[i-2:i+3]))
for i in range(len(words)):
if words[i] in ["сер" + x for x in ["ый", "ая", "ое"] ]:
print(" ".join(words[i-2:i+3]))
```
How many times the colors occur in the texts
```
from collections import defaultdict
d = defaultdict(int)
for w in words:
if w in ["сер" + x for x in ["ый", "ая", "ое"] ] + ["бел" + x for x in ["ый", "ая", "ое"] ] + ["чёрный", "чёрная", "чёрное","черный", "черная", "черное" ]:
d[w]+= 1
print(d)
gray = ["сер" + x for x in ["ый", "ая", "ое"] ]
white = ["бел" + x for x in ["ый", "ая", "ое"] ]
black = ["чёрный", "чёрная", "чёрное","черный", "черная", "черное" ]
```
How many times grey, black, and white occur, in any grammatical gender:
```
print(sum([d[x] for x in gray]))
print(sum([d[x] for x in black]))
print(sum([d[x] for x in white]))
d_gray = [defaultdict(int),defaultdict(int),defaultdict(int),defaultdict(int)]
d_black = [defaultdict(int),defaultdict(int),defaultdict(int),defaultdict(int)]
d_white = [defaultdict(int),defaultdict(int),defaultdict(int),defaultdict(int)]
for i in range(len(words)):
if words[i] in gray:
d_gray[0][words[i-2]] += 1
d_gray[1][words[i-1]] += 1
d_gray[2][words[i+1]] += 1
d_gray[3][words[i+2]] += 1
if words[i] in black:
d_black[0][words[i-2]] += 1
d_black[1][words[i-1]] += 1
d_black[2][words[i+1]] += 1
d_black[3][words[i+2]] += 1
if words[i] in white:
d_white[0][words[i-2]] += 1
d_white[1][words[i-1]] += 1
d_white[2][words[i+1]] += 1
d_white[3][words[i+2]] += 1
import operator
def d_sort(x):
sorted_x = sorted(x.items(), key=operator.itemgetter(1))
return sorted_x
```
How many times each word occurs right after "чёрн -ый -ая -ое" ("black" in its different gender forms)
```
d_sort(d_black[2])[::-1]
```
White ("белый")
```
d_sort(d_white[2])[::-1]
```
Grey ("серый"). The count for "тучка" is really 1; one text simply ended up in the collection 3 times.
```
d_sort(d_gray[2])[::-1]
```
Words within a distance of at most two:
```
d_gray_4 = defaultdict(int)
for d in d_gray:
for key in d:
d_gray_4[key]+=d[key]
d_sort(d_gray_4)[::-1]
d_black_4 = defaultdict(int)
for d in d_black:
for key in d:
d_black_4[key]+=d[key]
d_sort(d_black_4)[::-1]
d_white_4 = defaultdict(int)
for d in d_white:
for key in d:
d_white_4[key]+=d[key]
d_sort(d_white_4)[::-1]
```
# Lecture 19: Data Exploration
CSCI 1360E: Foundations for Informatics and Analytics
## Overview and Objectives
We've previously covered the basics of exploring data. In this lecture, we'll go into a bit more detail of some of the slightly more formal strategies of "data munging," including introducing the `pandas` DataFrame for organizing your data. By the end of this lecture, you should be able to
- Generate histograms and plots for exploring 1D and 2D data
- Rescale and normalize data to more directly compare different distributions
- Import data into pandas DataFrames and perform basic analyses
## Part 1: Exploring
As has been (hopefully) hammered at you over the past few weeks, one particularly important skill that all data scientists must have is the ability to **explore your data.**
If you recall way back in [Lecture 13 on working with text](http://nbviewer.jupyter.org/format/slides/github/eds-uga/csci1360e-su16/blob/master/lectures/L13.ipynb#/), I mentioned something about structured versus unstructured data, and how the vast majority of data out there falls in the latter category.
You can't work directly with data that is unstructured! So you have to give it structure. But in order to do that, you have to *understand* your data.
### One dimension
This is about as simple as it gets: your data consist of a list of numbers. We saw in previous lectures that you can compute statistics (mean, median, variance, etc) on these numbers. You can also visualize them using histograms. We'll reiterate that point here, using a particular example.
```
import numpy as np
np.random.seed(3908544)
# Generate two random datasets.
data1 = np.random.normal(loc = 0, scale = 58, size = 1000)
data2 = 200 * np.random.random(1000) - 100
# What are their means and variances?
print("Dataset 1 :: {:.2f} (avg) :: {:.2f} (std)".format(data1.mean(), data1.std()))
print("Dataset 2 :: {:.2f} (avg) :: {:.2f} (std)".format(data2.mean(), data2.std()))
```
Both datasets contain 1000 random numbers. Both datasets have very nearly the same mean and same standard deviation.
But the two datasets *look* very different!
```
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
plt.figure().set_figwidth(12)
plt.subplot(121)
plt.title("Dataset 1")
_ = plt.hist(data1, bins = 20, range = (-100, 100))
plt.subplot(122)
plt.title("Dataset 2")
_ = plt.hist(data2, bins = 20, range = (-100, 100))
```
Behold: the importance of viewing your data! Dataset 1 is drawn from a Gaussian (normal), while Dataset 2 is uniform.
### Two dimensions
Two (and even three) dimensions? **Scatter plots** are your friend. Consider the following fake datasets.
```
np.random.seed(8493248)
X = np.random.normal(size = 1000)
Y1 = (X + np.random.normal(size = 1000) / 2)
Y2 = (-X + np.random.normal(size = 1000) / 2)
```
If you plotted `Y1` and `Y2` using the histograms from the previous strategy, you'd get two datasets that looked pretty much identical.
```
plt.figure().set_figwidth(12)
plt.subplot(121)
plt.title("Dataset Y1")
_ = plt.hist(Y1, bins = 50, range = (-4, 4))
plt.subplot(122)
plt.title("Dataset Y2")
_ = plt.hist(Y2, bins = 50, range = (-4, 4))
```
Maybe *slightly* different shapes, but qualitatively (and statistically) identical.
But what if we visualized the data in 2D using a scatter plot?
```
plt.scatter(X, Y1, marker = ".", color = "black", label = "Dataset 1")
plt.scatter(X, Y2, marker = ".", color = "gray", label = "Dataset 2")
plt.xlabel("X")
plt.ylabel("Y")
plt.legend(loc = 0)
plt.title("Joint Distribution")
```
TOTES DIFFERENT, again!
These two datasets are *anticorrelated*. To see what this means, we can derive the correlation coefficients for the two datasets independently:
```
print(np.corrcoef(X, Y1)[0, 1])
print(np.corrcoef(X, Y2)[0, 1])
```
"Correlation" means as we change one variable (X), another variable changes by a similar amount (Y). Positive correlation means as we increase one variable, the other increases; negative correlation means as we increase one variable, the other *decreases*.
Anticorrelation, then, is the presence of both positive and negative correlation, which is what we see in this dataset: one has a correlation coefficient of 0.9 (1.0 is perfect positive correlation), while the other is -0.9 (-1.0 is perfect negative correlation).
**This is something we'd only know from either visualizing the data or examining how the data are correlated.**
### More than two dimensions
If you have 3D data, matplotlib is capable of displaying that. But beyond three dimensions, it can get tricky. A good starting point is to make a *correlation matrix*, where the $i^{th}$ row and $j^{th}$ column of the matrix is the correlation coefficient between the $i^{th}$ and $j^{th}$ dimensions of the data.
Another strategy is to create 2D scatter plots of every pairwise combination of dimensions. For every $i^{th}$ and $j^{th}$ dimension in the data, create a 2D scatter plot like we did in the last slide. This way, you can visualize each dimension relative to each other dimension and easily spot any correlations.
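As a quick sketch of the correlation matrix idea, with some made-up 4-dimensional data, NumPy will compute the whole matrix in one call:
```
import numpy as np

np.random.seed(1234)
# 500 observations of fake 4-dimensional data; make dimension 1 track dimension 0.
data4d = np.random.normal(size = (500, 4))
data4d[:, 1] += 0.8 * data4d[:, 0]

# rowvar = False means rows are observations and columns are dimensions,
# so entry (i, j) is the correlation coefficient between dimensions i and j.
corr_matrix = np.corrcoef(data4d, rowvar = False)
print(corr_matrix.round(2))
```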
The upshot here is to **find a way to visualize your data**.
## Part 2: Rescaling
Many data science analysis techniques can be sensitive to the *scale* of your data. This is where normalization or *scaling* your data can help immensely.
Let's say you're interested in grouping together your friends based on height and weight. You collect the following data points:
```
personA = np.array([63, 150]) # 63 inches, 150 pounds
personB = np.array([67, 160]) # 67 inches, 160 pounds
personC = np.array([70, 171]) # 70 inches, 171 pounds
plt.scatter(personA[0], personA[1])
plt.scatter(personB[0], personB[1])
plt.scatter(personC[0], personC[1])
```
And you compute the "distance" between each point (we'll just use standard Euclidean distance):
```
import numpy.linalg as nla
print("A to B: {:.2f}".format( nla.norm(personA - personB) ))
print("A to C: {:.2f}".format( nla.norm(personA - personC) ))
print("B to C: {:.2f}".format( nla.norm(personB - personC) ))
```
As you can see, the two closest data points are person A and person B.
But now your UK friend comes to you with the same dataset but a totally different conclusion! Turns out, this friend computed the heights of everyone in *centimeters*, rather than inches, giving the following dataset:
```
personA = np.array([160.0, 150]) # 160 cm, 150 pounds
personB = np.array([170.2, 160]) # 170.2 cm, 160 pounds
personC = np.array([177.8, 171]) # 177.8 cm, 171 pounds
plt.scatter(personA[0], personA[1])
plt.scatter(personB[0], personB[1])
plt.scatter(personC[0], personC[1])
print("A to B: {:.2f}".format( nla.norm(personA - personB) ))
print("A to C: {:.2f}".format( nla.norm(personA - personC) ))
print("B to C: {:.2f}".format( nla.norm(personB - personC) ))
```
Using this data, we arrive at the conclusion that persons B and C are most similar! Oops...?
It can be very problematic if a simple change of units completely alters the conclusions you draw from the data. One way to deal with this is through scaling--we've actually done this before in a homework assignment.
By rescaling the data, we eliminate any and all units. We remove the mean (subtract it off) and divide by the standard deviation, so if you had to include a unit, it would essentially be units of "standard deviations away from 0."
```
def rescale(data):
# First: subtract off the mean of each column.
data -= data.mean(axis = 0)
# Second: divide by the standard deviation of each column.
data /= data.std(axis = 0)
return data
np.random.seed(3248)
X = np.random.random((5, 3)) # Five rows with three dimensions.
print("=== BEFORE ===")
print("Means: {}\nStds: {}".format(X.mean(axis = 0), X.std(axis = 0)))
Xs = rescale(X)
print("=== AFTER ===")
print("Means: {}\nStds: {}".format(Xs.mean(axis = 0), Xs.std(axis = 0)))
```
Of course, like anything (everything?), there are still caveats.
- There is an implicit assumption being made when you rescale your data: that your dimensions are distributed like a Gaussian. If this is not true--or even worse, true for *some* dimensions but not others--you risk creating more problems than you solve by putting dimensions on "equal" footing that shouldn't be.
- On the other hand, rescaling your data can do wonders to mitigate or even eliminate the effects of outliers on your data. Rescaling will maintain *relative* distances (in terms of standard deviations) between dimensions, but will eliminate *absolute* differences that could just be flukes.
- As always, be careful about numerical round-off errors. You'll notice none of the means in the previous slide were *exactly* 0; this has to do with the precision of floating-point numbers and this precise behavior can vary depending on what operating system you're using.
## Part 3: DataFrames
*DataFrames* are a relatively new data structure on the data science scene. Equal parts spreadsheet, database, and array, they are capable of handling rich data formats as well as having built-in methods for dealing with the idiosyncrasies of unstructured datasets.
As Jake wrote in his book, *Python Data Science Handbook*:
> NumPy's `ndarray` data structure provides essential features for the type of clean, well-organized data typically seen in numerical computing tasks. While it serves this purpose very well, its limitations become clear when we need more flexibility (such as attaching labels to data, working with missing data, etc.) and when attempting operations which do not map well to element-wise broadcasting (such as groupings, pivots, etc.), each of which is an important piece of analyzing the less structured data available in many forms in the world around us. Pandas [...] builds on the NumPy array structure and provides efficient access to these sorts of "data munging" tasks that occupy most of a data scientist's time."
What exactly is a DataFrame, then?
Well, it's a collection of Series! `</unhelpful>`
```
import pandas as pd # "pd" is the import convention, like "np" is for NumPy
data = pd.Series([0.25, 0.5, 0.75, 1])
print(data)
```
Think of a `Series` as a super-fancy 1D NumPy array. It's so fancy, in fact, that you can give a `Series` completely custom indices, sort of like a dictionary.
```
data = pd.Series({2:'a', 1:'b', 3:'c'})
print(data)
```
If a `Series` is essentially a fancy 1D NumPy array, then a DataFrame is a fancy 2D array. Here's an example.
```
# Standard Python dictionary, nothing new and exciting.
population_dict = {'California': 38332521,
'Texas': 26448193,
'New York': 19651127,
'Florida': 19552860,
'Illinois': 12882135}
population = pd.Series(population_dict) # Oh right: you can feed dicts to Series!
area_dict = {'California': 423967,
'Texas': 695662,
'New York': 141297,
'Florida': 170312,
'Illinois': 149995}
area = pd.Series(area_dict)
# Build the DataFrame!
states = pd.DataFrame({'population': population,
'area': area})
print(states)
```
DataFrames are really nice--you can directly access all the extra information they contain.
```
print(states.index) # Our row names
print(states.columns) # Our Series / column names
```
You can also directly access the property you're interested in, rather than having to memorize the index number as with NumPy arrays:
```
print(states['population'])
```
But you can also access the same information *almost* as you would with a NumPy array:
```
print(states.iloc[:, 1])
```
Note the use of the `.iloc` attribute of DataFrames.
This is to handle the fact that you can assign *entirely customized* integer indices to DataFrames, resulting in potentially confusing behavior when you slice them--if you slice with `1:3`, are you referring to the first and third items in the DataFrame, or the items you specifically indexed as the first and third items? With DataFrames, these can be two different concepts!
- Use `.iloc` if you want to use *implicit* ordering, meaning the automatic Python internal ordering.
- Use `.loc` if you want to use *explicit* ordering, or the ordering that you set when you built the DataFrame.
- Use `.ix` if you want a *hybrid* of the two.
If you just want the whipper-snappers to get off your lawn, don't worry about this distinction. As long as you don't explicitly set the indices yourself when you build a DataFrame, just use `iloc`.
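Here's a minimal sketch of the difference, using a `Series` with a custom integer index (the values are made up for the example):
```
import pandas as pd

s = pd.Series(['a', 'b', 'c'], index = [10, 20, 30])
print(s.iloc[0])     # implicit, positional: the first element, 'a'
print(s.loc[10])     # explicit: the element labeled 10, also 'a'
print(s.iloc[1:3])   # positions 1 and 2 -> 'b' and 'c'
print(s.loc[10:20])  # labels 10 through 20, inclusive -> 'a' and 'b'
```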
### Missing data
So what do DataFrames have to do with data exploration?...besides making it really easy, of course.
pandas has some phenomenal missing-data capabilities built-in to Series and DataFrames. As an example by comparison, let's see what happens if we have a `None` or `NaN` in our NumPy array when we try to do arithmetic.
```
x = np.array([0, 1, None, 2])
print(x.sum())
```
Welp, that crashed and burned. What about using `NaN` instead?
```
x = np.array([0, 1, np.nan, 2])
print(x.sum())
```
Well, it didn't crash. But since "NaN" specifically stands for "Not A Number", it makes arithmetic difficult: any operation involving a `NaN` will return `NaN`.
A Series has a bunch of tools available to us to sniffing out missing values and handling them gracefully.
- **`isnull()`**: generate a boolean mask which indicates where there are missing values.
- **`notnull()`**: opposite of `isnull()`.
- **`dropna()`**: return a version of the data that drops all `NaN` values.
- **`fillna()`**: return a copy of the data with `NaN` values filled in with something else or otherwise imputed.
```
data = pd.Series([1, np.nan, 'hello', None])
print(data.isnull()) # Where are the null indices?
print()
print(data[data.notnull()]) # Use the boolean mask to pull out non-null indices.
```
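The other two methods from the list above can be sketched on the same `Series` (the `data` variable from the cell above):
```
print(data.dropna())           # drops the NaN and None entries entirely
print(data.fillna('MISSING'))  # fills the missing entries with a placeholder value instead
```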
This is but a tiny taste of the majesty that is the pandas package. I highly recommend checking it out further.
## Review Questions
Some questions to discuss and consider:
1: What are the advantages and disadvantages of using pandas DataFrames instead of NumPy arrays?
2: Name three strategies for visualizing and exploring 5-dimensional data. What are the pros and cons of each?
3: You're putting your data science skills to work and writing a program that automatically classifies web articles into semantic categories (e.g. sports, politics, food, etc). You start by counting words, resulting in a model with 100,000 dimensions (words are dimensions!). Can you come up with any kind of strategy for exploring these data?
## Course Administrivia
- One more week left of class!
- A9 is due Sunday evening. A10, the last assignment, will be out Tuesday and due next Friday evening.
- We'll hold one final Slack review session **next Friday at 12-1:30pm.** Come with any questions you may have!
- The final exam will be very much like the midterm, in that it will be JupyterHub-based and flexible in terms of when you want to take it. More details to come next week.
## Additional Resources
1. Grus, Joel. *Data Science from Scratch*, Chapter 10. 2015. ISBN-13: 978-1491901427
2. VanderPlas, Jake. *Python Data Science Handbook*, Chapter 4. 2015. ISBN-13: 978-1491912058
# Quiz 3
BEFORE YOU START THIS QUIZ:
1. Click on "Copy to Drive" to make a copy of the quiz,
2. Click on "Share",
3. Click on "Change" and select "Anyone with this link can edit"
4. Click "Copy link" and
5. Paste the link into [this Canvas assignment](https://canvas.olin.edu/courses/313/assignments/4985).
This quiz is open notes, open internet. The only thing you can't do is ask for help.
Copyright 2021 Allen Downey, [MIT License](http://opensource.org/licenses/MIT)
```
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/DSIRP/raw/main/american-english')
```
## Question 1
The following is the implementation of a binary search tree (BST) from `search.ipynb`.
```
class Node:
def __init__(self, data, left=None, right=None):
self.data = data
self.left = left
self.right = right
def __repr__(self):
return f'Node({self.data}, {repr(self.left)}, {repr(self.right)})'
class BSTree:
def __init__(self, root=None):
self.root = root
def __repr__(self):
return f'BSTree({repr(self.root)})'
def insert(tree, data):
tree.root = insert_rec(tree.root, data)
def insert_rec(node, data):
if node is None:
return Node(data)
if data < node.data:
node.left = insert_rec(node.left, data)
else:
node.right = insert_rec(node.right, data)
return node
```
The following cell reads words from a file and adds them to a BST.
But if you run it, you'll get a `RecursionError`.
```
filename = 'american-english'
tree = BSTree()
for line in open(filename):
for word in line.split():
insert(tree, word.strip())
```
However, if we put the words into a list, shuffle the list, and then put the shuffled words into the BST, it works.
```
word_list = []
for line in open(filename):
for word in line.split():
word_list.append(word.strip())
from random import shuffle
shuffle(word_list)
tree = BSTree()
for word in word_list:
insert(tree, word.strip())
```
Write a few clear, complete sentences to answer the following two questions:
1) Why did we get a `RecursionError`, and why does shuffling the words fix the problem?
2) What is the order of growth for the whole process; that is, reading the words into a list, shuffling the list, and then putting the shuffled words into a binary search tree. You can assume that `shuffle` is linear.
## Question 2
As we discussed in class, there are three versions of the search problem:
1) Checking whether an element is in a collection; for example, this is what the `in` operator does.
2) Finding the index of an element in an ordered collection; for example, this is what the string method `find` does.
3) In a collection of key-value pairs, finding the value that corresponds to a given key; this is what the dictionary method `get` does.
In `search.ipynb`, we used a BST to solve the first problem. In this exercise, you will modify it to solve the third problem.
Here's the code again (although notice that the names of the objects are `MapNode` and `BSTMap`).
```
class MapNode:
def __init__(self, data, left=None, right=None):
self.data = data
self.left = left
self.right = right
def __repr__(self):
return f'Node({self.data}, {repr(self.left)}, {repr(self.right)})'
class BSTMap:
def __init__(self, root=None):
self.root = root
def __repr__(self):
return f'BSTMap({repr(self.root)})'
def insert_map(tree, data):
tree.root = insert_map_rec(tree.root, data)
def insert_map_rec(node, data):
if node is None:
return MapNode(data)
if data < node.data:
node.left = insert_map_rec(node.left, data)
else:
node.right = insert_map_rec(node.right, data)
return node
```
Modify this code so that it stores keys and values, rather than just elements of a collection.
Then write a function called `get` that takes a `BSTMap` and a key:
* If the key is in the map, it should return the corresponding value;
* Otherwise it should raise a `KeyError` with an appropriate message.
You can use the following code to test your implementation.
```
tree_map = BSTMap()
keys = 'uniqueltrs'
values = range(len(keys))
for key, value in zip(keys, values):
print(key, value)
insert_map(tree_map, key, value)
tree_map
for key in keys:
print(key, get(tree_map, key))
```
The following should raise a `KeyError`.
```
get(tree_map, 'b')
```
## Alternative solution
Modify this code so that it stores keys and values, rather than just elements of a collection.
Then write a function called `get` that takes a `BSTMap` and a key:
* If the key is in the map, it should return the corresponding value;
* Otherwise it should raise a `KeyError` with an appropriate message.
You can use the following code to test your implementation.
```
tree_map = BSTMap()
keys = 'uniqueltrs'
values = range(len(keys))
for key, value in zip(keys, values):
print(key, value)
insert_map(tree_map, key, value)
tree_map
for key in keys:
print(key, get(tree_map, key))
```
|
github_jupyter
|
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/DSIRP/raw/main/american-english')
class Node:
def __init__(self, data, left=None, right=None):
self.data = data
self.left = left
self.right = right
def __repr__(self):
return f'Node({self.data}, {repr(self.left)}, {repr(self.right)})'
class BSTree:
def __init__(self, root=None):
self.root = root
def __repr__(self):
return f'BSTree({repr(self.root)})'
def insert(tree, data):
tree.root = insert_rec(tree.root, data)
def insert_rec(node, data):
if node is None:
return Node(data)
if data < node.data:
node.left = insert_rec(node.left, data)
else:
node.right = insert_rec(node.right, data)
return node
filename = 'american-english'
tree = BSTree()
for line in open(filename):
for word in line.split():
insert(tree, word.strip())
word_list = []
for line in open(filename):
for word in line.split():
word_list.append(word.strip())
from random import shuffle
shuffle(word_list)
tree = BSTree()
for word in word_list:
insert(tree, word.strip())
class MapNode:
def __init__(self, data, left=None, right=None):
self.data = data
self.left = left
self.right = right
def __repr__(self):
return f'Node({self.data}, {repr(self.left)}, {repr(self.right)})'
class BSTMap:
def __init__(self, root=None):
self.root = root
def __repr__(self):
return f'BSTMap({repr(self.root)})'
def insert_map(tree, data):
tree.root = insert_map_rec(tree.root, data)
def insert_map_rec(node, data):
if node is None:
return MapNode(data)
if data < node.data:
node.left = insert_map_rec(node.left, data)
else:
node.right = insert_map_rec(node.right, data)
return node
tree_map = BSTMap()
keys = 'uniqueltrs'
values = range(len(keys))
for key, value in zip(keys, values):
print(key, value)
insert_map(tree_map, key, value)
tree_map
for key in keys:
print(key, get(tree_map, key))
get(tree_map, 'b')
tree_map = BSTMap()
keys = 'uniqueltrs'
values = range(len(keys))
for key, value in zip(keys, values):
print(key, value)
insert_map(tree_map, key, value)
tree_map
for key in keys:
print(key, get(tree_map, key))
| 0.376967 | 0.91302 |
# Minimally Sufficient Pandas with Ted Petrou
* Author of Pandas Cookbook
* Founder of Dunder Data
# Collect Data
* Who knows that Pandas refers to a Python library as well as an east-Asian bear?
* Have you used Pandas before?
* Have you used Pandas in production before?
# Do these apply to you?
* Don't know the difference between `[], .iloc, .loc, .ix, .at, .iat`
* Use `reset_index` frequently because you have no idea how to deal with MultiIndexes
* Use for-loops frequently
* Use `apply` frequently
* Struggle with Pandas, and find yourself wishing it were as easy as R
# Pandas Quiz #1
# How do you select the food column?
```
import pandas as pd
df = pd.read_csv('data/sample_data.csv', index_col=0)
df
```
# Pandas Quiz #2
### How do you select the row just for Penelope?
```
df
```
# Pandas Quiz #3
### How would you select the food and age columns for everyone over the age of 30?
```
df
```
# Minimally Sufficient Pandas
* There are multiple ways to accomplish most tasks
* Often, there is not an obvious way to do things
* A small subset of the library covers nearly all of the possible tasks
* Knowing many obscure Pandas tricks is not helpful
* Developing a standard Pandas usage guide can be helpful
* Pandas can be written in a very implicit way. Be as explicit as possible.
* Ask yourself whether method B gives you more functionality than method A
* Pandas is difficult to use in production - striving for consistency and simplicity can make a big difference
* There is an incredible number of open issues/bugs, and using a minimally sufficient subset of Pandas can help you avoid landing on one
# Simple Guidelines
* Use only bracket notation and never dot notation to select single columns
* Columns with spaces do not work
* Column names that collide with methods do not work
* Only use string names for columns
* Avoid chained indexing, especially when assigning new values to subsets of data
* Do not do this: `df[df['col1'] > 10]['col2'] = 10`
* Never use `.ix` for subset selection. It is deprecated.
* No reason to use `.at` and `.iat`
* Use bracket notation instead of the `query` method to do boolean selection
* Use the arithmetic and comparison operators instead of their counterpart methods (`add`, `gt`, etc...)
* Use DataFrame/Series methods when they exist
* Avoid built-in `Python` functions
* Avoid the `apply` method when possible
* Do not store complex data types in DataFrame/Series values - i.e. no lists, Series, or DataFrames within DataFrames/Series
* Decide on a syntax for grouping (especially when aggregating)
* `df.groupby(['grouping', 'columns']).agg({'aggregating column': 'aggregating func'})`
* `df.groupby(['grouping', 'columns'])['aggregating column'].aggregating_func()`
* Have a standard way of handling a multi-level Index
* Should you reset to single level?
* Should you reset and rename multi-level column indexes?
* Be very careful when calling `apply` on a `groupby` - this is the slowest operation in Pandas
* Pre-calculate anything that is independent of the group
* `melt/pivot` vs `stack/unstack` - They both do the same thing
# Chained Indexing
Chained indexing occurs when you make consecutive subset selections. If you see back-to-back brackets (`][`), you have done chained indexing.
```
df[['color', 'food', 'state']][['color', 'food']]
# using a single indexer
df.loc[df['age'] > 30, ['color', 'food']]
```
### Helpful to break apart row and column selection
```
rs = df['age'] > 30
cs = ['color', 'food']
df.loc[rs, cs]
```
# Two common scenarios when assigning subsets of data
1. You want to make an assignment to a particular subset of your DataFrame but want to keep doing analysis on the entire DataFrame
1. You want to select a subset of data and store it as its own variable and modify that subset without modifying your original data.
```
df1 = pd.read_csv('data/sample_data.csv', index_col=0)
df1
```
### No assignment!
```
df1.loc[['Aaron', 'Dean']]['color'] = 'PURPLE'
df1
```
### Idiomatic
```
rs = ['Aaron', 'Dean']
cs = 'color'
df1.loc[rs, cs] = 'PURPLE'
df1
```
# Summary of Scenario 1:
* Use exactly one set of brackets to make the assignment
* You know you've made a mistake when you see back to back brackets like this `][`
* Separate row and column selection by a comma within the same set of brackets
# Scenario 2
Scenario 2 exists when you take a subset of data and want to keep working with just that subset. You may not care at all about the original DataFrame, but you probably won't want to change its data.
In this scenario, you will use the `copy` method to create a fresh independent copy of your subset and then make changes to that.
```
df2 = pd.read_csv('data/sample_data.csv', index_col=0)
food_score = df2[['food', 'score']]
food_score
criteria= food_score['food'].isin(['Steak', 'Lamb'])
food_score.loc[criteria, 'score'] = 99
food_score
df2
```
### Idiomatic
Use the `copy` method:
```
food_score = df[['food', 'score']].copy()
criteria = food_score['food'].isin(['Steak', 'Lamb'])
food_score.loc[criteria, 'score'] = 99
food_score
```
# `.ix` is deprecated
Remove every trace of it from your code. It is ambiguous. `.loc` and `.iloc` are explicit. Use them.
### Very little reason to use `.at` and `.iat`
These two indexers select a single cell from a DataFrame/Series. There is almost never going to be a case when they are necessary. They provide a small speed-up over `.loc` and `.iloc`, but if you really want to select data faster, you should drop down into NumPy.
# `query` method
It is more readable but does not work with columns with spaces. It also adds no additional functionality over normal boolean indexing, so why use it?
```
df.query('age > 30')
df[df['age'] > 30]
df3 = df.copy()
df3 = df3.rename(columns={'food': 'fave food'})
df3
df3.query('fave food == "Steak"')
```
# Arithmetic and Comparison Operators
Use the arithmetic and comparison operators `+, -, *, /, <, >, <=, >=, ==, !=` over their counterpart methods `add, sub, mul, div, lt, gt, le, ge, eq, ne` unless you need to change the direction of an operation.
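Before the college data, here is a minimal sketch (toy DataFrame, illustrative column names) of the equivalence between the two forms, and of the one situation where the method form earns its keep -- when you need the `axis` argument:
```
import pandas as pd

toy = pd.DataFrame({'a': [1, 2, 3], 'b': [10, 20, 30]})

# Operator and method forms give the same result -- prefer the operator.
print((toy['a'] + toy['b']).equals(toy['a'].add(toy['b'])))  # True

# The method form matters only when you need to control the axis,
# e.g. subtracting a per-row Series from every column.
row_min = toy.min(axis='columns')
print(toy.sub(row_min, axis='index'))
```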
```
college = pd.read_csv('data/college.csv', index_col='instnm')
pd.options.display.max_columns = 100
college.head()
college_ugds = college.loc[:, 'ugds_white':'ugds_unkn']
college_ugds.head()
race_ugds_mean = college_ugds.mean()
race_ugds_mean
```
### Default is to align Series index with columns
```
college_ugds_mean_diff = college_ugds - race_ugds_mean
college_ugds_mean_diff.head(10)
race_school_min = college_ugds.min(axis='columns')
race_school_min.head(10)
# blows up due to outer join of index
college_ugds - race_school_min
```
Arithmetic and comparison **methods** default to `axis='columns'`. Almost all others default to axis='index'. We must use the `sub` method to change the direction of operation.
```
college_ugds.sub(race_school_min, axis='index').head(10)
```
# Use DataFrame/Series methods
A common mistake is to use a built-in core Python function instead of a DataFrame/Series method.
```
ugds = college['ugds'].dropna()
ugds.head(10)
sum(ugds)
ugds.sum()
```
## No difference except when there are missing values
```
sum(college['ugds'])
college['ugds'].sum()
```
## Large performance difference
```
ugds1 = ugds.sample(n=10**6, replace=True)
%timeit -n 5 sum(ugds1)
%timeit -n 5 ugds1.sum()
```
# `apply` - the method that does nothing but is used the most often
The `apply` method does basically nothing itself. It simply replaces manually writing a for loop.
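To make that concrete, here is a tiny standalone sketch (toy frame, not the college data) showing that a column-wise `apply` is essentially a hand-written loop over the columns:
```
import pandas as pd

toy = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})

via_apply = toy.apply(lambda col: col.max())
via_loop = pd.Series({name: col.max() for name, col in toy.items()})

print(via_apply.equals(via_loop))  # True -- apply just hides the loop
```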
```
college_ugds.head()
college_ugds.apply(lambda x: x.max())
%timeit -n 5 college_ugds.apply(lambda x: x.max())
college_ugds.max()
%timeit -n 5 college_ugds.max()
college_ugds.apply(lambda x: x.max(), axis='columns').head()
college_ugds.max(axis='columns').head()
```
### Huge time difference when doing `axis='columns'`
A for-loop over the rows is a very slow operation. Avoid it at all costs.
```
%timeit -n 1 -r 1 college_ugds.apply(lambda x: x.max(), axis='columns')
%timeit -n 5 college_ugds.max(axis='columns').head()
```
# Acceptable usages of `apply`
Only use `apply` when a built-in pandas method does not exist.
```
earnings_debt = college[['md_earn_wne_p10', 'grad_debt_mdn_supp']]
earnings_debt.head()
earnings_debt.dtypes
earnings_debt.astype('float')  # raises an error if the columns contain non-numeric strings (hence errors='coerce' below)
pd.to_numeric(earnings_debt)  # raises an error: to_numeric expects a Series or 1-d array, not a DataFrame
earnings_debt.apply(pd.to_numeric, errors='coerce').head()
```
# Storing complex objects inside DataFrames/Series
Just because Pandas allows you to do something does not mean it is a good idea. There is no good support for non-scalar values stored within the cells of a DataFrame/Series. Store multiple values in separate columns instead.
```
# never do this
college_ugds.head(20).apply(lambda x: pd.Series({'max and min': [x.min(), x.max()]}), axis=1).head()
```
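A sketch of the preferred layout (toy frame for illustration): keep one scalar per cell by giving the row-wise minimum and maximum their own columns.
```
import pandas as pd

toy = pd.DataFrame({'ugds_white': [0.2, 0.5], 'ugds_black': [0.7, 0.1]})

# One scalar per cell: the row-wise min and max get their own columns.
summary = toy.assign(row_min=toy.min(axis='columns'),
                     row_max=toy.max(axis='columns'))
print(summary)
```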
# Know the three components of a groupby aggregation
All groupby aggregations contain 3 components:
* Grouping Columns - Unique combinations of these for independent groups
* Aggregating Columns - The values in these columns will be aggregated to a single value
* Aggregating functions - The type of aggregation to be used. Must output a single value
# `groupby` syntax - standardize for readability
There are a number of syntaxes that get used for the `groupby` method.
```
# syntax that I use
state_math_sat_max = college.groupby('stabbr') \
.agg({'satmtmid': 'max'})
state_math_sat_max.head()
college.groupby('stabbr')['satmtmid'].agg('max').head()
# no reason to use the full word aggregate. Always use agg
college.groupby('stabbr')['satmtmid'].aggregate('max').head()
college.groupby('stabbr')['satmtmid'].max().head()
college[['stabbr', 'satmtmid']].groupby('stabbr').max().head()
```
# Handling a MultiIndex - Usually after grouping
```
col_stats = college.groupby(['stabbr', 'relaffil']) \
.agg({'ugds': ['min', 'max'],
'satmtmid': ['median', 'max']})
col_stats.head(10)
```
### I don't like MultiIndexes
Personally, I find that MultiIndexes add no value to pandas. Selecting subsets of data from them is not obvious. Instead, renaming the columns by hand is not a bad strategy. We can also reset the index.
```
col_stats.columns = ['min ugds', 'max ugds', 'median satmtmid', 'max satmtmid']
col_stats = col_stats.reset_index()
col_stats.head()
```
# Calling `apply` on a `groupby` object - be careful
Using `apply` within a `groupby` can lead to disastrous performance. It is one of the slowest operations in all of pandas.
### Finding the percentage of all undergraduates represented in the top 5 most populous colleges
To accomplish this, we write a custom function to sort the values of each group from greatest to least. We then select the first 5 values with .iloc and sum them. We divide this sum by the total.
```
def top5_perc(s):
s = s.sort_values(ascending=False)
top5_total = s.iloc[:5].sum()
total = s.sum()
return top5_total / total
college.groupby('stabbr').agg({'ugds': top5_perc}).head(10)
```
# Run operations that are independent of the group outside of the custom function
The best way to avoid giant performance leaks with groupby-apply is to run all operations that are independent of the group outside of the custom aggregation function. Here, we sort the entire DataFrame first.
```
def top5_perc_simple(s):
top5_total = s.iloc[:5].sum()
total = s.sum()
return top5_total / total
college.sort_values('ugds', ascending=False) \
.groupby('stabbr').agg({'ugds': top5_perc_simple}).head(10)
%timeit -n 5 college.groupby('stabbr').agg({'ugds': top5_perc})
%%timeit -n 5
college.sort_values('ugds', ascending=False) \
.groupby('stabbr').agg({'ugds': top5_perc_simple}).head(10)
```
# Pandas Power User Optimization
```
college_top5 = college.sort_values('ugds', ascending=False) \
.groupby('stabbr').head()
top5_total = college_top5.groupby('stabbr').agg({'ugds': 'sum'})
top5_total.head()
total = college.groupby('stabbr').agg({'ugds': 'sum'})
total.head()
(top5_total / total).head()
%%timeit -n 5
college_top5 = college.sort_values('ugds', ascending=False) \
.groupby('stabbr').head()
top5_total = college_top5.groupby('stabbr').agg({'ugds': 'sum'})
total = college.groupby('stabbr').agg({'ugds': 'sum'})
top5_total / total
```
# `melt` vs `stack`
These methods are virtually identical. I prefer `melt` as it avoids a multi-level index.
```
movie = pd.read_csv('data/movie.csv')
movie.head()
act1 = movie.melt(id_vars=['title'],
value_vars=['actor1', 'actor2', 'actor3'],
var_name='actor number',
value_name='actor name')
stacked = movie.set_index('title')[['actor1', 'actor2', 'actor3']].stack()
stacked.head()
stacked.reset_index(name='actor name').head(10)
act1.pivot(index='title', columns='actor number', values='actor name').head()
stacked.unstack().head()
```
# `pivot_table` vs `groupby` then `unstack`
`pivot_table` can directly create a pivot table. You can achieve the exact same result by grouping by multiple columns and then unstacking. I prefer the pivot table as it is clearer.
```
emp = pd.read_csv('data/employee.csv')
emp.head()
emp.pivot_table(index='race', columns='gender', values='salary')
race_gen_sal = emp.groupby(['race', 'gender']).agg({'salary': 'mean'})
race_gen_sal
race_gen_sal.unstack('gender')
```
|
github_jupyter
|
import pandas as pd
df = pd.read_csv('data/sample_data.csv', index_col=0)
df
df
df
df[['color', 'food', 'state']][['color', 'food']]
# using a single indexer
df.loc[df['age'] > 30, ['color', 'food']]
rs = df['age'] > 30
cs = ['color', 'food']
df.loc[rs, cs]
df1 = pd.read_csv('data/sample_data.csv', index_col=0)
df1
df1.loc[['Aaron', 'Dean']]['color'] = 'PURPLE'
df1
rs = ['Aaron', 'Dean']
cs = 'color'
df1.loc[rs, cs] = 'PURPLE'
df1
df2 = pd.read_csv('data/sample_data.csv', index_col=0)
food_score = df2[['food', 'score']]
food_score
criteria= food_score['food'].isin(['Steak', 'Lamb'])
food_score.loc[criteria, 'score'] = 99
food_score
df2
food_score = df[['food', 'score']].copy()
criteria = food_score['food'].isin(['Steak', 'Lamb'])
food_score.loc[criteria, 'score'] = 99
food_score
df.query('age > 30')
df[df['age'] > 30]
df3 = df.copy()
df3 = df3.rename(columns={'food': 'fave food'})
df3
df3.query('fave food == "Steak"')
college = pd.read_csv('data/college.csv', index_col='instnm')
pd.options.display.max_columns = 100
college.head()
college_ugds = college.loc[:, 'ugds_white':'ugds_unkn']
college_ugds.head()
race_ugds_mean = college_ugds.mean()
race_ugds_mean
college_ugds_mean_diff = college_ugds - race_ugds_mean
college_ugds_mean_diff.head(10)
race_school_min = college_ugds.min(axis='columns')
race_school_min.head(10)
# blows up due to outer join of index
college_ugds - race_school_min
college_ugds.sub(race_school_min, axis='index').head(10)
ugds = college['ugds'].dropna()
ugds.head(10)
sum(ugds)
ugds.sum()
sum(college['ugds'])
college['ugds'].sum()
ugds1 = ugds.sample(n=10**6, replace=True)
%timeit -n 5 sum(ugds1)
%timeit -n 5 ugds1.sum()
college_ugds.head()
college_ugds.apply(lambda x: x.max())
%timeit -n 5 college_ugds.apply(lambda x: x.max())
college_ugds.max()
%timeit -n 5 college_ugds.max()
college_ugds.apply(lambda x: x.max(), axis='columns').head()
college_ugds.max(axis='columns').head()
%timeit -n 1 -r 1 college_ugds.apply(lambda x: x.max(), axis='columns')
%timeit -n 5 college_ugds.max(axis='columns').head()
earnings_debt = college[['md_earn_wne_p10', 'grad_debt_mdn_supp']]
earnings_debt.head()
earnings_debt.dtypes
earnings_debt.astype('float')
pd.to_numeric(earnings_debt)
earnings_debt.apply(pd.to_numeric, errors='coerce').head()
# never do this
college_ugds.head(20).apply(lambda x: pd.Series({'max and min': [x.min(), x.max()]}), axis=1).head()
# syntax that I use
state_math_sat_max = college.groupby('stabbr') \
.agg({'satmtmid': 'max'})
state_math_sat_max.head()
college.groupby('stabbr')['satmtmid'].agg('max').head()
# no reason to use the full word aggregate. Always use agg
college.groupby('stabbr')['satmtmid'].aggregate('max').head()
college.groupby('stabbr')['satmtmid'].max().head()
college[['stabbr', 'satmtmid']].groupby('stabbr').max().head()
col_stats = college.groupby(['stabbr', 'relaffil']) \
.agg({'ugds': ['min', 'max'],
'satmtmid': ['median', 'max']})
col_stats.head(10)
col_stats.columns = ['min ugds', 'max ugds', 'median satmtmid', 'max satmtmid']
col_stats = col_stats.reset_index()
col_stats.head()
def top5_perc(s):
s = s.sort_values(ascending=False)
top5_total = s.iloc[:5].sum()
total = s.sum()
return top5_total / total
college.groupby('stabbr').agg({'ugds': top5_perc}).head(10)
def top5_perc_simple(s):
top5_total = s.iloc[:5].sum()
total = s.sum()
return top5_total / total
college.sort_values('ugds', ascending=False) \
.groupby('stabbr').agg({'ugds': top5_perc_simple}).head(10)
%timeit -n 5 college.groupby('stabbr').agg({'ugds': top5_perc})
%%timeit -n 5
college.sort_values('ugds', ascending=False) \
.groupby('stabbr').agg({'ugds': top5_perc_simple}).head(10)
college_top5 = college.sort_values('ugds', ascending=False) \
.groupby('stabbr').head()
top5_total = college_top5.groupby('stabbr').agg({'ugds': 'sum'})
top5_total.head()
total = college.groupby('stabbr').agg({'ugds': 'sum'})
total.head()
(top5_total / total).head()
%%timeit -n 5
college_top5 = college.sort_values('ugds', ascending=False) \
.groupby('stabbr').head()
top5_total = college_top5.groupby('stabbr').agg({'ugds': 'sum'})
total = college.groupby('stabbr').agg({'ugds': 'sum'})
top5_total / total
movie = pd.read_csv('data/movie.csv')
movie.head()
act1 = movie.melt(id_vars=['title'],
value_vars=['actor1', 'actor2', 'actor3'],
var_name='actor number',
value_name='actor name')
stacked = movie.set_index('title')[['actor1', 'actor2', 'actor3']].stack()
stacked.head()
stacked.reset_index(name='actor name').head(10)
act1.pivot(index='title', columns='actor number', values='actor name').head()
stacked.unstack().head()
emp = pd.read_csv('data/employee.csv')
emp.head()
emp.pivot_table(index='race', columns='gender', values='salary')
race_gen_sal = emp.groupby(['race', 'gender']).agg({'salary': 'mean'})
race_gen_sal
race_gen_sal.unstack('gender')
| 0.28607 | 0.980784 |
```
%matplotlib inline
import sys
sys.path.append('../../../')
from cnvfc import stats as cst
from cnvfc import tools as ctl
import numpy as np
import pandas as pd
import pathlib as pal
import seaborn as sbn
from matplotlib import pyplot as plt
from statsmodels.sandbox.stats.multicomp import multipletests as stm
n_iter = 1000
root_p = pal.Path('../../../data/')
del_v_con_p = root_p / 'processed/fc_profiles/cnv_22q_del_vs_con.tsv'
dc_null_p = root_p / 'processed/null_model/cnv_22q_null_model_genetic_status_deletion_vs_control.npy'
dup_v_con_p = root_p / 'processed/fc_profiles/cnv_22q_dup_vs_con.tsv'
dp_null_p = root_p / 'processed/null_model/cnv_22q_null_model_genetic_status_duplication_vs_control.npy'
dc = pd.read_csv(del_v_con_p, sep='\t')
dp = pd.read_csv(dup_v_con_p, sep='\t')
dc_null = np.load(dc_null_p)
dp_null = np.load(dp_null_p)
print('The average FC shift across all connections '
'is \n{:.2f}z for DEL and \n{:.2f}z for DUP carriers '
'\nin units of SD FC of CON subjects'.format(np.mean(dc.stand_betas),
np.mean(dp.stand_betas)))
```
## 22q deletion
```
print(ctl.report_connectivity_alterations(dc))
```
## 22q duplication
```
print(ctl.report_connectivity_alterations(dp))
```
## Visualize the alterations
```
# Display the global FC alterations of DELvCON and DUPvCON against each other
r = 1.5
g = sbn.jointplot(x=dp.stand_betas, y=dc.stand_betas,
kind='hex', ylim=(-r, r), xlim=(-r, r), joint_kws={"extent": (-r, r, -r, r)},
stat_func=None)
g.ax_joint.plot([0, 0], [-r, r], 'k')
g.ax_joint.plot([-r, r], [0, 0], 'k')
tmp = g.ax_joint.set(xlabel='$\Delta$ FC in 22q DUPvCON', ylabel='$\Delta$ FC in 22q DELvCON')
g.fig.suptitle('DELvCON vs DUPvCON');
```
## Put the global shifts into context of a null model
```
dc_null_beta = np.mean(dc_null, 1)
dp_null_beta = np.mean(dp_null, 1)
dc_emp_beta = np.mean(dc.stand_betas)
dp_emp_beta = np.mean(dp.stand_betas)
f = plt.figure(figsize=(6,4))
ax = plt.subplot(111)
# Plot the null distributions for the 22q DEL and DUP contrasts
g_delcon = sbn.distplot(dc_null_beta, hist=False, kde=True, ax=ax, label='NULL_22qDel_v_Con', color='red')
g_dupcon = sbn.distplot(dp_null_beta, hist=False, kde=True, ax=ax, label='NULL_22qDup_v_Con', color='blue')
# Now add the area under the curve
d_delcon = g_delcon.axes.lines[0].get_data()
d_dupcon = g_dupcon.axes.lines[1].get_data()
cut_delcon = np.max(np.where(dc_emp_beta > d_delcon[0]))
cut_dupcon = np.min(np.where(dp_emp_beta < d_dupcon[0]))
ax.fill_between(d_delcon[0][:cut_delcon], 0, d_delcon[1][:cut_delcon], color='tomato')
ax.fill_between(d_dupcon[0][cut_dupcon:], 0, d_dupcon[1][cut_dupcon:], color='steelblue')
ax.vlines(0, ax.get_ylim()[1], 0)
ax.set_yticks([])
ax.set_yticklabels([])
ax.set_xlim([-0.6, 0.6])
sbn.despine(ax=ax)
f.suptitle('22q $\Delta$ FC against NULL model', fontsize=18);
p_delcon = np.abs((np.sum(dc_emp_beta > dc_null_beta)+1)/(n_iter+1))
p_dupcon = np.abs((np.sum(dp_emp_beta < dp_null_beta)+1)/(n_iter+1))
print('On average 22q11.2 carriers have a shift of '
'\n{:.2f}z-scores (p={:.3f}) for DEL carriers and '
'\n{:.2f}z-scores (p={:.3f}) for DUP carriers.'.format(dc_emp_beta, p_delcon,
dp_emp_beta, p_dupcon))
```
|
github_jupyter
|
%matplotlib inline
import sys
sys.path.append('../../../')
from cnvfc import stats as cst
from cnvfc import tools as ctl
import numpy as np
import pandas as pd
import pathlib as pal
import seaborn as sbn
from matplotlib import pyplot as plt
from statsmodels.sandbox.stats.multicomp import multipletests as stm
n_iter = 1000
root_p = pal.Path('../../../data/')
del_v_con_p = root_p / 'processed/fc_profiles/cnv_22q_del_vs_con.tsv'
dc_null_p = root_p / 'processed/null_model/cnv_22q_null_model_genetic_status_deletion_vs_control.npy'
dup_v_con_p = root_p / 'processed/fc_profiles/cnv_22q_dup_vs_con.tsv'
dp_null_p = root_p / 'processed/null_model/cnv_22q_null_model_genetic_status_duplication_vs_control.npy'
dc = pd.read_csv(del_v_con_p, sep='\t')
dp = pd.read_csv(dup_v_con_p, sep='\t')
dc_null = np.load(dc_null_p)
dp_null = np.load(dp_null_p)
print('The average FC shift across all connections '
'is \n{:.2f}z for DEL and \n{:.2f}z for DUP carriers '
'\nin units of SD FC of CON subjects'.format(np.mean(dc.stand_betas),
np.mean(dp.stand_betas)))
print(ctl.report_connectivity_alterations(dc))
print(ctl.report_connectivity_alterations(dp))
# Display the global FC alterations of DELvCON and DUPvCON against each other
r = 1.5
g = sbn.jointplot(x=dp.stand_betas, y=dc.stand_betas,
kind='hex', ylim=(-r, r), xlim=(-r, r), joint_kws={"extent": (-r, r, -r, r)},
stat_func=None)
g.ax_joint.plot([0, 0], [-r, r], 'k')
g.ax_joint.plot([-r, r], [0, 0], 'k')
tmp = g.ax_joint.set(xlabel='$\Delta$ FC in 16p DUPvCON', ylabel='$\Delta$ FC in 22q DELvCON')
g.fig.suptitle('DELvCON vs DUPvCON');
dc_null_beta = np.mean(dc_null, 1)
dp_null_beta = np.mean(dp_null, 1)
dc_emp_beta = np.mean(dc.stand_betas)
dp_emp_beta = np.mean(dp.stand_betas)
f = plt.figure(figsize=(6,4))
ax = plt.subplot(111)
# Plot the 16pDEL
g_delcon = sbn.distplot(dc_null_beta, hist=False, kde=True, ax=ax, label='NULL_22qDel_v_Con', color='red')
g_dupcon = sbn.distplot(dp_null_beta, hist=False, kde=True, ax=ax, label='NULL_22qDup_v_Con', color='blue')
# Now add the area under the curve
d_delcon = g_delcon.axes.lines[0].get_data()
d_dupcon = g_dupcon.axes.lines[1].get_data()
cut_delcon = np.max(np.where(dc_emp_beta > d_delcon[0]))
cut_dupcon = np.min(np.where(dp_emp_beta < d_dupcon[0]))
ax.fill_between(d_delcon[0][:cut_delcon], 0, d_delcon[1][:cut_delcon], color='tomato')
ax.fill_between(d_dupcon[0][cut_dupcon:], 0, d_dupcon[1][cut_dupcon:], color='steelblue')
ax.vlines(0, ax.get_ylim()[1], 0)
ax.set_yticks([])
ax.set_yticklabels([])
ax.set_xlim([-0.6, 0.6])
sbn.despine(ax=ax)
f.suptitle('22q $\Delta$ FC against NULL model', fontsize=18);
p_delcon = np.abs((np.sum(dc_emp_beta > dc_null_beta)+1)/(n_iter+1))
p_dupcon = np.abs((np.sum(dp_emp_beta < dp_null_beta)+1)/(n_iter+1))
print('On average 22q11.2 carriers have a shift of '
'\n{:.2f}z-scores (p={:.3f}) for DEL carriers and '
'\n{:.2f}z-scores (p={:.3f}) for DUP carriers.'.format(dc_emp_beta, p_delcon,
dp_emp_beta, p_dupcon))
| 0.402392 | 0.625438 |
```
from spectacle.core.spectra import Spectrum1D
from spectacle.modeling.models import Absorption1D
from spectacle.core.lines import Line
from spectacle.process.lsf import LSF
from spectacle.analysis.metrics import correlate, npcorrelate, cross_correlate, autocorrelate
import matplotlib.pyplot as plt
import numpy as np
import uncertainties.unumpy as unp
%matplotlib notebook
plt.rcParams["figure.figsize"] = [12, 8]
```
# Correlation Metric Analysis
This notebook goes through a few different correlation metrics being explored right now.
```
# Generate two spectrums to use for correlation
disp = np.linspace(1150, 1250, 1000)
line = Line(name="HI", lambda_0=1.21567010E+03, v_doppler=1e7, column_density=10**14.66)
spectrum_model1 = Absorption1D(lines=[line])
spectrum1 = spectrum_model1(disp)
# spectrum1 = Spectrum1D(flux, dispersion=disp, lines=[line])
spectrum1.add_noise(0.0025)
spectrum1.uncertainty = np.sqrt(spectrum1.data * 0.01)
line = Line(name="HI2", lambda_0=1.19567010E+03, v_doppler=1e7, column_density=10**14.66)
spectrum_model2 = Absorption1D(lines=[line])
spectrum2 = spectrum_model2(disp)
# spectrum2 = Spectrum1D(flux, dispersion=disp, lines=[line])
spectrum2.add_noise(0.0025)
spectrum2.uncertainty = np.sqrt(spectrum2.data * 0.01)
# mask = (spectrum1.dispersion > 1150) & (spectrum1.dispersion < 1250)
f, (ax1, ax2) = plt.subplots(1, 2)
ax1.step(spectrum1.dispersion, spectrum1.data)#, yerr=spectrum1.uncertainty)
ax1.step(spectrum2.dispersion, spectrum2.data)#, yerr=spectrum2.uncertainty)
ax2.errorbar(spectrum1.dispersion, spectrum1.tau, yerr=spectrum1.tau_uncertainty)
ax2.errorbar(spectrum2.dispersion, spectrum2.tau, yerr=spectrum2.tau_uncertainty)
ax1.set_title("Flux")
ax1.set_ylabel("Normalized Flux")
ax1.set_xlabel("Wavelength [Angstrom]")
ax2.set_title("Optical Depth")
ax2.set_ylabel("Tau")
ax2.set_xlabel("Wavelength [Angstrom]")
plt.show()
```
## Peeples Correlation
### True
```
vals, uncerts, use_mask = correlate(spectrum1, spectrum2)
tvals, tuncerts, use_mask = correlate(spectrum1, spectrum2, use_tau=True)
flux_sum = unp.uarray(vals, uncerts).sum()
tau_sum = unp.uarray(tvals, tuncerts).sum()
print("Sum: {}+/-{}".format(flux_sum.nominal_value, flux_sum.std_dev))
print("Tau Sum: {}+/-{}".format(tau_sum.nominal_value, tau_sum.std_dev))
f, (ax1, ax2) = plt.subplots(2, 1)
ax1.errorbar(range(len(vals)), vals, yerr=uncerts)
ax1.set_title("Flux Correlation, Sum: {}".format(unp.uarray(vals, uncerts).sum()))
ax2.errorbar(range(len(tvals)), tvals, yerr=tuncerts)
ax2.set_title("Tau Correlation, Sum: {}+/-{}".format(tau_sum.nominal_value, tau_sum.std_dev))
plt.tight_layout()
plt.show()
```
### Lite
```
vals, uncerts, use_mask = correlate(spectrum1, spectrum2, mode='lite')
tvals, tuncerts, use_mask = correlate(spectrum1, spectrum2, mode='lite', use_tau=True)
tau_sum = unp.uarray(tvals, tuncerts).sum()
print("Sum: {}".format(unp.uarray(vals, uncerts).sum()))
print("Tau Sum: {}".format(tau_sum))
f, (ax1, ax2) = plt.subplots(2, 1)
ax1.errorbar(range(len(vals)), vals, yerr=uncerts)
ax1.set_title("Flux Correlation, Sum: {}".format(unp.uarray(vals, uncerts).sum()))
ax2.errorbar(range(len(tvals)), tvals)#, yerr=tuncerts)
ax2.set_title("Tau Correlation, Sum: {:.2}+/-{:.2}".format(tau_sum.nominal_value, tau_sum.std_dev))
plt.tight_layout()
plt.show()
```
### Full
```
vals, uncerts, use_mask = correlate(spectrum1, spectrum2, mode='full')
tvals, tuncerts, use_mask = correlate(spectrum1, spectrum2, mode='full', use_tau=True)
flux_sum = unp.uarray(vals, uncerts).sum()
tau_sum = unp.uarray(tvals, tuncerts).sum()
print("Sum: {}".format(flux_sum))
print("Tau Sum: {}".format(tau_sum))
f, (ax1, ax2) = plt.subplots(2, 1)
ax1.errorbar(range(len(vals)), vals, yerr=uncerts)
ax1.set_title("Flux Correlation, Sum: {}".format(unp.uarray(vals, uncerts).sum()))
ax2.errorbar(range(len(tvals)), tvals, yerr=tuncerts)
ax2.set_title("Tau Correlation, Sum: {:.2}+/-{:.2}".format(tau_sum.nominal_value, tau_sum.std_dev))
plt.tight_layout()
plt.show()
```
## Numpy Correlate
### `Valid` mode
Mode ‘valid’ returns output of length max(M, N) - min(M, N) + 1. The convolution product is only given for points where the signals overlap completely. Values outside the signal boundary have no effect.
```
vals, uncerts, use_mask = npcorrelate(spectrum1, spectrum2)
tvals, tuncerts, use_mask = npcorrelate(spectrum1, spectrum2, use_tau=True)
print(unp.uarray(vals, uncerts))
print(unp.uarray(tvals, tuncerts))
```
### `Same` mode
Mode ‘same’ returns output of length max(M, N). Boundary effects are still visible.
```
vals, uncerts, use_mask = npcorrelate(spectrum1, spectrum2, mode='same')
tvals, tuncerts, use_mask = npcorrelate(spectrum1, spectrum2, mode='same', use_tau=True)
f, (ax1, ax2) = plt.subplots(2, 1)
ax1.errorbar(range(len(vals)), vals, yerr=uncerts)
ax1.set_title("Flux Correlation, Sum: {}".format(unp.uarray(vals, uncerts).sum()))
ax2.errorbar(range(len(tvals)), tvals, yerr=tuncerts)
ax2.set_title("Tau Correlation, Sum: {}".format(unp.uarray(tvals, tuncerts).sum()))
plt.tight_layout()
plt.show()
```
### `Full` mode
By default, mode is ‘full’. This returns the convolution at each point of overlap, with an output shape of (N+M-1,). At the end-points of the convolution, the signals do not overlap completely, and boundary effects may be seen.
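To make the three output lengths concrete, here is a tiny standalone example with plain NumPy arrays (independent of the spectra used above):
```
import numpy as np

a = np.array([1., 2., 3., 4.])  # length M = 4
v = np.array([1., 0., 1.])      # length N = 3

print(np.correlate(a, v, mode='valid'))  # length max(M, N) - min(M, N) + 1 = 2
print(np.correlate(a, v, mode='same'))   # length max(M, N) = 4
print(np.correlate(a, v, mode='full'))   # length M + N - 1 = 6
```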
```
vals, uncerts, use_mask = npcorrelate(spectrum1, spectrum2, mode='full')
tvals, tuncerts, use_mask = npcorrelate(spectrum1, spectrum2, mode='full', use_tau=True)
f, (ax1, ax2) = plt.subplots(2, 1)
ax1.errorbar(range(len(vals)), vals, yerr=uncerts)
ax1.set_title("Flux Correlation, Sum: {}".format(unp.uarray(vals, uncerts).sum()))
ax2.errorbar(range(len(tvals)), tvals, yerr=tuncerts)
ax2.set_title("Tau Correlation, Sum: {}".format(unp.uarray(tvals, tuncerts).sum()))
plt.tight_layout()
plt.show()
```
## Peeples Autocorrelate
```
res1 = autocorrelate(spectrum1)
res2 = autocorrelate(spectrum2)
tres1 = autocorrelate(spectrum1, use_tau=True)
tres2 = autocorrelate(spectrum2, use_tau=True)
print("Spectrum1: {}".format(unp.uarray(*res1)))
print("Spectrum2: {}".format(unp.uarray(*res2)))
print("Tau Spectrum1: {}".format(unp.uarray(*tres1)))
print("Tau Spectrum2: {}".format(unp.uarray(*tres2)))
```
|
github_jupyter
|
from spectacle.core.spectra import Spectrum1D
from spectacle.modeling.models import Absorption1D
from spectacle.core.lines import Line
from spectacle.process.lsf import LSF
from spectacle.analysis.metrics import correlate, npcorrelate, cross_correlate, autocorrelate
import matplotlib.pyplot as plt
import numpy as np
import uncertainties.unumpy as unp
%matplotlib notebook
plt.rcParams["figure.figsize"] = [12, 8]
# Generate two spectrums to use for correlation
disp = np.linspace(1150, 1250, 1000)
line = Line(name="HI", lambda_0=1.21567010E+03, v_doppler=1e7, column_density=10**14.66)
spectrum_model1 = Absorption1D(lines=[line])
spectrum1 = spectrum_model1(disp)
# spectrum1 = Spectrum1D(flux, dispersion=disp, lines=[line])
spectrum1.add_noise(0.0025)
spectrum1.uncertainty = np.sqrt(spectrum1.data * 0.01)
line = Line(name="HI2", lambda_0=1.19567010E+03, v_doppler=1e7, column_density=10**14.66)
spectrum_model2 = Absorption1D(lines=[line])
spectrum2 = spectrum_model2(disp)
# spectrum2 = Spectrum1D(flux, dispersion=disp, lines=[line])
spectrum2.add_noise(0.0025)
spectrum2.uncertainty = np.sqrt(spectrum2.data * 0.01)
# mask = (spectrum1.dispersion > 1150) & (spectrum1.dispersion < 1250)
f, (ax1, ax2) = plt.subplots(1, 2)
ax1.step(spectrum1.dispersion, spectrum1.data)#, yerr=spectrum1.uncertainty)
ax1.step(spectrum2.dispersion, spectrum2.data)#, yerr=spectrum2.uncertainty)
ax2.errorbar(spectrum1.dispersion, spectrum1.tau, yerr=spectrum1.tau_uncertainty)
ax2.errorbar(spectrum2.dispersion, spectrum2.tau, yerr=spectrum2.tau_uncertainty)
ax1.set_title("Flux")
ax1.set_ylabel("Normalized Flux")
ax1.set_xlabel("Wavelength [Angstrom]")
ax2.set_title("Optical Depth")
ax2.set_ylabel("Tau")
ax2.set_xlabel("Wavelength [Angstrom]")
plt.show()
vals, uncerts, use_mask = correlate(spectrum1, spectrum2)
tvals, tuncerts, use_mask = correlate(spectrum1, spectrum2, use_tau=True)
flux_sum = unp.uarray(vals, uncerts).sum()
tau_sum = unp.uarray(tvals, tuncerts).sum()
print("Sum: {}".format(flux_sum.nominal_value, flux_sum.std_dev))
print("Tau Sum: {}".format(tau_sum.nominal_value, tau_sum.std_dev))
f, (ax1, ax2) = plt.subplots(2, 1)
ax1.errorbar(range(len(vals)), vals, yerr=uncerts)
ax1.set_title("Flux Correlation, Sum: {}".format(unp.uarray(vals, uncerts).sum()))
ax2.errorbar(range(len(tvals)), tvals, yerr=tuncerts)
ax2.set_title("Tau Correlation, Sum: {}+/-{}".format(tau_sum.nominal_value, tau_sum.std_dev))
plt.tight_layout()
plt.show()
vals, uncerts, use_mask = correlate(spectrum1, spectrum2, mode='lite')
tvals, tuncerts, use_mask = correlate(spectrum1, spectrum2, mode='lite', use_tau=True)
tau_sum = unp.uarray(tvals, tuncerts).sum()
print("Sum: {}".format(unp.uarray(vals, uncerts).sum()))
print("Tau Sum: {}".format(tau_sum))
f, (ax1, ax2) = plt.subplots(2, 1)
ax1.errorbar(range(len(vals)), vals, yerr=uncerts)
ax1.set_title("Flux Correlation, Sum: {}".format(unp.uarray(vals, uncerts).sum()))
ax2.errorbar(range(len(tvals)), tvals)#, yerr=tuncerts)
ax2.set_title("Tau Correlation, Sum: {:.2}+/-{:.2}".format(tau_sum.nominal_value, tau_sum.std_dev))
plt.tight_layout()
plt.show()
vals, uncerts, use_mask = correlate(spectrum1, spectrum2, mode='full')
tvals, tuncerts, use_mask = correlate(spectrum1, spectrum2, mode='full', use_tau=True)
flux_sum = unp.uarray(vals, uncerts).sum()
tau_sum = unp.uarray(tvals, tuncerts).sum()
print("Sum: {}".format(flux_sum))
print("Tau Sum: {}".format(tau_sum))
f, (ax1, ax2) = plt.subplots(2, 1)
ax1.errorbar(range(len(vals)), vals, yerr=uncerts)
ax1.set_title("Flux Correlation, Sum: {}".format(unp.uarray(vals, uncerts).sum()))
ax2.errorbar(range(len(tvals)), tvals, yerr=tuncerts)
ax2.set_title("Tau Correlation, Sum: {:.2}+/-{:.2}".format(tau_sum.nominal_value, tau_sum.std_dev))
plt.tight_layout()
plt.show()
vals, uncerts, use_mask = npcorrelate(spectrum1, spectrum2)
tvals, tuncerts, use_mask = npcorrelate(spectrum1, spectrum2, use_tau=True)
print(unp.uarray(vals, uncerts))
print(unp.uarray(tvals, tuncerts))
vals, uncerts, use_mask = npcorrelate(spectrum1, spectrum2, mode='same')
tvals, tuncerts, use_mask = npcorrelate(spectrum1, spectrum2, mode='same', use_tau=True)
f, (ax1, ax2) = plt.subplots(2, 1)
ax1.errorbar(range(len(vals)), vals, yerr=uncerts)
ax1.set_title("Flux Correlation, Sum: {}".format(unp.uarray(vals, uncerts).sum()))
ax2.errorbar(range(len(tvals)), tvals, yerr=tuncerts)
ax2.set_title("Tau Correlation, Sum: {}".format(unp.uarray(tvals, tuncerts).sum()))
plt.tight_layout()
plt.show()
vals, uncerts, use_mask = npcorrelate(spectrum1, spectrum2, mode='full')
tvals, tuncerts, use_mask = npcorrelate(spectrum1, spectrum2, mode='full', use_tau=True)
f, (ax1, ax2) = plt.subplots(2, 1)
ax1.errorbar(range(len(vals)), vals, yerr=uncerts)
ax1.set_title("Flux Correlation, Sum: {}".format(unp.uarray(vals, uncerts).sum()))
ax2.errorbar(range(len(tvals)), tvals, yerr=tuncerts)
ax2.set_title("Tau Correlation, Sum: {}".format(unp.uarray(tvals, tuncerts).sum()))
plt.tight_layout()
plt.show()
res1 = autocorrelate(spectrum1)
res2 = autocorrelate(spectrum2)
tres1 = autocorrelate(spectrum1, use_tau=True)
tres2 = autocorrelate(spectrum2, use_tau=True)
print("Spectrum1: {}".format(unp.uarray(*res1)))
print("Spectrum2: {}".format(unp.uarray(*res2)))
print("Tau Spectrum1: {}".format(unp.uarray(*tres1)))
print("Tau Spectrum2: {}".format(unp.uarray(*tres2)))
| 0.567218 | 0.901358 |
# Synopsis
# Configuration
```
db_name = 'persuasion.db'
OHCO = ['chap_num', 'para_num', 'sent_num', 'token_num']
```
# Libraries
```
import sqlite3
import pandas as pd
import numpy as np
```
# Process
```
with sqlite3.connect(db_name) as db:
K = pd.read_sql('SELECT * FROM token', db, index_col=OHCO)
V = pd.read_sql('SELECT * FROM vocab', db, index_col='term_id')
```
## Create DTM
### Create word mask
Let's filter out stopwords -- another hyperparameter.
```
WORDS = (K.punc == 0) & (K.num == 0) & K.term_id.isin(V[V.stop==0].index)
```
### Extract BOW from tokens
To extract a bag-of-words model from our tokens table, we apply a simple `groupby()` operation. Note that we can swap in our hyperparameters easily -- the chapter-level grouping (`OHCO[:1]`) and `'term_id'` can be replaced. We can easily write a function to simplify this process and make it more configurable.
```
BOW = K[WORDS].groupby(OHCO[:1]+['term_id'])['term_id'].count()
```
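A sketch of that more configurable helper (the function name and parameters are illustrative, not part of the original notebook):
```
def make_bow(tokens, bag_level, term_col='term_id'):
    """Count terms per bag, where bag_level is a list of OHCO index levels."""
    return tokens.groupby(bag_level + [term_col])[term_col].count()

# e.g. the chapter-level bags built above:
# BOW = make_bow(K[WORDS], OHCO[:1])
```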
### Convert BOW to DTM
```
DTM = BOW.unstack().fillna(0)
```
## Compute Term Frequencies and Weights
### Compute TF
```
alpha = .000001 # We introduce an arbitrary smoothing value
alpha_sum = alpha * V.shape[0]
TF = DTM.apply(lambda x: (x + alpha) / (x.sum() + alpha_sum), axis=1)
```
### Compute TFIDF
```
N_docs = DTM.shape[0]
V['df'] = DTM[DTM > 0].count()
TFIDF = TF * np.log2(N_docs / V[V.stop==0]['df'])
```
### Compute TFTH (Experiment)
```
THM = -(TF * np.log2(TF))
TFTH = TF.apply(lambda x: x * THM.sum(), 1)
```
### Add stats to V
```
V['tf_sum'] = TF.sum()
V['tf_mean'] = TF.mean()
V['tf_max'] = TF.max()
V['tfidf_sum'] = TFIDF.sum()
V['tfidf_mean'] = TFIDF.mean()
V['tfidf_max'] = TFIDF.max()
V['tfth_sum'] = TFTH.sum()
V['tfth_mean'] = TFTH.mean()
V['tfth_max'] = TFTH.max()
V['th_sum'] = THM.sum()
V['th_mean'] = THM.mean()
V['th_max'] = THM.max()
```
## Create Docs table
```
D = DTM.sum(1).astype('int').to_frame().rename(columns={0:'term_count'})
D['tf'] = D.term_count / D.term_count.sum()
```
## Get all doc pairs
```
chap_ids = D.index.tolist()
pairs = [(i,j) for i in chap_ids for j in chap_ids if j > i]
P = pd.DataFrame(pairs).reset_index(drop=True).set_index([0,1])
P.index.names = ['doc_x','doc_y']
```
## Compute Euclidean distance
```
def euclidean(row):
D1 = TFIDF.loc[row.name[0]]
D2 = TFIDF.loc[row.name[1]]
x = (D1 - D2)**2
y = x.sum()
z = np.sqrt(y)
return z
P['euclidean'] = 0
P['euclidean'] = P.apply(euclidean, 1)
```
## Compute Cosine similarity
```
def cosine(row):
D1 = TFIDF.loc[row.name[0]]
D2 = TFIDF.loc[row.name[1]]
x = D1 * D2
y = x.sum()
a = np.sqrt((D1**2).sum())
b = np.sqrt((D2**2).sum())
c = a * b
z = y / c
return z
P['cosine'] = P.apply(cosine, 1)
```
# Save data
```
with sqlite3.connect(db_name) as db:
V.to_sql('vocab', db, if_exists='replace', index=True)
K.to_sql('token', db, if_exists='replace', index=True)
D.to_sql('doc', db, if_exists='replace', index=True)
P.to_sql('docpair', db, if_exists='replace', index=True)
# BOW.to_frame().rename(columns={'term_id':'n'}).to_sql('bow', db, if_exists='replace', index=True)
TFIDF.stack().to_frame().rename(columns={0:'term_weight'})\
.to_sql('dtm_tfidf', db, if_exists='replace', index=True)
# END
```
|
github_jupyter
|
db_name = 'persuasion.db'
OHCO = ['chap_num', 'para_num', 'sent_num', 'token_num']
import sqlite3
import pandas as pd
import numpy as np
with sqlite3.connect(db_name) as db:
K = pd.read_sql('SELECT * FROM token', db, index_col=OHCO)
V = pd.read_sql('SELECT * FROM vocab', db, index_col='term_id')
WORDS = (K.punc == 0) & (K.num == 0) & K.term_id.isin(V[V.stop==0].index)
BOW = K[WORDS].groupby(OHCO[:1]+['term_id'])['term_id'].count()
DTM = BOW.unstack().fillna(0)
alpha = .000001 # We introduce an arbitrary smoothing value
alpha_sum = alpha * V.shape[0]
TF = DTM.apply(lambda x: (x + alpha) / (x.sum() + alpha_sum), axis=1)
N_docs = DTM.shape[0]
V['df'] = DTM[DTM > 0].count()
TFIDF = TF * np.log2(N_docs / V[V.stop==0]['df'])
THM = -(TF * np.log2(TF))
TFTH = TF.apply(lambda x: x * THM.sum(), 1)
V['tf_sum'] = TF.sum()
V['tf_mean'] = TF.mean()
V['tf_max'] = TF.max()
V['tfidf_sum'] = TFIDF.sum()
V['tfidf_mean'] = TFIDF.mean()
V['tfidf_max'] = TFIDF.max()
V['tfth_sum'] = TFTH.sum()
V['tfth_mean'] = TFTH.mean()
V['tfth_max'] = TFTH.max()
V['th_sum'] = THM.sum()
V['th_mean'] = THM.mean()
V['th_max'] = THM.max()
D = DTM.sum(1).astype('int').to_frame().rename(columns={0:'term_count'})
D['tf'] = D.term_count / D.term_count.sum()
chap_ids = D.index.tolist()
pairs = [(i,j) for i in chap_ids for j in chap_ids if j > i]
P = pd.DataFrame(pairs).reset_index(drop=True).set_index([0,1])
P.index.names = ['doc_x','doc_y']
def euclidean(row):
D1 = TFIDF.loc[row.name[0]]
D2 = TFIDF.loc[row.name[1]]
x = (D1 - D2)**2
y = x.sum()
z = np.sqrt(y)
return z
P['euclidean'] = 0
P['euclidean'] = P.apply(euclidean, 1)
def cosine(row):
D1 = TFIDF.loc[row.name[0]]
D2 = TFIDF.loc[row.name[1]]
x = D1 * D2
y = x.sum()
a = np.sqrt((D1**2).sum())
b = np.sqrt((D2**2).sum())
c = a * b
z = y / c
return z
P['cosine'] = P.apply(cosine, 1)
with sqlite3.connect(db_name) as db:
V.to_sql('vocab', db, if_exists='replace', index=True)
K.to_sql('token', db, if_exists='replace', index=True)
D.to_sql('doc', db, if_exists='replace', index=True)
P.to_sql('docpair', db, if_exists='replace', index=True)
# BOW.to_frame().rename(columns={'term_id':'n'}).to_sql('bow', db, if_exists='replace', index=True)
TFIDF.stack().to_frame().rename(columns={0:'term_weight'})\
.to_sql('dtm_tfidf', db, if_exists='replace', index=True)
# END
| 0.246896 | 0.848659 |
# Principal component analysis with Scaler
This code template is for simple Principal Component Analysis (PCA) combined with feature scaling via `scale` in Python, used as a dimensionality reduction technique. PCA decomposes a multivariate dataset into a set of successive orthogonal components that explain a maximum amount of the variance.
### Required Packages
```
import warnings
import itertools
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
from sklearn.decomposition import PCA
from sklearn.preprocessing import LabelEncoder, scale
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features which are required for model training.
```
#x_values
features= []
```
Target feature for prediction.
```
#y_value
target= ''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the `head` function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and the target/outcome to Y.
```
X = df[features]
Y = df[target]
```
### Data Preprocessing
Since the majority of the machine learning models in the sklearn library do not handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below defines functions that fill any null values and encode string categorical columns as dummy/indicator variables.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Rescaling
`scale` standardizes a dataset along any axis: it standardizes features by removing the mean and scaling to unit variance.
`scale` is similar to `StandardScaler` in terms of the feature transformation, but unlike `StandardScaler` it lacks the transformer API, i.e. it does not have `fit_transform`, `transform`, and other related methods.
[**For more Reference**](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.scale.html)
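For comparison, here is a minimal sketch of the `StandardScaler` route (a toy frame; it produces the same standardized values as `scale`, but keeps a fitted transformer that can later be reused on new data):
```
from sklearn.preprocessing import StandardScaler
import pandas as pd

toy = pd.DataFrame({'x1': [1.0, 2.0, 3.0], 'x2': [10.0, 20.0, 30.0]})

scaler = StandardScaler()
toy_scaled = pd.DataFrame(scaler.fit_transform(toy), columns=toy.columns)
print(toy_scaled)  # scale(toy) would give the same numbers, without a reusable fitted object
```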
```
X_Scaled=scale(X)
X_Scaled=pd.DataFrame(data = X_Scaled,columns = X.columns)
X_Scaled.head()
```
### Choosing the number of components
A vital part of using PCA in practice is the ability to estimate how many components are needed to describe the data. This can be determined by looking at the cumulative explained variance ratio as a function of the number of components.
This curve quantifies how much of the total variance is contained within the first N components.
```
pcaComponents = PCA().fit(X_Scaled)
plt.plot(np.cumsum(pcaComponents.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
```
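A common rule of thumb (the 95% threshold below is only an example, not part of the template) is to keep the smallest number of components whose cumulative explained variance crosses a chosen level; this sketch reuses `pcaComponents` fitted above:
```
import numpy as np

cum_var = np.cumsum(pcaComponents.explained_variance_ratio_)
n_keep = int(np.argmax(cum_var >= 0.95)) + 1  # first position crossing 95%, converted to a count
print(n_keep, 'components explain at least 95% of the variance')
```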
#### Scree plot
The scree plot helps you determine the optimal number of components. The explained variance of each component in the initial solution is plotted. Generally, you want to extract the components on the steep slope; the components on the shallow slope contribute little to the solution.
```
PC_values = np.arange(pcaComponents.n_components_) + 1
plt.plot(PC_values, pcaComponents.explained_variance_ratio_, 'ro-', linewidth=2)
plt.title('Scree Plot')
plt.xlabel('Principal Component')
plt.ylabel('Proportion of Variance Explained')
plt.show()
```
# Model
PCA is used to decompose a multivariate dataset into a set of successive orthogonal components that explain a maximum amount of the variance. In scikit-learn, PCA is implemented as a transformer object that learns components in its `fit` method, and can be used on new data to project it onto these components.
#### Tuning parameters reference:
[API](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html)
```
pca = PCA(n_components=8)
pcaX = pd.DataFrame(data = pca.fit_transform(X_Scaled))
```
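Because the fitted `pca` object stores the learned components, the same projection can be applied again (and inverted); a short sketch using the objects defined above:
```
# Project the scaled data onto the 8 components and reconstruct an approximation.
X_projected = pca.transform(X_Scaled)            # shape: (n_samples, 8)
X_reconstructed = pca.inverse_transform(X_projected)
print('Variance retained:', pca.explained_variance_ratio_.sum())
```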
#### Output Dataframe
```
finalDf = pd.concat([pcaX, Y], axis = 1)
finalDf.head()
```
#### Creator: Vikas Mishra , Github: [Profile](https://github.com/Vikaas08)
|
github_jupyter
|
import warnings
import itertools
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
from sklearn.decomposition import PCA
from sklearn.preprocessing import LabelEncoder, scale
warnings.filterwarnings('ignore')
#filepath
file_path= ""
#x_values
features= []
#y_value
target= ''
df=pd.read_csv(file_path)
df.head()
X = df[features]
Y = df[target]
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
X_Scaled=scale(X)
X_Scaled=pd.DataFrame(data = X_Scaled,columns = X.columns)
X_Scaled.head()
pcaComponents = PCA().fit(X_Scaled)
plt.plot(np.cumsum(pcaComponents.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
PC_values = np.arange(pcaComponents.n_components_) + 1
plt.plot(PC_values, pcaComponents.explained_variance_ratio_, 'ro-', linewidth=2)
plt.title('Scree Plot')
plt.xlabel('Principal Component')
plt.ylabel('Proportion of Variance Explained')
plt.show()
pca = PCA(n_components=8)
pcaX = pd.DataFrame(data = pca.fit_transform(X_Scaled))
finalDf = pd.concat([pcaX, Y], axis = 1)
finalDf.head()
| 0.320502 | 0.988777 |
```
%matplotlib inline
```
<h1 align="center">Simple Conditional Statements</h1>
<h2>01.Excellent Result</h2>
The first task of this topic is to write a console program that reads a grade (decimal number) and prints "**Excellent!**" if the grade is **5.50 or higher**.
```
num = float(input())
if num >= 5.50:
print("Excellent!")
```
<h2>02.Excellent or Not</h2>
The next task of this topic is to write a console program that reads a grade (decimal number)
and prints **"Excellent!"** if the grade is **5.50 or higher**, or **"Not excellent."** otherwise.
```
grade = float(input())
if grade >= 5.50:
print("Excellent!")
else:
print("Not excellent.")
```
<h1>03.Even or Odd</h1>
Write a program that reads an **integer** and prints whether it is **even** or **odd**.
```
num = int(input())
if num % 2 == 0:
print("even")
else:
print("odd")
```
<h2>04.Greater Number</h2>
Write a program that reads **two integers** and **prints the larger one**.
```
num_1 = int(input())
num_2 = int(input())
if num_1 >= num_2:
    print(num_1)
else:
    print(num_2)
```
<h2>05.Number 0...9 to Text</h2>
Write a program that reads an **integer** in the **range [0 ... 9]** and prints it **in English**.
If the number is **out of range**, it prints **"number too big"**.
```
num = int(input())
if num == 0:
print("zero")
elif num == 1:
print("one")
elif num == 2:
print("two")
elif num == 3:
print("three")
elif num == 4:
print("four")
elif num == 5:
print("five")
elif num == 6:
print("six")
elif num == 7:
print("seven")
elif num == 8:
print("eight")
elif num == 9:
print("nine")
else:
print("number too big")
```
<h2>06.Bonus Score</h2>
An integer number is given. Bonus points are awarded based on the rules described below.
Write a program that calculates the bonus points for that number and the total number of points including the bonuses.
1. If the number is up to 100 inclusive, the bonus points are 5.
2. If the number is greater than 100, the bonus points are 20% of the number.
3. If the number is greater than 1000, the bonus points are 10% of the number.
4. Additional bonus points (awarded separately from the previous ones):
 * For an even number: +1 point.
 * For a number that ends in 5: +2 points.
```
num = int(input())
bonus = 0
if num <= 100:
bonus += 5
elif num > 100 and num <= 1000:
    bonus += (num * 0.2)
elif num > 1000:
    bonus += (num * 0.1)
if num % 2 == 0:
bonus += 1
if num % 10 == 5:
bonus += 2
print(bonus)
print(bonus + num)
```
<h2>07.Sum Seconds</h2>
Three contestants each finish in some number of seconds (between 1 and 50).
Write a program that **reads the times** of the contestants and calculates their combined time in the **"minutes:seconds"** format.
Pad the seconds with a leading zero when needed.
```
first_Time = int(input())
second_Time = int(input())
third_Time = int(input())
total_Time = first_Time + second_Time + third_Time
minutes = int(total_Time / 60)
seconds = total_Time % 60
if total_Time < 60:
if total_Time <= 9:
print(f'0:0{seconds}')
else:
print(f'0:{seconds}')
elif total_Time >= 60:
if seconds <= 9:
print(f'{minutes}:0{seconds}')
else:
print(f'{minutes}:{seconds}')
```
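The manual leading-zero handling above can also be written with a format specifier; a short sketch that reuses `total_Time` from the cell above:
```
# Same output, using a zero-padding format specifier for the seconds.
print(f'{total_Time // 60}:{total_Time % 60:02d}')
```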
<h2>08.Metric Converter</h2>
Write a program that **converts a distance** between the following 8 units: **m, mm, cm, mi, in, km, ft, yd.**
```
input_num = float(input())
input_unit = input()
output_unit = input()
if input_unit == "mm":
if output_unit == "mm":
print(input_num * 1,"mm")
elif output_unit == "cm":
print(input_num / 10,"cm")
elif output_unit == "m":
print(input_num / 1000,"m")
elif output_unit == "mi":
print((input_num / 1000) * 0.000621371192,"mi")
elif output_unit == "in":
print((input_num / 1000) * 39.3700787,"in")
elif output_unit == "km":
print((input_num / 1000) * 0.001,"km")
elif output_unit == "ft":
print((input_num / 1000) * 3.2808399,"ft")
elif output_unit == "yd":
print((input_num / 1000) * 1.0936133,"yd")
elif input_unit == "cm":
if output_unit == "mm":
print(input_num * 10,"mm")
elif output_unit == "cm":
print(input_num * 1,"cm")
elif output_unit == "m":
print(input_num / 100,"m")
elif output_unit == "mi":
print((input_num / 100) * 0.000621371192,"mi")
elif output_unit == "in":
print((input_num / 100) * 39.3700787,"in")
elif output_unit == "km":
print((input_num / 100) * 0.001,"km")
elif output_unit == "ft":
print((input_num / 100) * 3.2808399,"ft")
elif output_unit == "yd":
print((input_num / 100) * 1.0936133,"yd")
elif input_unit == "mi":
if output_unit == "mm":
print((input_num * 1609.344)* 1000,"mm")
elif output_unit == "cm":
print((input_num * 1609.344) * 100,"cm")
elif output_unit == "m":
print(input_num * 1609.344,"m")
elif output_unit == "mi":
print(input_num * 1,"mi")
elif output_unit == "in":
print((input_num * 1609.344) * 39.3700787,"in")
elif output_unit == "km":
print((input_num * 1609.344) * 0.001,"km")
elif output_unit == "ft":
print((input_num * 1609.344) * 3.2808399,"ft")
elif output_unit == "yd":
print((input_num * 1609.344) * 1.0936133,"yd")
elif input_unit == "in":
if output_unit == "mm":
print((input_num * 0.0254)* 1000,"mm")
elif output_unit == "cm":
print((input_num * 0.0254) * 100,"cm")
elif output_unit == "m":
print(input_num * 0.0254,"m")
elif output_unit == "mi":
print((input_num * 0.0254) * 0.000621371192,"mi")
elif output_unit == "in":
print((input_num * 1),"in")
elif output_unit == "km":
print((input_num * 0.0254) * 0.001,"km")
elif output_unit == "ft":
print((input_num * 0.0254) * 3.2808399,"ft")
elif output_unit == "yd":
print((input_num * 0.0254) * 1.0936133,"yd")
elif input_unit == "km":
if output_unit == "mm":
print((input_num * 1000)* 1000,"mm")
elif output_unit == "cm":
print((input_num * 1000) * 100,"cm")
elif output_unit == "m":
print(input_num * 1000,"m")
elif output_unit == "mi":
print((input_num * 1000) * 0.000621371192,"mi")
elif output_unit == "in":
print((input_num * 1000) * 39.3700787,"in")
elif output_unit == "km":
print((input_num * 1),"km")
elif output_unit == "ft":
print((input_num * 1000) * 3.2808399,"ft")
elif output_unit == "yd":
print((input_num * 1000) * 1.0936133,"yd")
elif input_unit == "ft":
if output_unit == "mm":
print((input_num / 3.2808399)* 1000,"mm")
elif output_unit == "cm":
print((input_num / 3.2808399) * 100,"cm")
elif output_unit == "m":
print(input_num / 3.2808399,"m")
elif output_unit == "mi":
print((input_num / 3.2808399) * 0.000621371192,"mi")
elif output_unit == "in":
print((input_num / 3.2808399) * 39.3700787,"in")
elif output_unit == "km":
print((input_num / 3.2808399) * 0.001,"km")
elif output_unit == "ft":
print((input_num * 1),"ft")
elif output_unit == "yd":
print((input_num / 3.2808399) * 1.0936133,"yd")
elif input_unit == "yd":
if output_unit == "mm":
print((input_num / 1.0936133)* 1000,"mm")
elif output_unit == "cm":
print((input_num / 1.0936133) * 100,"cm")
elif output_unit == "m":
print(input_num / 1.0936133,"m")
elif output_unit == "mi":
print((input_num / 1.0936133) * 0.000621371192,"mi")
elif output_unit == "in":
print((input_num / 1.0936133) * 39.3700787,"in")
elif output_unit == "km":
print((input_num / 1.0936133) * 0.001,"km")
elif output_unit == "ft":
print((input_num / 1.0936133) * 3.2808399,"ft")
elif output_unit == "yd":
print((input_num / 1),"yd")
elif input_unit == "m":
if output_unit == "mm":
print(input_num * 1000,"mm")
elif output_unit == "cm":
print(input_num * 100,"cm")
elif output_unit == "m":
print(input_num * 1,"m")
elif output_unit == "mi":
print(input_num * 0.000621371192,"mi")
elif output_unit == "in":
print(input_num * 39.3700787,"in")
elif output_unit == "km":
print(input_num * 0.001,"km")
elif output_unit == "ft":
print(input_num * 3.2808399,"ft")
elif output_unit == "yd":
print(input_num * 1.0936133,"yd")
```
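The long if/elif chain above repeats the same conversion factors many times. A more compact alternative, sketched below and not part of the original solution, converts through meters with a single factor table; it uses the exact factors 0.3048 m per foot and 0.9144 m per yard, so its last decimal places may differ slightly from the rounded constants above.
```
# Sketch: convert any of the 8 units via meters using one factor table.
TO_METERS = {
    "mm": 0.001, "cm": 0.01, "m": 1.0, "km": 1000.0,
    "mi": 1609.344, "in": 0.0254, "ft": 0.3048, "yd": 0.9144,
}

value = float(input())
unit_in = input()
unit_out = input()

meters = value * TO_METERS[unit_in]    # normalize the input to meters
result = meters / TO_METERS[unit_out]  # express it in the target unit
print(result, unit_out)
```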
<h2>09.Password Guess</h2>
Write a program that **reads a password** (one line of any text) and
checks whether it matches the phrase **"s3cr3t!P@ssw0rd"**.
**If it matches**, print **"Welcome"**.
**If it does not match, print "Wrong password!"**
```
password = input()
if password == "s3cr3t!P@ssw0rd":
print("Welcome")
else:
print("Wrong password!")
```
<h2>10.Number 100...200</h2>
Write a program that reads an integer and checks whether it is less than 100, between 100 and 200, or greater than 200.
Print relevant messages as in the examples below:
```
num = int(input())
if num < 100:
print("Less than 100")
elif num >= 100 and num <= 200:
print("Between 100 and 200")
elif num > 200:
print("Greater than 200")
```
<h2>11.Equal Words</h2>
Write a program that reads **two words** and checks whether they **are the same**.
Ignore the difference between uppercase and lowercase letters. Print **"yes" or "no"**.
```
first_Word = input().lower()
second_Word = input().lower()
if first_Word == second_Word:
print("yes")
else:
print("no")
```
<h2>12.Speed Info</h2>
Write a program that reads a speed (a decimal number) and prints a description of that speed.
At speeds of **up to 10** (inclusive), print **"slow"**. At speeds over **10 and up to 50**, print **"average"**.
At speeds **over 50 and up to 150**, print **"fast"**. At speeds above **150 and up to 1000**, print **"ultra fast"**.
At **higher speeds**, print **"extremely fast"**.
```
speed = float(input())
if speed <= 10:
print("slow")
elif speed > 10 and speed <= 50:
print("average")
elif speed > 50 and speed <= 150:
print("fast")
elif speed > 150 and speed <= 1000:
print("ultra fast")
else:
print("extremely fast")
```
<h2>13.Area of Figures</h2>
Write a program that reads the dimensions of a geometric figure and calculates its area.
The figures are of four types: **a square, a rectangle, a circle, and a triangle**.
The first line of the input contains the type of the figure (square, rectangle, circle or triangle).
If the figure is a **square**, the next line contains **one number** - the length of its side.
If the figure is a **rectangle**, the next two lines contain **two numbers** - the lengths of its sides.
If the figure is a **circle**, the next line contains **one number** - the radius of the circle.
If the figure is a **triangle**, the next two lines contain **two numbers** - the length of one of
its sides and the length of the height to that side. Round the result to 3 digits after the decimal point.
```
import math
figure = input()
if figure == "square":
side = float(input())
area = side ** 2
print(format(area,'.3f'))
elif figure == "rectangle":
side_a = float(input())
side_b = float(input())
area = side_a * side_b
print(format(area,'.3f'))
elif figure == "circle":
radius = float(input())
area = radius ** 2 * math.pi
print(format(area,'.3f'))
elif figure == "triangle":
side = float(input())
height = float(input())
area = (side * height) / 2
print(format(area,'.3f'))
```
<h2>14.Time + 15 Minutes</h2>
Write a program that reads **hours and minutes from a 24-hour day** and calculates what the time will be **15 minutes later**. The result is printed in hh:mm format.
Hours are always between 0 and 23; minutes are always between 0 and 59.
Hours are written with one or two digits. Minutes are always written with two digits, with a leading zero when needed.
```
hours = int(input())
minutes = int(input())
minutes += 15
if minutes >= 60:
minutes %= 60
hours += 1
if hours >= 24:
hours -= 24
if minutes <= 9:
print(f'{hours}:0{minutes}')
else:
print(f'{hours}:{minutes}')
else:
if minutes <= 9:
print(f'{hours}:0{minutes}')
else:
print(f'{hours}:{minutes}')
else:
print(f'{hours}:{minutes}')
```
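The nested checks above pad the minutes by hand; Python's format specifiers can do the padding directly. A minimal sketch of the same logic, assuming the same input format:
```
# Sketch: same task using minutes-since-midnight and a zero-padded format
hours = int(input())
minutes = int(input())

total = hours * 60 + minutes + 15        # add the 15 minutes
hours, minutes = (total // 60) % 24, total % 60

# {minutes:02d} always prints two digits; hours keep one or two digits
print(f'{hours}:{minutes:02d}')
```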
<h2>15.3 Equal Numbers</h2>
Read 3 numbers and print whether they are all the same (yes / no).
```
first_num = int(input())
second_num = int(input())
third_num = int(input())
if first_num == second_num == third_num:
    print("yes")
else:
print("no")
```
|
github_jupyter
|
%matplotlib inline
num = float(input())
if num >= 5.50:
print("Excellent!")
grade = float(input())
if grade >= 5.50:
print("Excellent!")
else:
print("Not excellent.")
num = int(input())
if num % 2 == 0:
print("even")
else:
print("odd")
num = int(input())
if num == 0:
print("zero")
elif num == 1:
print("one")
elif num == 2:
print("two")
elif num == 3:
print("three")
elif num == 4:
print("four")
elif num == 5:
print("five")
elif num == 6:
print("six")
elif num == 7:
print("seven")
elif num == 8:
print("eight")
elif num == 9:
print("nine")
else:
print("number too big")
num = int(input())
bonus = 0
if num <= 100:
bonus += 5
elif num > 100 and num < 1000:
bonus += (num * 0.2)
elif num >= 1000:
bonus += (num * 0.1)
if num % 2 == 0:
bonus += 1
if num % 10 == 5:
bonus += 2
print(bonus)
print(bonus + num)
first_Time = int(input())
second_Time = int(input())
third_Time = int(input())
total_Time = first_Time + second_Time + third_Time
minutes = int(total_Time / 60)
seconds = total_Time % 60
if total_Time < 60:
if total_Time <= 9:
print(f'0:0{seconds}')
else:
print(f'0:{seconds}')
elif total_Time >= 60:
if seconds <= 9:
print(f'{minutes}:0{seconds}')
else:
print(f'{minutes}:{seconds}')
input_num = float(input())
input_unit = input()
output_unit = input()
if input_unit == "mm":
if output_unit == "mm":
print(input_num * 1,"mm")
elif output_unit == "cm":
print(input_num / 10,"cm")
elif output_unit == "m":
print(input_num / 1000,"m")
elif output_unit == "mi":
print((input_num / 1000) * 0.000621371192,"mi")
elif output_unit == "in":
print((input_num / 1000) * 39.3700787,"in")
elif output_unit == "km":
print((input_num / 1000) * 0.001,"km")
elif output_unit == "ft":
print((input_num / 1000) * 3.2808399,"ft")
elif output_unit == "yd":
print((input_num / 1000) * 1.0936133,"yd")
elif input_unit == "cm":
if output_unit == "mm":
print(input_num * 10,"mm")
elif output_unit == "cm":
print(input_num * 1,"cm")
elif output_unit == "m":
print(input_num / 100,"m")
elif output_unit == "mi":
print((input_num / 100) * 0.000621371192,"mi")
elif output_unit == "in":
print((input_num / 100) * 39.3700787,"in")
elif output_unit == "km":
print((input_num / 100) * 0.001,"km")
elif output_unit == "ft":
print((input_num / 100) * 3.2808399,"ft")
elif output_unit == "yd":
print((input_num / 100) * 1.0936133,"yd")
elif input_unit == "mi":
if output_unit == "mm":
print((input_num * 1609.344)* 1000,"mm")
elif output_unit == "cm":
print((input_num * 1609.344) * 100,"cm")
elif output_unit == "m":
print(input_num * 1609.344,"m")
elif output_unit == "mi":
print(input_num * 1,"mi")
elif output_unit == "in":
print((input_num * 1609.344) * 39.3700787,"in")
elif output_unit == "km":
print((input_num * 1609.344) * 0.001,"km")
elif output_unit == "ft":
print((input_num * 1609.344) * 3.2808399,"ft")
elif output_unit == "yd":
print((input_num * 1609.344) * 1.0936133,"yd")
elif input_unit == "in":
if output_unit == "mm":
print((input_num * 0.0254)* 1000,"mm")
elif output_unit == "cm":
print((input_num * 0.0254) * 100,"cm")
elif output_unit == "m":
print(input_num * 0.0254,"m")
elif output_unit == "mi":
print((input_num * 0.0254) * 0.000621371192,"mi")
elif output_unit == "in":
print((input_num * 1),"in")
elif output_unit == "km":
print((input_num * 0.0254) * 0.001,"km")
elif output_unit == "ft":
print((input_num * 0.0254) * 3.2808399,"ft")
elif output_unit == "yd":
print((input_num * 0.0254) * 1.0936133,"yd")
elif input_unit == "km":
if output_unit == "mm":
print((input_num * 1000)* 1000,"mm")
elif output_unit == "cm":
print((input_num * 1000) * 100,"cm")
elif output_unit == "m":
print(input_num * 1000,"m")
elif output_unit == "mi":
print((input_num * 1000) * 0.000621371192,"mi")
elif output_unit == "in":
print((input_num * 1000) * 39.3700787,"in")
elif output_unit == "km":
print((input_num * 1),"km")
elif output_unit == "ft":
print((input_num * 1000) * 3.2808399,"ft")
elif output_unit == "yd":
print((input_num * 1000) * 1.0936133,"yd")
elif input_unit == "ft":
if output_unit == "mm":
print((input_num / 3.2808399)* 1000,"mm")
elif output_unit == "cm":
print((input_num / 3.2808399) * 100,"cm")
elif output_unit == "m":
print(input_num / 3.2808399,"m")
elif output_unit == "mi":
print((input_num / 3.2808399) * 0.000621371192,"mi")
elif output_unit == "in":
print((input_num / 3.2808399) * 39.3700787,"in")
elif output_unit == "km":
print((input_num / 3.2808399) * 0.001,"km")
elif output_unit == "ft":
print((input_num * 1),"ft")
elif output_unit == "yd":
print((input_num / 3.2808399) * 1.0936133,"yd")
elif input_unit == "yd":
if output_unit == "mm":
print((input_num / 1.0936133)* 1000,"mm")
elif output_unit == "cm":
print((input_num / 1.0936133) * 100,"cm")
elif output_unit == "m":
print(input_num / 1.0936133,"m")
elif output_unit == "mi":
print((input_num / 1.0936133) * 0.000621371192,"mi")
elif output_unit == "in":
print((input_num / 1.0936133) * 39.3700787,"in")
elif output_unit == "km":
print((input_num / 1.0936133) * 0.001,"km")
elif output_unit == "ft":
print((input_num / 1.0936133) * 3.2808399,"ft")
elif output_unit == "yd":
print((input_num / 1),"yd")
elif input_unit == "m":
if output_unit == "mm":
print(input_num * 1000,"mm")
elif output_unit == "cm":
print(input_num * 100,"cm")
elif output_unit == "m":
print(input_num * 1,"m")
elif output_unit == "mi":
print(input_num * 0.000621371192,"mi")
elif output_unit == "in":
print(input_num * 39.3700787,"in")
elif output_unit == "km":
print(input_num * 0.001,"km")
elif output_unit == "ft":
print(input_num * 3.2808399,"ft")
elif output_unit == "yd":
print(input_num * 1.0936133,"yd")
password = input()
if password == "s3cr3t!P@ssw0rd":
print("Welcome")
else:
print("Wrong password!")
num = int(input())
if num < 100:
print("Less than 100")
elif num >= 100 and num <= 200:
print("Between 100 and 200")
elif num > 200:
print("Greater than 200")
first_Word = input().lower()
second_Word = input().lower()
if first_Word == second_Word:
print("yes")
else:
print("no")
speed = float(input())
if speed <= 10:
print("slow")
elif speed > 10 and speed <= 50:
print("average")
elif speed > 50 and speed <= 150:
print("fast")
elif speed > 150 and speed <= 1000:
print("ultra fast")
else:
print("extremely fast")
import math
figure = input()
if figure == "square":
side = float(input())
area = side ** 2
print(format(area,'.3f'))
elif figure == "rectangle":
side_a = float(input())
side_b = float(input())
area = side_a * side_b
print(format(area,'.3f'))
elif figure == "circle":
radius = float(input())
area = radius ** 2 * math.pi
print(format(area,'.3f'))
elif figure == "triangle":
side = float(input())
height = float(input())
area = (side * height) / 2
print(format(area,'.3f'))
hours = int(input())
minutes = int(input())
minutes += 15
if minutes >= 60:
minutes %= 60
hours += 1
if hours >= 24:
hours -= 24
if minutes <= 9:
print(f'{hours}:0{minutes}')
else:
print(f'{hours}:{minutes}')
else:
if minutes <= 9:
print(f'{hours}:0{minutes}')
else:
print(f'{hours}:{minutes}')
else:
print(f'{hours}:{minutes}')
first_num = int(input())
second_num = int(input())
third_num = int(input())
if first_num == second_num == third_num:
    print("yes")
else:
print("no")
| 0.111265 | 0.922447 |
```
%matplotlib inline
from pylab import plot,ylim,xlabel,ylabel,show
from numpy import linspace,sin,cos
x = linspace(0,10,100)
y1 = sin(x)
y2 = cos(x)
plot(x,y1,"k-")
plot(x,y2,"k--")
ylim(-1.1,1.1)
xlabel("x axis")
ylabel("y = sin x or y = cos x")
%matplotlib inline
from matplotlib.pyplot import imshow
from PIL import Image, ImageDraw
w, h = 1200,1200
# create a new image with a black background
img = Image.new('RGB',(w,h),(0,0,0))
draw = ImageDraw.Draw(img)
# draw axis
draw.line((0,h/2,w,h/2),fill=(255,255,255))
draw.line((w/2,0,w/2,h),fill=(255,255,255))
imshow(img)
from PIL import Image, ImageDraw
im = Image.new('RGBA', (400, 400), (0, 0, 0, 0))
draw = ImageDraw.Draw(im)
draw.line((100,200, 150,300), fill=(255,0,0))
imshow(im)
import pylab #Imports matplotlib and a host of other useful modules
cir1 = pylab.Circle((0,0), radius=0.75, fc='y') #Creates a patch that looks like a circle (fc= face color)
cir2 = pylab.Circle((.5,.5), radius=0.25, alpha =.2, fc='b') #Repeat (alpha=.2 means make it very translucent)
ax = pylab.axes(aspect=1) #Creates empty axes (aspect=1 means scale things so that circles look like circles)
ax.add_patch(cir1) #Grab the current axes, add the patch to it
ax.add_patch(cir2) #Repeat
pylab.show()
# Import matplotlib (plotting) and numpy (numerical arrays).
# This enables their use in the Notebook.
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# Create an array of 30 values for x equally spaced from 0 to 5.
x = np.linspace(0, 5, 30)
y = x**2
# Plot y versus x
fig, ax = plt.subplots(nrows=1, ncols=1)
ax.plot(x, y, color='red')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_title('A simple graph of $y=x^2$');
# Import matplotlib (plotting) and numpy (numerical arrays).
# This enables their use in the Notebook.
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# Import IPython's interact function which is used below to
# build the interactive widgets
from IPython.html.widgets import interact
def plot_sine(frequency=4.0, grid_points=12, plot_original=True):
"""
Plot discrete samples of a sine wave on the interval ``[0, 1]``.
"""
x = np.linspace(0, 1, grid_points + 2)
y = np.sin(2 * frequency * np.pi * x)
xf = np.linspace(0, 1, 1000)
yf = np.sin(2 * frequency * np.pi * xf)
fig, ax = plt.subplots(figsize=(8, 6))
ax.set_xlabel('x')
ax.set_ylabel('signal')
ax.set_title('Aliasing in discretely sampled periodic signal')
if plot_original:
ax.plot(xf, yf, color='red', linestyle='solid', linewidth=2)
ax.plot(x, y, marker='o', linewidth=2)
# The interact function automatically builds a user interface for exploring the
# plot_sine function.
interact(plot_sine, frequency=(1.0, 22.0, 0.5), grid_points=(10, 16, 1), plot_original=True);
# Import matplotlib (plotting), skimage (image processing) and interact (user interfaces)
# This enables their use in the Notebook.
%matplotlib inline
from matplotlib import pyplot as plt
from skimage import data
from skimage.feature import blob_doh
from skimage.color import rgb2gray
from IPython.html.widgets import interact, fixed
# Extract the first 500px square of the Hubble Deep Field.
image = data.hubble_deep_field()[0:500, 0:500]
image_gray = rgb2gray(image)
def plot_blobs(max_sigma=30, threshold=0.1, gray=False):
"""
Plot the image and the blobs that have been found.
"""
blobs = blob_doh(image_gray, max_sigma=max_sigma, threshold=threshold)
fig, ax = plt.subplots(figsize=(8,8))
ax.set_title('Galaxies in the Hubble Deep Field')
if gray:
ax.imshow(image_gray, interpolation='nearest', cmap='gray_r')
circle_color = 'red'
else:
ax.imshow(image, interpolation='nearest')
circle_color = 'yellow'
for blob in blobs:
y, x, r = blob
c = plt.Circle((x, y), r, color=circle_color, linewidth=2, fill=False)
ax.add_patch(c)
# Use interact to explore the galaxy detection algorithm.
interact(plot_blobs, max_sigma=(10, 40, 2), threshold=(0.005, 0.02, 0.001));
```
|
github_jupyter
|
%matplotlib inline
from pylab import plot,ylim,xlabel,ylabel,show
from numpy import linspace,sin,cos
x = linspace(0,10,100)
y1 = sin(x)
y2 = cos(x)
plot(x,y1,"k-")
plot(x,y2,"k--")
ylim(-1.1,1.1)
xlabel("x axis")
ylabel("y = sin x or y = cos x")
%matplotlib inline
from matplotlib.pyplot import imshow
from PIL import Image, ImageDraw
w, h = 1200,1200
# create a new image with a black background
img = Image.new('RGB',(w,h),(0,0,0))
draw = ImageDraw.Draw(img)
# draw axis
draw.line((0,h/2,w,h/2),fill=(255,255,255))
draw.line((w/2,0,w/2,h),fill=(255,255,255))
imshow(img)
from PIL import Image, ImageDraw
im = Image.new('RGBA', (400, 400), (0, 0, 0, 0))
draw = ImageDraw.Draw(im)
draw.line((100,200, 150,300), fill=(255,0,0))
imshow(im)
import pylab #Imports matplotlib and a host of other useful modules
cir1 = pylab.Circle((0,0), radius=0.75, fc='y') #Creates a patch that looks like a circle (fc= face color)
cir2 = pylab.Circle((.5,.5), radius=0.25, alpha =.2, fc='b') #Repeat (alpha=.2 means make it very translucent)
ax = pylab.axes(aspect=1) #Creates empty axes (aspect=1 means scale things so that circles look like circles)
ax.add_patch(cir1) #Grab the current axes, add the patch to it
ax.add_patch(cir2) #Repeat
pylab.show()
# Import matplotlib (plotting) and numpy (numerical arrays).
# This enables their use in the Notebook.
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# Create an array of 30 values for x equally spaced from 0 to 5.
x = np.linspace(0, 5, 30)
y = x**2
# Plot y versus x
fig, ax = plt.subplots(nrows=1, ncols=1)
ax.plot(x, y, color='red')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_title('A simple graph of $y=x^2$');
# Import matplotlib (plotting) and numpy (numerical arrays).
# This enables their use in the Notebook.
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# Import IPython's interact function which is used below to
# build the interactive widgets
from IPython.html.widgets import interact
def plot_sine(frequency=4.0, grid_points=12, plot_original=True):
"""
Plot discrete samples of a sine wave on the interval ``[0, 1]``.
"""
x = np.linspace(0, 1, grid_points + 2)
y = np.sin(2 * frequency * np.pi * x)
xf = np.linspace(0, 1, 1000)
yf = np.sin(2 * frequency * np.pi * xf)
fig, ax = plt.subplots(figsize=(8, 6))
ax.set_xlabel('x')
ax.set_ylabel('signal')
ax.set_title('Aliasing in discretely sampled periodic signal')
if plot_original:
ax.plot(xf, yf, color='red', linestyle='solid', linewidth=2)
ax.plot(x, y, marker='o', linewidth=2)
# The interact function automatically builds a user interface for exploring the
# plot_sine function.
interact(plot_sine, frequency=(1.0, 22.0, 0.5), grid_points=(10, 16, 1), plot_original=True);
# Import matplotlib (plotting), skimage (image processing) and interact (user interfaces)
# This enables their use in the Notebook.
%matplotlib inline
from matplotlib import pyplot as plt
from skimage import data
from skimage.feature import blob_doh
from skimage.color import rgb2gray
from IPython.html.widgets import interact, fixed
# Extract the first 500px square of the Hubble Deep Field.
image = data.hubble_deep_field()[0:500, 0:500]
image_gray = rgb2gray(image)
def plot_blobs(max_sigma=30, threshold=0.1, gray=False):
"""
Plot the image and the blobs that have been found.
"""
blobs = blob_doh(image_gray, max_sigma=max_sigma, threshold=threshold)
fig, ax = plt.subplots(figsize=(8,8))
ax.set_title('Galaxies in the Hubble Deep Field')
if gray:
ax.imshow(image_gray, interpolation='nearest', cmap='gray_r')
circle_color = 'red'
else:
ax.imshow(image, interpolation='nearest')
circle_color = 'yellow'
for blob in blobs:
y, x, r = blob
c = plt.Circle((x, y), r, color=circle_color, linewidth=2, fill=False)
ax.add_patch(c)
# Use interact to explore the galaxy detection algorithm.
interact(plot_blobs, max_sigma=(10, 40, 2), threshold=(0.005, 0.02, 0.001));
| 0.801664 | 0.664859 |
```
#hide
!pip install -Uqq fastbook
import fastbook
fastbook.setup_book()
#hide
from fastbook import *
from fastai.vision.widgets import *
```
# OfflineTV who?
> A model to tell you which OfflineTV member is in the photo
- toc: true
- badges: true
- comments: true
- categories: [jupyter]
With our previous model, we failed to build an accurate categorical model. So this time we'll define proper categories by starting from a proper objective.
- Objective: To identify the face of the OfflineTV member in the photo and give their name.
- Levers: Categories of OfflineTV members and threshold of acceptance
- Data: use Bing Image API with queries of "[name]"
My main fear is getting the data... a lot of the data might contain more than one face and may not even be their face or may be a thumbnail for a video. But, that's what the cleaning process is for :).
So, we set our key, create a tuple for our categories and set our path. I'm sadly going to have to exclude Brodin Plett from the list because of the lack of his [photos](https://www.bing.com/images/search?q=brodin%20plett&qs=n&form=QBIR&sp=-1&pq=brodin%20plett).
```
key = os.environ.get('AZURE_SEARCH_KEY', 'KEY')
names = 'Scarra', 'Pokimane', 'LilyPichu', 'Disguised Toast Jeremy Wang', 'Yvonnie offlinetv', 'Michael Reeves'
path = Path('otv')
```
Now we can download the data. We'll only download 40 images of each instead of 150 because it's difficult to get relevant data through Bing (even with 40, there will be some irrelevant images):
```
if not path.exists():
path.mkdir()
for o in names:
dest = (path/o)
dest.mkdir(exist_ok = True)
results = search_images_bing(key, o, max_images = 40)
download_images(dest, urls = results.attrgot('contentUrl'))
```
Check if they were downloaded properly:
```
fns = get_image_files(path)
fns
```
We see they downloaded properly and that they have their own folders.
Check for corrupt files:
```
failed = verify_images(fns)
failed
```
There are no corrupt files, so no further action is needed.
Now we create the `DataBlock`:
```
otv = DataBlock(
blocks = (ImageBlock, CategoryBlock),
get_items = get_image_files,
splitter = RandomSplitter(valid_pct = 0.2, seed = 42),
get_y = parent_label,
item_tfms = RandomResizedCrop(224, min_scale = 0.5),
batch_tfms = aug_transforms()
)
```
Note that we already applied data augmentation to it. Now we create the dataloaders:
```
dls = otv.dataloaders(path)
dls.valid.show_batch(max_n = 8, nrows = 1)
```
We'll use resnet34 as our architecture and fine-tune it for 10 epochs. With little data and this many epochs we may overfit, but that could be acceptable, since we mainly need the model to memorize their faces (and to keep the exported `.pkl` under 100 MB).
```
learn = cnn_learner(dls, resnet34, metrics = error_rate)
learn.fine_tune(10)
```
The error_rate will be high initially because the data has many irrelevant images. So, we'll clean the data using the `ImageClassifierCleaner` GUI.
```
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
```
The confusion matrix is mostly diagonal, but some images are still being wrongly predicted.
```
interp.plot_top_losses(4, nrows = 2)
```
After a large number of reruns, I think we finally got a suitable model. We can export it and create an application for it.
You can find the application [here](https://mybinder.org/v2/gh/griolu/offlinetv_who/main?urlpath=%2Fvoila%2Frender%2Fofflinetv_who.ipynb).
```
#hide
cleaner = ImageClassifierCleaner(learn)
cleaner
#hide
for i in cleaner.delete(): cleaner.fns[i].unlink()
for i,c in cleaner.change(): shutil.move(str(cleaner.fns[i]), path/c)
#hide
path = Path()
learn.export()
#hide
from google.colab import files
files.download('export.pkl')
```
|
github_jupyter
|
#hide
!pip install -Uqq fastbook
import fastbook
fastbook.setup_book()
#hide
from fastbook import *
from fastai.vision.widgets import *
key = os.environ.get('AZURE_SEARCH_KEY', 'KEY')
names = 'Scarra', 'Pokimane', 'LilyPichu', 'Disguised Toast Jeremy Wang', 'Yvonnie offlinetv', 'Michael Reeves'
path = Path('otv')
if not path.exists():
path.mkdir()
for o in names:
dest = (path/o)
dest.mkdir(exist_ok = True)
results = search_images_bing(key, o, max_images = 40)
download_images(dest, urls = results.attrgot('contentUrl'))
fns = get_image_files(path)
fns
failed = verify_images(fns)
failed
otv = DataBlock(
blocks = (ImageBlock, CategoryBlock),
get_items = get_image_files,
splitter = RandomSplitter(valid_pct = 0.2, seed = 42),
get_y = parent_label,
item_tfms = RandomResizedCrop(224, min_scale = 0.5),
batch_tfms = aug_transforms()
)
dls = otv.dataloaders(path)
dls.valid.show_batch(max_n = 8, nrows = 1)
learn = cnn_learner(dls, resnet34, metrics = error_rate)
learn.fine_tune(10)
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
interp.plot_top_losses(4, nrows = 2)
#hide
cleaner = ImageClassifierCleaner(learn)
cleaner
#hide
for i in cleaner.delete(): cleaner.fns[i].unlink()
for i,c in cleaner.change(): shutil.move(str(cleaner.fns[i]), path/c)
#hide
path = Path()
learn.export()
#hide
from google.colab import files
files.download('export.pkl')
| 0.275519 | 0.851398 |
```
# %% writefile solver.py
# %load solver.py
import numpy as np
from deeplearning import optim
class Solver(object):
"""
A Solver encapsulates all the logic necessary for training classification
models. The Solver performs stochastic gradient descent using different
update rules defined in optim.py.
    The solver accepts both training and validation data and labels so it can
periodically check classification accuracy on both training and validation
data to watch out for overfitting.
To train a model, you will first construct a Solver instance, passing the
    model, dataset, and various options (learning rate, batch size, etc) to the
constructor. You will then call the train() method to run the optimization
procedure and train the model.
After the train() method returns, model.params will contain the parameters
that performed best on the validation set over the course of training.
In addition, the instance variable solver.loss_history will contain a list
of all losses encountered during training and the instance variables
solver.train_acc_history and solver.val_acc_history will be lists containing
the accuracies of the model on the training and validation set at each epoch.
Example usage might look something like this:
data = {
'X_train': # training data
'y_train': # training labels
'X_val': # validation data
      'y_val': # validation labels
}
model = MyAwesomeModel(hidden_size=100, reg=10)
solver = Solver(model, data,
update_rule='sgd',
optim_config={
'learning_rate': 1e-3,
},
lr_decay=0.95,
num_epochs=10, batch_size=100,
print_every=100)
solver.train()
A Solver works on a model object that must conform to the following API:
- model.params must be a dictionary mapping string parameter names to numpy
arrays containing parameter values.
- model.loss(X, y) must be a function that computes training-time loss and
gradients, and test-time classification scores, with the following inputs
and outputs:
Inputs:
- X: Array giving a minibatch of input data of shape (N, d_1, ..., d_k)
- y: Array of labels, of shape (N,) giving labels for X where y[i] is the
label for X[i].
Returns:
If y is None, run a test-time forward pass and return:
- scores: Array of shape (N, C) giving classification scores for X where
scores[i, c] gives the score of class c for X[i].
If y is not None, run a training time forward and backward pass and return
a tuple of:
- loss: Scalar giving the loss
- grads: Dictionary with the same keys as self.params mapping parameter
names to gradients of the loss with respect to those parameters.
"""
def __init__(self, model, data, **kwargs):
"""
Construct a new Solver instance.
Required arguments:
- model: A model object conforming to the API described above
- data: A dictionary of training and validation data with the following:
'X_train': Array of shape (N_train, d_1, ..., d_k) giving training images
'X_val': Array of shape (N_val, d_1, ..., d_k) giving validation images
'y_train': Array of shape (N_train,) giving labels for training images
'y_val': Array of shape (N_val,) giving labels for validation images
Optional arguments:
- update_rule: A string giving the name of an update rule in optim.py.
Default is 'sgd'.
- optim_config: A dictionary containing hyperparameters that will be
passed to the chosen update rule. Each update rule requires different
hyperparameters (see optim.py) but all update rules require a
'learning_rate' parameter so that should always be present.
- lr_decay: A scalar for learning rate decay; after each epoch the learning
rate is multiplied by this value.
- batch_size: Size of minibatches used to compute loss and gradient during
training.
- num_epochs: The number of epochs to run for during training.
- print_every: Integer; training losses will be printed every print_every
iterations.
- verbose: Boolean; if set to false then no output will be printed during
training.
"""
self.model = model
self.X_train = data['X_train']
self.y_train = data['y_train']
self.X_val = data['X_val']
self.y_val = data['y_val']
# Unpack keyword arguments
self.update_rule = kwargs.pop('update_rule', 'sgd')
self.optim_config = kwargs.pop('optim_config', {})
self.lr_decay = kwargs.pop('lr_decay', 1.0)
self.batch_size = kwargs.pop('batch_size', 100)
self.num_epochs = kwargs.pop('num_epochs', 10)
self.print_every = kwargs.pop('print_every', 10)
self.verbose = kwargs.pop('verbose', True)
# Throw an error if there are extra keyword arguments
if len(kwargs) > 0:
extra = ', '.join('"%s"' % k for k in kwargs.keys())
raise ValueError('Unrecognized arguments %s' % extra)
# Make sure the update rule exists, then replace the string
# name with the actual function
if not hasattr(optim, self.update_rule):
raise ValueError('Invalid update_rule "%s"' % self.update_rule)
self.update_rule = getattr(optim, self.update_rule)
self._reset()
def _reset(self):
"""
Set up some book-keeping variables for optimization. Don't call this
manually.
"""
# Set up some variables for book-keeping
self.epoch = 0
self.best_val_acc = 0
self.best_params = {}
self.loss_history = []
self.train_acc_history = []
self.val_acc_history = []
# Make a deep copy of the optim_config for each parameter
self.optim_configs = {}
for p in self.model.params:
d = {k: v for k, v in self.optim_config.iteritems()}
self.optim_configs[p] = d
def _step(self):
"""
Make a single gradient update. This is called by train() and should not
be called manually.
"""
# Make a minibatch of training data
num_train = self.X_train.shape[0]
batch_mask = np.random.choice(num_train, self.batch_size)
X_batch = self.X_train[batch_mask]
y_batch = self.y_train[batch_mask]
# Compute loss and gradient
loss, grads = self.model.loss(X_batch, y_batch)
self.loss_history.append(loss)
# Perform a parameter update
for p, w in self.model.params.iteritems():
dw = grads[p]
config = self.optim_configs[p]
next_w, next_config = self.update_rule(w, dw, config)
self.model.params[p] = next_w
self.optim_configs[p] = next_config
def check_accuracy(self, X, y, num_samples=None, batch_size=100):
"""
Check accuracy of the model on the provided data.
Inputs:
- X: Array of data, of shape (N, d_1, ..., d_k)
- y: Array of labels, of shape (N,)
- num_samples: If not None, subsample the data and only test the model
on num_samples datapoints.
- batch_size: Split X and y into batches of this size to avoid using too
much memory.
Returns:
- acc: Scalar giving the fraction of instances that were correctly
classified by the model.
"""
# Maybe subsample the data
N = X.shape[0]
if num_samples is not None and N > num_samples:
mask = np.random.choice(N, num_samples)
N = num_samples
X = X[mask]
y = y[mask]
# Compute predictions in batches
num_batches = N / batch_size
if N % batch_size != 0:
num_batches += 1
y_pred = []
for i in xrange(num_batches):
start = i * batch_size
end = (i + 1) * batch_size
scores = self.model.loss(X[start:end])
y_pred.append(np.argmax(scores, axis=1))
y_pred = np.hstack(y_pred)
acc = np.mean(y_pred == y)
return acc
def train(self):
"""
Run optimization to train the model.
"""
num_train = self.X_train.shape[0]
iterations_per_epoch = max(num_train / self.batch_size, 1)
num_iterations = self.num_epochs * iterations_per_epoch
for t in xrange(num_iterations):
self._step()
# Maybe print training loss
if self.verbose and t % self.print_every == 0:
print '(Iteration %d / %d) loss: %f' % (
t + 1, num_iterations, self.loss_history[-1])
# At the end of every epoch, increment the epoch counter and decay the
# learning rate.
epoch_end = (t + 1) % iterations_per_epoch == 0
if epoch_end:
self.epoch += 1
for k in self.optim_configs:
self.optim_configs[k]['learning_rate'] *= self.lr_decay
# Check train and val accuracy on the first iteration, the last
# iteration, and at the end of each epoch.
first_it = (t == 0)
            last_it = (t == num_iterations - 1)
if first_it or last_it or epoch_end:
train_acc = self.check_accuracy(self.X_train, self.y_train,
num_samples=1000)
val_acc = self.check_accuracy(self.X_val, self.y_val)
self.train_acc_history.append(train_acc)
self.val_acc_history.append(val_acc)
if self.verbose:
print '(Epoch %d / %d) train acc: %f; val_acc: %f' % (
self.epoch, self.num_epochs, train_acc, val_acc)
# Keep track of the best model
if val_acc > self.best_val_acc:
self.best_val_acc = val_acc
self.best_params = {}
for k, v in self.model.params.iteritems():
self.best_params[k] = v.copy()
# At the end of training swap the best params into the model
self.model.params = self.best_params
```
|
github_jupyter
|
# %% writefile solver.py
# %load solver.py
import numpy as np
from deeplearning import optim
class Solver(object):
"""
A Solver encapsulates all the logic necessary for training classification
models. The Solver performs stochastic gradient descent using different
update rules defined in optim.py.
The solver accepts both training and validataion data and labels so it can
periodically check classification accuracy on both training and validation
data to watch out for overfitting.
To train a model, you will first construct a Solver instance, passing the
model, dataset, and various optoins (learning rate, batch size, etc) to the
constructor. You will then call the train() method to run the optimization
procedure and train the model.
After the train() method returns, model.params will contain the parameters
that performed best on the validation set over the course of training.
In addition, the instance variable solver.loss_history will contain a list
of all losses encountered during training and the instance variables
solver.train_acc_history and solver.val_acc_history will be lists containing
the accuracies of the model on the training and validation set at each epoch.
Example usage might look something like this:
data = {
'X_train': # training data
'y_train': # training labels
'X_val': # validation data
'X_train': # validation labels
}
model = MyAwesomeModel(hidden_size=100, reg=10)
solver = Solver(model, data,
update_rule='sgd',
optim_config={
'learning_rate': 1e-3,
},
lr_decay=0.95,
num_epochs=10, batch_size=100,
print_every=100)
solver.train()
A Solver works on a model object that must conform to the following API:
- model.params must be a dictionary mapping string parameter names to numpy
arrays containing parameter values.
- model.loss(X, y) must be a function that computes training-time loss and
gradients, and test-time classification scores, with the following inputs
and outputs:
Inputs:
- X: Array giving a minibatch of input data of shape (N, d_1, ..., d_k)
- y: Array of labels, of shape (N,) giving labels for X where y[i] is the
label for X[i].
Returns:
If y is None, run a test-time forward pass and return:
- scores: Array of shape (N, C) giving classification scores for X where
scores[i, c] gives the score of class c for X[i].
If y is not None, run a training time forward and backward pass and return
a tuple of:
- loss: Scalar giving the loss
- grads: Dictionary with the same keys as self.params mapping parameter
names to gradients of the loss with respect to those parameters.
"""
def __init__(self, model, data, **kwargs):
"""
Construct a new Solver instance.
Required arguments:
- model: A model object conforming to the API described above
- data: A dictionary of training and validation data with the following:
'X_train': Array of shape (N_train, d_1, ..., d_k) giving training images
'X_val': Array of shape (N_val, d_1, ..., d_k) giving validation images
'y_train': Array of shape (N_train,) giving labels for training images
'y_val': Array of shape (N_val,) giving labels for validation images
Optional arguments:
- update_rule: A string giving the name of an update rule in optim.py.
Default is 'sgd'.
- optim_config: A dictionary containing hyperparameters that will be
passed to the chosen update rule. Each update rule requires different
hyperparameters (see optim.py) but all update rules require a
'learning_rate' parameter so that should always be present.
- lr_decay: A scalar for learning rate decay; after each epoch the learning
rate is multiplied by this value.
- batch_size: Size of minibatches used to compute loss and gradient during
training.
- num_epochs: The number of epochs to run for during training.
- print_every: Integer; training losses will be printed every print_every
iterations.
- verbose: Boolean; if set to false then no output will be printed during
training.
"""
self.model = model
self.X_train = data['X_train']
self.y_train = data['y_train']
self.X_val = data['X_val']
self.y_val = data['y_val']
# Unpack keyword arguments
self.update_rule = kwargs.pop('update_rule', 'sgd')
self.optim_config = kwargs.pop('optim_config', {})
self.lr_decay = kwargs.pop('lr_decay', 1.0)
self.batch_size = kwargs.pop('batch_size', 100)
self.num_epochs = kwargs.pop('num_epochs', 10)
self.print_every = kwargs.pop('print_every', 10)
self.verbose = kwargs.pop('verbose', True)
# Throw an error if there are extra keyword arguments
if len(kwargs) > 0:
extra = ', '.join('"%s"' % k for k in kwargs.keys())
raise ValueError('Unrecognized arguments %s' % extra)
# Make sure the update rule exists, then replace the string
# name with the actual function
if not hasattr(optim, self.update_rule):
raise ValueError('Invalid update_rule "%s"' % self.update_rule)
self.update_rule = getattr(optim, self.update_rule)
self._reset()
def _reset(self):
"""
Set up some book-keeping variables for optimization. Don't call this
manually.
"""
# Set up some variables for book-keeping
self.epoch = 0
self.best_val_acc = 0
self.best_params = {}
self.loss_history = []
self.train_acc_history = []
self.val_acc_history = []
# Make a deep copy of the optim_config for each parameter
self.optim_configs = {}
for p in self.model.params:
d = {k: v for k, v in self.optim_config.iteritems()}
self.optim_configs[p] = d
def _step(self):
"""
Make a single gradient update. This is called by train() and should not
be called manually.
"""
# Make a minibatch of training data
num_train = self.X_train.shape[0]
batch_mask = np.random.choice(num_train, self.batch_size)
X_batch = self.X_train[batch_mask]
y_batch = self.y_train[batch_mask]
# Compute loss and gradient
loss, grads = self.model.loss(X_batch, y_batch)
self.loss_history.append(loss)
# Perform a parameter update
for p, w in self.model.params.iteritems():
dw = grads[p]
config = self.optim_configs[p]
next_w, next_config = self.update_rule(w, dw, config)
self.model.params[p] = next_w
self.optim_configs[p] = next_config
def check_accuracy(self, X, y, num_samples=None, batch_size=100):
"""
Check accuracy of the model on the provided data.
Inputs:
- X: Array of data, of shape (N, d_1, ..., d_k)
- y: Array of labels, of shape (N,)
- num_samples: If not None, subsample the data and only test the model
on num_samples datapoints.
- batch_size: Split X and y into batches of this size to avoid using too
much memory.
Returns:
- acc: Scalar giving the fraction of instances that were correctly
classified by the model.
"""
# Maybe subsample the data
N = X.shape[0]
if num_samples is not None and N > num_samples:
mask = np.random.choice(N, num_samples)
N = num_samples
X = X[mask]
y = y[mask]
# Compute predictions in batches
num_batches = N / batch_size
if N % batch_size != 0:
num_batches += 1
y_pred = []
for i in xrange(num_batches):
start = i * batch_size
end = (i + 1) * batch_size
scores = self.model.loss(X[start:end])
y_pred.append(np.argmax(scores, axis=1))
y_pred = np.hstack(y_pred)
acc = np.mean(y_pred == y)
return acc
def train(self):
"""
Run optimization to train the model.
"""
num_train = self.X_train.shape[0]
iterations_per_epoch = max(num_train / self.batch_size, 1)
num_iterations = self.num_epochs * iterations_per_epoch
for t in xrange(num_iterations):
self._step()
# Maybe print training loss
if self.verbose and t % self.print_every == 0:
print '(Iteration %d / %d) loss: %f' % (
t + 1, num_iterations, self.loss_history[-1])
# At the end of every epoch, increment the epoch counter and decay the
# learning rate.
epoch_end = (t + 1) % iterations_per_epoch == 0
if epoch_end:
self.epoch += 1
for k in self.optim_configs:
self.optim_configs[k]['learning_rate'] *= self.lr_decay
# Check train and val accuracy on the first iteration, the last
# iteration, and at the end of each epoch.
first_it = (t == 0)
            last_it = (t == num_iterations - 1)
if first_it or last_it or epoch_end:
train_acc = self.check_accuracy(self.X_train, self.y_train,
num_samples=1000)
val_acc = self.check_accuracy(self.X_val, self.y_val)
self.train_acc_history.append(train_acc)
self.val_acc_history.append(val_acc)
if self.verbose:
print '(Epoch %d / %d) train acc: %f; val_acc: %f' % (
self.epoch, self.num_epochs, train_acc, val_acc)
# Keep track of the best model
if val_acc > self.best_val_acc:
self.best_val_acc = val_acc
self.best_params = {}
for k, v in self.model.params.iteritems():
self.best_params[k] = v.copy()
# At the end of training swap the best params into the model
self.model.params = self.best_params
| 0.898388 | 0.845879 |
<a href="https://colab.research.google.com/github/oyeabhijit/clear-colab-24211/blob/main/Data_Preprocessing.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Importing Libraries
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```
# Importing CSV Dataset
```
dataset = pd.read_csv('Data.csv')
x = dataset.iloc[ : , :-1].values
y = dataset.iloc[ : , -1].values
print(x)
print(y)
```
# Difference between Class, Object and Methods
A class is the model of something we want to build. For example, if we make a house construction plan that gathers the instructions on how to build a house, then this construction plan is the class.
An object is an instance of the class. So if we take that same example of the house construction plan, then an object is simply a house. A house (the object) that was built by following the instructions of the construction plan (the class).
And therefore there can be many objects of the same class, because we can build many houses from the construction plan.
A method is a tool we can use on the object to complete a specific action. So in this same example, a tool can be to open the main door of the house if a guest is coming. A method can also be seen as a function that is applied onto the object, takes some inputs (that were defined in the class) and returns some output.
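To make the analogy concrete, here is a minimal sketch; the `House` class and its `open_door` method are invented for illustration only:
```
class House:                      # the class: the construction plan
    def __init__(self, color):
        self.color = color        # data defined by the plan

    def open_door(self):          # a method: an action the object can perform
        return f"The door of the {self.color} house is open"

my_house = House("blue")          # an object: one house built from the plan
print(my_house.open_door())
```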
# Replacing Missing Value by Average of it's column
```
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
imputer.fit(x[ : , 1:3])
x[ : , 1:3] = imputer.transform(x[ : , 1:3])
print(x)
```
In line 3, imputer.fit(x[ : , 1:3]), the `:` selects all rows, while `1:3` selects columns 1 and 2. The upper bound is written as 3 because Python slicing excludes the upper bound.
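A tiny illustration of that slicing rule, on a made-up array rather than the dataset:
```
import numpy as np

a = np.arange(12).reshape(3, 4)   # 3 rows, 4 columns
print(a[:, 1:3])                  # all rows, columns 1 and 2 (index 3 is excluded)
```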
# Encoding Categorical Data
Encoding the Independent Variable
```
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
ct = ColumnTransformer(transformers = [('encoder', OneHotEncoder(), [0])], remainder = 'passthrough')
x = np.array(ct.fit_transform(x))
print(x)
```
# Encoding Dependent Variable
```
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
y = le.fit_transform(y)
print(y)
```
# Splitting Dataset into Testing & Training Set
```
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size = 0.2, random_state = 1)
print(x_train)
print(x_test)
print(y_train)
print(y_test)
```
# Feature Scaling
There are two types of Feature Scaling:
**Standardisation:** Subtract the mean of the feature from each value, then divide by the standard deviation (the square root of the variance). The scaled feature mostly varies between -3 and +3.
**Normalisation:** Subtract the minimum value of the feature from each value, then divide by the difference between the maximum and the minimum value of the feature. The scaled feature varies between 0 and 1.
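Written as formulas, with $\mu$ the mean of the feature, $\sigma$ its standard deviation, and $x_{min}$, $x_{max}$ its minimum and maximum:

$$x_{stand} = \frac{x - \mu}{\sigma} \qquad\qquad x_{norm} = \frac{x - x_{min}}{x_{max} - x_{min}}$$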
```
from IPython.display import Image
Image(filename='2.png')
# This cell is just to show the image. It has no relation to the Data Preprocessing program.
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
x_train[:, 3:]=sc.fit_transform(x_train[:, 3:])
x_test[:, 3:]=sc.transform(x_test[:, 3:])
print(x_train)
print(x_test)
```
|
github_jupyter
|
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset = pd.read_csv('Data.csv')
x = dataset.iloc[ : , :-1].values
y = dataset.iloc[ : , -1].values
print(x)
print(y)
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
imputer.fit(x[ : , 1:3])
x[ : , 1:3] = imputer.transform(x[ : , 1:3])
print(x)
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
ct = ColumnTransformer(transformers = [('encoder', OneHotEncoder(), [0])], remainder = 'passthrough')
x = np.array(ct.fit_transform(x))
print(x)
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
y = le.fit_transform(y)
print(y)
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size = 0.2, random_state = 1)
print(x_train)
print(x_test)
print(y_train)
print(y_test)
from IPython.display import Image
Image(filename='2.png')
# This cell is just to show the image. It has no relation to the Data Preprocessing program.
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
x_train[:, 3:]=sc.fit_transform(x_train[:, 3:])
x_test[:, 3:]=sc.transform(x_test[:, 3:])
print(x_train)
print(x_test)
| 0.315947 | 0.987042 |
<img src="images/usm.jpg" width="480" height="240" align="left"/>
# MAT281 - Lab N°05
## Class objectives
* Reinforce the basic concepts of visualization.
## Contents
* [Problem 01](#p1)
## Problem 01
<img src="http://nelsoncos.com/wp-content/uploads/2017/02/sales-icon.png" width="360" height="360" align="center"/>
The dataset is called `company_sales_data.csv`, and it contains information such as the month number, units, price, etc.
The first step is to load the dataset and look at the first rows that make it up:
```
import pandas as pd
import os
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(rc={'figure.figsize':(10,8)})
# load data
df = pd.read_csv(os.path.join("data","company_sales_data.csv"))
df.head()
df
```
The goal is to extract as much information as possible from this dataset. To achieve this, solve the following problems:
**Note.-** You may use the Matplotlib or Seaborn libraries.
1. Read the "total_profit" of all months and display it using a line plot and a scatter plot.
```
palette = sns.color_palette("hls", 6)
sns.set(rc={'figure.figsize':(4,4)})
sns.lineplot(
x='total_profit',
y='total_profit',
data=df,
ci = None,
palette=palette
)
sns.set(rc={'figure.figsize':(4,4)})
sns.scatterplot(
x='total_profit',
y='total_profit',
data=df,
palette=palette
)
```
2. Read all the product sales data and display it using a multi-line plot.
```
sns.set(rc={'figure.figsize':(8,5.5)})
sns.lineplot('month_number', 'facecream', data=df)
sns.lineplot('month_number', 'facewash', data=df)
sns.lineplot('month_number', 'toothpaste', data=df)
sns.lineplot('month_number', 'bathingsoap', data=df)
sns.lineplot('month_number', 'shampoo', data=df)
sns.lineplot('month_number', 'moisturizer', data=df)
plt.legend(['facecream','facewash','toothpaste','bathingsoap','shampoo','moisturizer'])
plt.xlabel("month_number")
plt.ylabel("Cantidades")
```
3. Read the sales data for the "facecream" and "facewash" products and display it using a bar chart.
```
plt.figure(figsize=(10, 6))
barWidth=0.3
plt.bar(df['month_number'],df['facecream'],color='b',label='facecream')
plt.bar(df['month_number']+barWidth,df['facewash'],color='r',label='facewash')
plt.xlabel('mes')
plt.ylabel('total unidades')
plt.legend()
plt.show()
```
4. Read all the product sales data and display it using a box plot.
```
stars_df=df.drop(['month_number','total_units','total_profit'],axis=1)
sns.boxplot(data=stars_df)
```
5. Compute last year's total sales for each product and display them using a pie chart.
```
tipo=['facecream','facewash','toothpaste','bathingsoap','shampoo','moisturizer']
sumas=[df['facecream'].sum(),df['facewash'].sum(),df['toothpaste'].sum(),df['bathingsoap'].sum(),df['shampoo'].sum(),df['moisturizer'].sum()]
plt.pie(sumas,labels=tipo,autopct='%2.1f%%')
plt.show()
```
|
github_jupyter
|
import pandas as pd
import os
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(rc={'figure.figsize':(10,8)})
# load data
df = pd.read_csv(os.path.join("data","company_sales_data.csv"))
df.head()
df
palette = sns.color_palette("hls", 6)
sns.set(rc={'figure.figsize':(4,4)})
sns.lineplot(
x='total_profit',
y='total_profit',
data=df,
ci = None,
palette=palette
)
sns.set(rc={'figure.figsize':(4,4)})
sns.scatterplot(
x='total_profit',
y='total_profit',
data=df,
palette=palette
)
sns.set(rc={'figure.figsize':(8,5.5)})
sns.lineplot('month_number', 'facecream', data=df)
sns.lineplot('month_number', 'facewash', data=df)
sns.lineplot('month_number', 'toothpaste', data=df)
sns.lineplot('month_number', 'bathingsoap', data=df)
sns.lineplot('month_number', 'shampoo', data=df)
sns.lineplot('month_number', 'moisturizer', data=df)
plt.legend(['facecream','facewash','toothpaste','bathingsoap','shampoo','moisturizer'])
plt.xlabel("month_number")
plt.ylabel("Cantidades")
plt.figure(figsize=(10, 6))
barWidth=0.3
plt.bar(df['month_number'],df['facecream'],color='b',label='facecream')
plt.bar(df['month_number']+barWidth,df['facewash'],color='r',label='facewash')
plt.xlabel('mes')
plt.ylabel('total unidades')
plt.legend()
plt.show()
stars_df=df.drop(['month_number','total_units','total_profit'],axis=1)
sns.boxplot(data=stars_df)
tipo=['facecream','facewash','toothpaste','bathingsoap','shampoo','moisturizer']
sumas=[df['facecream'].sum(),df['facewash'].sum(),df['toothpaste'].sum(),df['bathingsoap'].sum(),df['shampoo'].sum(),df['moisturizer'].sum()]
plt.pie(sumas,labels=tipo,autopct='%2.1f%%')
plt.show()
| 0.276788 | 0.915959 |
```
import datetime
import os
import yaml
import numpy as np
import pandas as pd
# Read the environment file
ENV_FILE = '../env.yaml'
with open(ENV_FILE) as f:
params = yaml.load(f) #, Loader=yaml.FullLoader)
# Initialize the paths to the files
ROOT_DIR = os.path.dirname(os.path.abspath(ENV_FILE))
DATA_FILE = os.path.join(ROOT_DIR,
params['directories']['processed'],
params['files']['all_data'])
# Read the data file
epidemie_df = (pd.read_csv(DATA_FILE, parse_dates=['Last Update'])
.assign(day=lambda _df: _df['Last Update'].dt.date)
.drop_duplicates(subset=['Country/Region', 'Province/State', 'day'])
[lambda df: df['day'] <= datetime.date(2020, 3, 12)]
)
epidemie_df.head()
france_df = (epidemie_df[epidemie_df['Country/Region'] == 'France']
.groupby(['Country/Region', 'day'])
.agg({'Confirmed': 'sum', 'Deaths': 'sum', 'Recovered': 'sum'})
.reset_index()
)
france_df.tail()
france_df.head()
france_df['Confirmed'].diff()
def get_country(self, country):
return (epidemie_df[epidemie_df['Country/Region'] == country]
.groupby(['Country/Region', 'day'])
.agg({'Confirmed': 'sum', 'Deaths': 'sum', 'Recovered': 'sum'})
.reset_index()
)
# Monkey Patch pd.DataFrame
pd.DataFrame.get_country = get_country
get_country(epidemie_df, "South Korea").head()
italy_df = epidemie_df.get_country('Italy')
italy_df.head()
korea_df = (epidemie_df[epidemie_df['Country/Region'] == 'South Korea']
.groupby(['Country/Region', 'day'])
.agg({'Confirmed': 'sum', 'Deaths': 'sum', 'Recovered': 'sum'})
.reset_index()
)
korea_df.tail()
korea_df['infected'] = korea_df['Confirmed'].diff()
italy_df['infected'] = italy_df['Confirmed'].diff()
import matplotlib.pyplot as plt
%matplotlib inline
fig = plt.figure(figsize=(12, 5))
plt.plot(korea_df['day'], korea_df['Confirmed'], label='S.Korea confirmed')
plt.plot(korea_df['day'], korea_df['infected'], label='S.Korea infected')
plt.plot(italy_df['day'], italy_df['Confirmed'], label='Italy confirmed')
plt.plot(italy_df['day'], italy_df['infected'], label='Italy infected')
plt.grid(True)
plt.legend()
plt.show()
beta, gamma = [0.01, 0.1]
def SIR(t, y):
S = y[0]
I = y[1]
R = y[2]
return([-beta*S*I, beta*S*I-gamma*I, gamma*I])
korea_df.loc[2:].head()
from scipy.integrate import solve_ivp
beta, gamma = [0.01, 0.1]
solution_korea = solve_ivp(SIR, [0, 40], [51_470_000, 1, 0], t_eval=np.arange(0, 40, 1))
solution_korea
def plot_epidemia(solution, infected, susceptible=False):
fig = plt.figure(figsize=(12, 5))
if susceptible:
plt.plot(solution.t, solution.y[0])
plt.plot(solution.t, solution.y[1])
plt.plot(solution.t, solution.y[2])
plt.plot(infected.reset_index(drop=True).index, infected, "k*:")
plt.grid("True")
if susceptible:
plt.legend(["Susceptible", "Infected", "Recovered", "Original Data"])
else:
plt.legend(["Infected", "Recovered", "Original Data"])
plt.show()
plot_epidemia(solution_korea, korea_df.loc[2:]['infected'])
```
### Approximation
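For reference, the `SIR` function integrated above implements the standard system

$$\frac{dS}{dt} = -\beta S I, \qquad \frac{dI}{dt} = \beta S I - \gamma I, \qquad \frac{dR}{dt} = \gamma I,$$

and the cells below try to choose $\beta$ and $\gamma$ so that the simulated $I(t)$ stays close to the observed daily infections.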
```
korea_df['infected'].max()
korea_df['infected'].diff().max()
(korea_df['Recovered'].diff().loc[korea_df['infected'] != 0] / korea_df.loc[korea_df['infected'] != 0]['infected']).mean()
beta, gamma = [0.001, 0.1]
solution_korea = solve_ivp(SIR, [0, 40], [51_470_000, 1, 0], t_eval=np.arange(0, 41, 1))
plot_epidemia(solution_korea, korea_df.loc[2:]['infected'])
def sumsq_error(parameters):
beta, gamma = parameters
def SIR(t, y):
S = y[0]
I = y[1]
R = y[2]
return([-beta*S*I, beta*S*I-gamma*I, gamma*I])
solution = solve_ivp(SIR, [0, nb_steps-1], [total_population, 1, 0], t_eval=np.arange(0, nb_steps, 1))
return(sum((solution.y[1]-infected_population)**2))
total_population = 51_470_000
infected_population = korea_df.loc[2:]['infected']
nb_steps = len(infected_population)
%%time
from scipy.optimize import minimize
msol = minimize(sumsq_error, [0.001, 0.1], method='Nelder-Mead')
msol.x
# Djiby
beta_optimal = 5.67e-3
gamma_optimal = 24.7
# University computer
beta_optimal = 0.06321101
gamma_optimal = 33.06340503
# Approximation Excel
beta_optimal = 1.5485e-9
gamma_optimal = 0.1839
beta = beta_optimal
gamma = gamma_optimal
def SIR(t, y):
S = y[0]
I = y[1]
R = y[2]
return([-beta*S*I, beta*S*I-gamma*I, gamma*I])
solution_korea_optimal = solve_ivp(SIR, [0, 40], [51_470_000*0.1, 1, 0], t_eval=np.arange(0, 40, 1))
solution_korea_optimal
plot_epidemia(solution_korea_optimal, korea_df.loc[2:]['infected'])
fig = plt.figure(figsize=(12, 5))
plt.plot(solution_korea_optimal.t, solution_korea_optimal.y[1])
plt.plot(korea_df.loc[2:]['infected'].reset_index(drop=True).index, korea_df.loc[2:]['infected'], "k*:")
plt.grid("True")
plt.legend(["Infected", "Original Data"])
plt.show()
china_df = epidemie_df.get_country('Mainland China')[:49]
china_df.tail()
china_df.set_index('day').plot.line(figsize=(12, 5));
beta, gamma = [0.001, 0.1]
china_df['infected'] = china_df['Confirmed'].diff()
nb_steps = china_df.shape[0]
solution_china = solve_ivp(SIR, [0, nb_steps-1], [1_350_000_000, 1, 0], t_eval=np.arange(0, nb_steps, 1))
fig = plt.figure(figsize=(12, 5))
plt.plot(solution_china.t, solution_china.y[1])
plt.plot(china_df['infected'].reset_index(drop=True).index, china_df['infected'], "k*:")
plt.title('China')
plt.grid("True")
plt.legend(["Infected", "Original Data"])
plt.show()
korea_df.to_clipboard()
```
|
github_jupyter
|
import datetime
import os
import yaml
import numpy as np
import pandas as pd
# Read the environment file
ENV_FILE = '../env.yaml'
with open(ENV_FILE) as f:
params = yaml.load(f) #, Loader=yaml.FullLoader)
# Initialize the paths to the files
ROOT_DIR = os.path.dirname(os.path.abspath(ENV_FILE))
DATA_FILE = os.path.join(ROOT_DIR,
params['directories']['processed'],
params['files']['all_data'])
# Lecture du fichier de données
epidemie_df = (pd.read_csv(DATA_FILE, parse_dates=['Last Update'])
.assign(day=lambda _df: _df['Last Update'].dt.date)
.drop_duplicates(subset=['Country/Region', 'Province/State', 'day'])
[lambda df: df['day'] <= datetime.date(2020, 3, 12)]
)
epidemie_df.head()
france_df = (epidemie_df[epidemie_df['Country/Region'] == 'France']
.groupby(['Country/Region', 'day'])
.agg({'Confirmed': 'sum', 'Deaths': 'sum', 'Recovered': 'sum'})
.reset_index()
)
france_df.tail()
france_df.head()
france_df['Confirmed'].diff()
def get_country(self, country):
return (epidemie_df[epidemie_df['Country/Region'] == country]
.groupby(['Country/Region', 'day'])
.agg({'Confirmed': 'sum', 'Deaths': 'sum', 'Recovered': 'sum'})
.reset_index()
)
# Monkey Patch pd.DataFrame
pd.DataFrame.get_country = get_country
get_country(epidemie_df, "South Korea").head()
italy_df = epidemie_df.get_country('Italy')
italy_df.head()
korea_df = (epidemie_df[epidemie_df['Country/Region'] == 'South Korea']
.groupby(['Country/Region', 'day'])
.agg({'Confirmed': 'sum', 'Deaths': 'sum', 'Recovered': 'sum'})
.reset_index()
)
korea_df.tail()
korea_df['infected'] = korea_df['Confirmed'].diff()
italy_df['infected'] = italy_df['Confirmed'].diff()
import matplotlib.pyplot as plt
%matplotlib inline
fig = plt.figure(figsize=(12, 5))
plt.plot(korea_df['day'], korea_df['Confirmed'], label='S.Korea confirmed')
plt.plot(korea_df['day'], korea_df['infected'], label='S.Korea infected')
plt.plot(italy_df['day'], italy_df['Confirmed'], label='Italy confirmed')
plt.plot(italy_df['day'], italy_df['infected'], label='Italy infected')
plt.grid(True)
plt.legend()
plt.show()
beta, gamma = [0.01, 0.1]
def SIR(t, y):
S = y[0]
I = y[1]
R = y[2]
return([-beta*S*I, beta*S*I-gamma*I, gamma*I])
korea_df.loc[2:].head()
from scipy.integrate import solve_ivp
beta, gamma = [0.01, 0.1]
solution_korea = solve_ivp(SIR, [0, 40], [51_470_000, 1, 0], t_eval=np.arange(0, 40, 1))
solution_korea
def plot_epidemia(solution, infected, susceptible=False):
fig = plt.figure(figsize=(12, 5))
if susceptible:
plt.plot(solution.t, solution.y[0])
plt.plot(solution.t, solution.y[1])
plt.plot(solution.t, solution.y[2])
plt.plot(infected.reset_index(drop=True).index, infected, "k*:")
plt.grid("True")
if susceptible:
plt.legend(["Susceptible", "Infected", "Recovered", "Original Data"])
else:
plt.legend(["Infected", "Recovered", "Original Data"])
plt.show()
plot_epidemia(solution_korea, korea_df.loc[2:]['infected'])
korea_df['infected'].max()
korea_df['infected'].diff().max()
(korea_df['Recovered'].diff().loc[korea_df['infected'] != 0] / korea_df.loc[korea_df['infected'] != 0]['infected']).mean()
beta, gamma = [0.001, 0.1]
solution_korea = solve_ivp(SIR, [0, 40], [51_470_000, 1, 0], t_eval=np.arange(0, 41, 1))
plot_epidemia(solution_korea, korea_df.loc[2:]['infected'])
def sumsq_error(parameters):
beta, gamma = parameters
def SIR(t, y):
S = y[0]
I = y[1]
R = y[2]
return([-beta*S*I, beta*S*I-gamma*I, gamma*I])
solution = solve_ivp(SIR, [0, nb_steps-1], [total_population, 1, 0], t_eval=np.arange(0, nb_steps, 1))
return(sum((solution.y[1]-infected_population)**2))
total_population = 51_470_000
infected_population = korea_df.loc[2:]['infected']
nb_steps = len(infected_population)
%%time
from scipy.optimize import minimize
msol = minimize(sumsq_error, [0.001, 0.1], method='Nelder-Mead')
msol.x
# Djiby
beta_optimal = 5.67e-3
gamma_optimal = 24.7
# PC de la fac
beta_optimal = 0.06321101
gamma_optimal = 33.06340503
# Approximation Excel
beta_optimal = 1.5485e-9
gamma_optimal = 0.1839
beta = beta_optimal
gamma = gamma_optimal
def SIR(t, y):
S = y[0]
I = y[1]
R = y[2]
return([-beta*S*I, beta*S*I-gamma*I, gamma*I])
solution_korea_optimal = solve_ivp(SIR, [0, 40], [51_470_000*0.1, 1, 0], t_eval=np.arange(0, 40, 1))
solution_korea_optimal
plot_epidemia(solution_korea_optimal, korea_df.loc[2:]['infected'])
fig = plt.figure(figsize=(12, 5))
plt.plot(solution_korea_optimal.t, solution_korea_optimal.y[1])
plt.plot(korea_df.loc[2:]['infected'].reset_index(drop=True).index, korea_df.loc[2:]['infected'], "k*:")
plt.grid("True")
plt.legend(["Infected", "Original Data"])
plt.show()
china_df = epidemie_df.get_country('Mainland China')[:49]
china_df.tail()
china_df.set_index('day').plot.line(figsize=(12, 5));
beta, gamma = [0.001, 0.1]
china_df['infected'] = china_df['Confirmed'].diff()
nb_steps = china_df.shape[0]
solution_china = solve_ivp(SIR, [0, nb_steps-1], [1_350_000_000, 1, 0], t_eval=np.arange(0, nb_steps, 1))
fig = plt.figure(figsize=(12, 5))
plt.plot(solution_china.t, solution_china.y[1])
plt.plot(china_df['infected'].reset_index(drop=True).index, china_df['infected'], "k*:")
plt.title('China')
plt.grid("True")
plt.legend(["Infected", "Original Data"])
plt.show()
korea_df.to_clipboard()
```
r"""
A module defining some "nicer" fourier transform functions.
We define only two functions -- an arbitrary-dimension forward transform, and its inverse. In each case, the transform
is designed to replicate the continuous transform. That is, the transform is volume-normalised and obeys correct
Fourier conventions.
In this adaptation, the FFT backend is provided by ``jax.numpy.fft`` (the original module uses ``pyFFTW``
when installed, which provides a significant speedup and multi-threading).
Conveniently, we allow for arbitrary Fourier convention, according to the scheme in
http://mathworld.wolfram.com/FourierTransform.html. That is, we define the forward and inverse *n*-dimensional
transforms respectively as
.. math:: F(k) = \sqrt{\frac{|b|}{(2\pi)^{1-a}}}^n \int f(r) e^{-i b\mathbf{k}\cdot\mathbf{r}} d^n\mathbf{r}
and
.. math:: f(r) = \sqrt{\frac{|b|}{(2\pi)^{1+a}}}^n \int F(k) e^{+i b\mathbf{k}\cdot\mathbf{r}} d^n \mathbf{k}.
In both transforms, the corresponding co-ordinates are returned so a completely consistent transform is simple to get.
This makes switching from standard frequency to angular frequency very simple.
We note that currently, only positive values for b are implemented (in fact, using negative b is consistent, but
one must be careful that the frequencies returned are descending, rather than ascending).
"""
import warnings
__all__ = ['fft', 'ifft', 'fftfreq', 'fftshift', 'ifftshift']
# This version uses the jax.numpy.fft backend rather than pyFFTW.
HAVE_FFTW = False
from jax.numpy.fft import fftn, ifftn, ifftshift as _ifftshift, fftshift as _fftshift, fftfreq as _fftfreq
import numpy as np
def fft(X, L=None, Lk=None, a=0, b=2 * np.pi, left_edge=None, axes=None, ret_cubegrid=False):
r"""
Arbitrary-dimension nice Fourier Transform.
This function wraps numpy's ``fftn`` and applies some nice properties. Notably, the returned fourier transform
is equivalent to what would be expected from a continuous Fourier Transform (including normalisations etc.). In
addition, arbitrary conventions are supported (see :mod:`powerbox.dft` for details).
Default parameters have the same normalising conventions as ``numpy.fft.fftn``.
The output object always has the zero in the centre, with monotonically increasing spectral arguments.
Parameters
----------
X : array
An array with arbitrary dimensions defining the field to be transformed. Should correspond exactly
to the continuous function for which it is an analogue. A lower-dimensional transform can be specified by using
the ``axes`` argument.
L : float or array-like, optional
The length of the box which defines ``X``. If a scalar, each transformed dimension in ``X`` is assumed to have
the same length. If array-like, must be of the same length as the number of transformed dimensions. The default
returns the un-normalised DFT (same as numpy).
Lk : float or array-like, optional
The length of the fourier-space box which defines the dual of ``X``. Only one of L/Lk needs to be provided. If
provided, L takes precedence. If a scalar, each transformed dimension in ``X`` is assumed to have
the same length. If array-like, must be of the same length as the number of transformed dimensions.
a,b : float, optional
These define the Fourier convention used. See :mod:`powerbox.dft` for details. The defaults return the standard DFT
as defined in :mod:`numpy.fft`.
left_edge : float or array-like, optional
The co-ordinate at the left-edge for each dimension that is being transformed. By default, sets the left
edge to -L/2, so that the input is centred before transforming (i.e. equivalent to ``fftshift(fft(fftshift(X)))``)
axes : sequence of ints, optional
The axes to take the transform over. The default is to use all axes for the transform.
ret_cubegrid : bool, optional
Whether to return the entire grid of frequency magnitudes.
Returns
-------
ft : array
The DFT of X, normalised to be consistent with the continuous transform.
freq : list of arrays
The frequencies in each dimension, consistent with the Fourier conventions specified.
grid : array
Only returned if ``ret_cubegrid`` is ``True``. An array with shape given by ``axes`` specifying the magnitude
of the frequencies at each point of the fourier transform.
"""
if not HAVE_FFTW:
warnings.warn("You do not have pyFFTW installed. Installing it should give some speed increase.")
if axes is None:
axes = list(range(len(X.shape)))
N = np.array([X.shape[axis] for axis in axes])
# Get the box volume if given the fourier-space box volume
if L is None and Lk is None:
L = N
elif L is not None: # give precedence to L
if np.isscalar(L):
L = L * np.ones(len(axes))
elif Lk is not None:
if np.isscalar(Lk):
Lk = Lk * np.ones(len(axes))
L = N * 2 * np.pi / (Lk * b) # Take account of the fourier convention.
left_edge = _set_left_edge(left_edge, axes, L)
    V = float(np.prod(L))  # Volume of box
    Vx = V / np.prod(N)  # Volume of cell
ft = Vx * fftshift(fftn(X, axes=axes), axes=axes) * np.sqrt(np.abs(b) / (2 * np.pi) ** (1 - a)) ** len(axes)
dx = np.array([float(l) / float(n) for l, n in zip(L, N)])
freq = np.array([fftfreq(n, d=d, b=b) for n, d in zip(N, dx)])
# Adjust phases of the result to align with the left edge properly.
ft = _adjust_phase(ft, left_edge, freq, axes, b)
return _retfunc(ft, freq, axes, ret_cubegrid)
def ifft(X, Lk=None, L=None, a=0, b=2 * np.pi, axes=None, left_edge=None, ret_cubegrid=False):
r"""
Arbitrary-dimension nice inverse Fourier Transform.
This function wraps numpy's ``ifftn`` and applies some nice properties. Notably, the returned fourier transform
is equivalent to what would be expected from a continuous inverse Fourier Transform (including normalisations etc.).
In addition, arbitrary conventions are supported (see :mod:`powerbox.dft` for details).
Default parameters have the same normalising conventions as ``numpy.fft.ifftn``.
Parameters
----------
X : array
An array with arbitrary dimensions defining the field to be transformed. Should correspond exactly
to the continuous function for which it is an analogue. A lower-dimensional transform can be specified by using
the ``axes`` argument. Note that if using a non-periodic function, the co-ordinates should be monotonically
increasing.
Lk : float or array-like, optional
The length of the box which defines ``X``. If a scalar, each transformed dimension in ``X`` is assumed to have
the same length. If array-like, must be of the same length as the number of transformed dimensions. The default
returns the un-normalised DFT (the same as numpy).
L : float or array-like, optional
The length of the real-space box, defining the dual of ``X``. Only one of Lk/L needs to be passed. If L is
passed, it is used. If a scalar, each transformed dimension in ``X`` is assumed to have
the same length. If array-like, must be of the same length as the number of transformed dimensions. The default
of ``Lk=1`` returns the un-normalised DFT.
a,b : float, optional
These define the Fourier convention used. See :mod:`powerbox.dft` for details. The defaults return the standard DFT
as defined in :mod:`numpy.fft`.
axes : sequence of ints, optional
The axes to take the transform over. The default is to use all axes for the transform.
left_edge : float or array-like, optional
The co-ordinate at the left-edge (in k-space) for each dimension that is being transformed. By default, sets the
left edge to -Lk/2, equivalent to the standard numpy ifft. This affects only the phases of the result.
ret_cubegrid : bool, optional
Whether to return the entire grid of real-space co-ordinate magnitudes.
Returns
-------
ft : array
The IDFT of X, normalised to be consistent with the continuous transform.
freq : list of arrays
The real-space co-ordinate grid in each dimension, consistent with the Fourier conventions specified.
grid : array
Only returned if ``ret_cubegrid`` is ``True``. An array with shape given by ``axes`` specifying the magnitude
of the real-space co-ordinates at each point of the inverse fourier transform.
"""
if not HAVE_FFTW:
warnings.warn("You do not have pyFFTW installed. Installing it should give some speed increase.")
if axes is None:
axes = list(range(len(X.shape)))
N = np.array([X.shape[axis] for axis in axes])
# Get the box volume if given the real-space box volume
if Lk is None and L is None:
Lk = 1
elif L is not None:
if np.isscalar(L):
L = np.array([L] * len(axes))
dx = L / N
Lk = 2 * np.pi / (dx * b)
elif np.isscalar(Lk):
Lk = [Lk] * len(axes)
Lk = np.array(Lk)
left_edge = _set_left_edge(left_edge, axes, Lk)
    V = np.prod(Lk)
dk = np.array([float(lk) / float(n) for lk, n in zip(Lk, N)])
ft = V * ifftn(X, axes=axes) * np.sqrt(np.abs(b) / (2 * np.pi) ** (1 + a)) ** len(axes)
ft = ifftshift(ft, axes=axes)
freq = np.array([fftfreq(n, d=d, b=b) for n, d in zip(N, dk)])
ft = _adjust_phase(ft, left_edge, freq, axes, -b)
return _retfunc(ft, freq, axes, ret_cubegrid)
def _adjust_phase(ft, left_edge, freq, axes, b):
for i, (l, f) in enumerate(zip(left_edge, freq)):
xp = np.exp(-b * 1j * f * l)
obj = tuple([None] * axes[i]) + (slice(None, None, None),) + tuple([None] * (ft.ndim - axes[i] - 1))
ft *= xp[obj]
return ft
def _set_left_edge(left_edge, axes, L):
if left_edge is None:
left_edge = [-l/2. for l in L]
else:
if np.isscalar(left_edge):
left_edge = [left_edge] * len(axes)
else:
assert len(left_edge) == len(axes)
return left_edge
def _retfunc(ft, freq, axes, ret_cubegrid):
if not ret_cubegrid:
return ft, freq
else:
grid = freq[0] ** 2
for i in range(1, len(axes)):
grid = np.add.outer(grid, freq[i] ** 2)
return ft, freq, np.sqrt(grid)
def fftshift(x, *args, **kwargs):
"""
The same as numpy's fftshift, except that it preserves units (if Astropy quantities are used)
All extra arguments are passed directly to numpy's `fftshift`.
"""
out = _fftshift(x, *args, **kwargs)
if hasattr(x, "unit"):
return out * x.unit
else:
return out
def ifftshift(x, *args, **kwargs):
"""
The same as numpy's ifftshift, except that it preserves units (if Astropy quantities are used)
All extra arguments are passed directly to numpy's `ifftshift`.
"""
out = _ifftshift(x, *args, **kwargs)
if hasattr(x, "unit"):
return out * x.unit
else:
return out
def fftfreq(N, d=1.0, b=2 * np.pi):
"""
Return the fourier frequencies for a box with N cells, using general Fourier convention.
Parameters
----------
N : int
The number of grid cells
d : float, optional
The interval between cells
b : float, optional
The fourier-convention of the frequency component (see :mod:`powerbox.dft` for details).
Returns
-------
freq : array
The N symmetric frequency components of the Fourier transform. Always centred at 0.
"""
return fftshift(_fftfreq(N, d=d)) * (2 * np.pi / b)
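# --- Minimal usage sketch (added for illustration; not part of the original module) ---
# Assumes jax and numpy are available. Transforms a 1-D Gaussian sampled on a box of
# length `box_len` and prints the shape of the transform and the returned frequency range.
if __name__ == "__main__":
    box_len = 10.0
    n_cells = 128
    x = np.linspace(-box_len / 2, box_len / 2, n_cells, endpoint=False)
    field = np.exp(-(x ** 2))
    ft, freq = fft(field, L=box_len)  # volume-normalised forward transform
    print("transform shape:", ft.shape)
    print("frequency range:", float(freq[0].min()), "to", float(freq[0].max()))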
```
r"""
A module defining some "nicer" fourier transform functions.
We define only two functions -- an arbitrary-dimension forward transform, and its inverse. In each case, the transform
is designed to replicate the continuous transform. That is, the transform is volume-normalised and obeys correct
Fourier conventions.
The actual FFT backend is provided by ``pyFFTW`` if it is installed, which provides a significant speedup, and
multi-threading.
Conveniently, we allow for arbitrary Fourier convention, according to the scheme in
http://mathworld.wolfram.com/FourierTransform.html. That is, we define the forward and inverse *n*-dimensional
transforms respectively as
.. math:: F(k) = \sqrt{\frac{|b|}{(2\pi)^{1-a}}}^n \int f(r) e^{-i b\mathbf{k}\cdot\mathbf{r}} d^n\mathbf{r}
and
.. math:: f(r) = \sqrt{\frac{|b|}{(2\pi)^{1+a}}}^n \int F(k) e^{+i b\mathbf{k}\cdot\mathbf{r}} d^n \mathbf{k}.
In both transforms, the corresponding co-ordinates are returned so a completely consistent transform is simple to get.
This makes switching from standard frequency to angular frequency very simple.
We note that currently, only positive values for b are implemented (in fact, using negative b is consistent, but
one must be careful that the frequencies returned are descending, rather than ascending).
"""
import warnings
__all__ = ['fft', 'ifft', 'fftfreq', 'fftshift', 'ifftshift']
HAVE_FFTW = False
from jax.numpy.fft import fftn, ifftn, ifftshift as _ifftshift, fftshift as _fftshift, fftfreq as _fftfreq
# To avoid MKL-related bugs, numpy needs to be imported after pyfftw: see https://github.com/pyFFTW/pyFFTW/issues/40
import numpy as np
def fft(X, L=None, Lk=None, a=0, b=2 * np.pi, left_edge=None, axes=None, ret_cubegrid=False):
r"""
Arbitrary-dimension nice Fourier Transform.
This function wraps numpy's ``fftn`` and applies some nice properties. Notably, the returned fourier transform
is equivalent to what would be expected from a continuous Fourier Transform (including normalisations etc.). In
addition, arbitrary conventions are supported (see :mod:`powerbox.dft` for details).
Default parameters have the same normalising conventions as ``numpy.fft.fftn``.
The output object always has the zero in the centre, with monotonically increasing spectral arguments.
Parameters
----------
X : array
An array with arbitrary dimensions defining the field to be transformed. Should correspond exactly
to the continuous function for which it is an analogue. A lower-dimensional transform can be specified by using
the ``axes`` argument.
L : float or array-like, optional
The length of the box which defines ``X``. If a scalar, each transformed dimension in ``X`` is assumed to have
the same length. If array-like, must be of the same length as the number of transformed dimensions. The default
returns the un-normalised DFT (same as numpy).
Lk : float or array-like, optional
The length of the fourier-space box which defines the dual of ``X``. Only one of L/Lk needs to be provided. If
provided, L takes precedence. If a scalar, each transformed dimension in ``X`` is assumed to have
the same length. If array-like, must be of the same length as the number of transformed dimensions.
a,b : float, optional
These define the Fourier convention used. See :mod:`powerbox.dft` for details. The defaults return the standard DFT
as defined in :mod:`numpy.fft`.
left_edge : float or array-like, optional
The co-ordinate at the left-edge for each dimension that is being transformed. By default, sets the left
edge to -L/2, so that the input is centred before transforming (i.e. equivalent to ``fftshift(fft(fftshift(X)))``)
axes : sequence of ints, optional
The axes to take the transform over. The default is to use all axes for the transform.
ret_cubegrid : bool, optional
Whether to return the entire grid of frequency magnitudes.
Returns
-------
ft : array
The DFT of X, normalised to be consistent with the continuous transform.
freq : list of arrays
The frequencies in each dimension, consistent with the Fourier conventions specified.
grid : array
Only returned if ``ret_cubegrid`` is ``True``. An array with shape given by ``axes`` specifying the magnitude
of the frequencies at each point of the fourier transform.
"""
if not HAVE_FFTW:
warnings.warn("You do not have pyFFTW installed. Installing it should give some speed increase.")
if axes is None:
axes = list(range(len(X.shape)))
N = np.array([X.shape[axis] for axis in axes])
# Get the box volume if given the fourier-space box volume
if L is None and Lk is None:
L = N
elif L is not None: # give precedence to L
if np.isscalar(L):
L = L * np.ones(len(axes))
elif Lk is not None:
if np.isscalar(Lk):
Lk = Lk * np.ones(len(axes))
L = N * 2 * np.pi / (Lk * b) # Take account of the fourier convention.
left_edge = _set_left_edge(left_edge, axes, L)
V = float(np.product(L)) # Volume of box
Vx = V / np.product(N) # Volume of cell
ft = Vx * fftshift(fftn(X, axes=axes), axes=axes) * np.sqrt(np.abs(b) / (2 * np.pi) ** (1 - a)) ** len(axes)
dx = np.array([float(l) / float(n) for l, n in zip(L, N)])
freq = np.array([fftfreq(n, d=d, b=b) for n, d in zip(N, dx)])
# Adjust phases of the result to align with the left edge properly.
ft = _adjust_phase(ft, left_edge, freq, axes, b)
return _retfunc(ft, freq, axes, ret_cubegrid)
def ifft(X, Lk=None, L=None, a=0, b=2 * np.pi, axes=None, left_edge=None, ret_cubegrid=False):
r"""
Arbitrary-dimension nice inverse Fourier Transform.
This function wraps numpy's ``ifftn`` and applies some nice properties. Notably, the returned fourier transform
is equivalent to what would be expected from a continuous inverse Fourier Transform (including normalisations etc.).
In addition, arbitrary conventions are supported (see :mod:`powerbox.dft` for details).
Default parameters have the same normalising conventions as ``numpy.fft.ifftn``.
Parameters
----------
X : array
An array with arbitrary dimensions defining the field to be transformed. Should correspond exactly
to the continuous function for which it is an analogue. A lower-dimensional transform can be specified by using
the ``axes`` argument. Note that if using a non-periodic function, the co-ordinates should be monotonically
increasing.
Lk : float or array-like, optional
The length of the box which defines ``X``. If a scalar, each transformed dimension in ``X`` is assumed to have
the same length. If array-like, must be of the same length as the number of transformed dimensions. The default
returns the un-normalised DFT (the same as numpy).
L : float or array-like, optional
The length of the real-space box, defining the dual of ``X``. Only one of Lk/L needs to be passed. If L is
passed, it is used. If a scalar, each transformed dimension in ``X`` is assumed to have
the same length. If array-like, must be of the same length as the number of transformed dimensions. The default
of ``Lk=1`` returns the un-normalised DFT.
a,b : float, optional
These define the Fourier convention used. See :mod:`powerbox.dft` for details. The defaults return the standard DFT
as defined in :mod:`numpy.fft`.
axes : sequence of ints, optional
The axes to take the transform over. The default is to use all axes for the transform.
left_edge : float or array-like, optional
The co-ordinate at the left-edge (in k-space) for each dimension that is being transformed. By default, sets the
left edge to -Lk/2, equivalent to the standard numpy ifft. This affects only the phases of the result.
ret_cubegrid : bool, optional
Whether to return the entire grid of real-space co-ordinate magnitudes.
Returns
-------
ft : array
The IDFT of X, normalised to be consistent with the continuous transform.
freq : list of arrays
The real-space co-ordinate grid in each dimension, consistent with the Fourier conventions specified.
grid : array
Only returned if ``ret_cubegrid`` is ``True``. An array with shape given by ``axes`` specifying the magnitude
of the real-space co-ordinates at each point of the inverse fourier transform.
"""
if not HAVE_FFTW:
warnings.warn("You do not have pyFFTW installed. Installing it should give some speed increase.")
if axes is None:
axes = list(range(len(X.shape)))
N = np.array([X.shape[axis] for axis in axes])
# Get the box volume if given the real-space box volume
if Lk is None and L is None:
Lk = 1
elif L is not None:
if np.isscalar(L):
L = np.array([L] * len(axes))
dx = L / N
Lk = 2 * np.pi / (dx * b)
elif np.isscalar(Lk):
Lk = [Lk] * len(axes)
Lk = np.array(Lk)
left_edge = _set_left_edge(left_edge, axes, Lk)
V = np.product(Lk)
dk = np.array([float(lk) / float(n) for lk, n in zip(Lk, N)])
ft = V * ifftn(X, axes=axes) * np.sqrt(np.abs(b) / (2 * np.pi) ** (1 + a)) ** len(axes)
ft = ifftshift(ft, axes=axes)
freq = np.array([fftfreq(n, d=d, b=b) for n, d in zip(N, dk)])
ft = _adjust_phase(ft, left_edge, freq, axes, -b)
return _retfunc(ft, freq, axes, ret_cubegrid)
def _adjust_phase(ft, left_edge, freq, axes, b):
for i, (l, f) in enumerate(zip(left_edge, freq)):
xp = np.exp(-b * 1j * f * l)
obj = tuple([None] * axes[i]) + (slice(None, None, None),) + tuple([None] * (ft.ndim - axes[i] - 1))
ft *= xp[obj]
return ft
def _set_left_edge(left_edge, axes, L):
if left_edge is None:
left_edge = [-l/2. for l in L]
else:
if np.isscalar(left_edge):
left_edge = [left_edge] * len(axes)
else:
assert len(left_edge) == len(axes)
return left_edge
def _retfunc(ft, freq, axes, ret_cubegrid):
if not ret_cubegrid:
return ft, freq
else:
grid = freq[0] ** 2
for i in range(1, len(axes)):
grid = np.add.outer(grid, freq[i] ** 2)
return ft, freq, np.sqrt(grid)
def fftshift(x, *args, **kwargs):
"""
The same as numpy's fftshift, except that it preserves units (if Astropy quantities are used)
All extra arguments are passed directly to numpy's `fftshift`.
"""
out = _fftshift(x, *args, **kwargs)
if hasattr(x, "unit"):
return out * x.unit
else:
return out
def ifftshift(x, *args, **kwargs):
"""
The same as numpy's ifftshift, except that it preserves units (if Astropy quantities are used)
All extra arguments are passed directly to numpy's `ifftshift`.
"""
out = _ifftshift(x, *args, **kwargs)
if hasattr(x, "unit"):
return out * x.unit
else:
return out
def fftfreq(N, d=1.0, b=2 * np.pi):
"""
Return the fourier frequencies for a box with N cells, using general Fourier convention.
Parameters
----------
N : int
The number of grid cells
d : float, optional
The interval between cells
b : float, optional
The fourier-convention of the frequency component (see :mod:`powerbox.dft` for details).
Returns
-------
freq : array
The N symmetric frequency components of the Fourier transform. Always centred at 0.
"""
return fftshift(_fftfreq(N, d=d)) * (2 * np.pi / b)
# Understanding Asynchronous Operations in Aerospike
This tutorial describes asynchronous operations in Aerospike: why they are used, the architecture, and how to program with async operations.
This notebook requires an Aerospike database running on localhost. Visit the [Aerospike notebooks repo](https://github.com/aerospike-examples/interactive-notebooks) for additional details and the Docker container.
## Introduction
In this notebook, we will see the benefits, design, and specifics of programming with asynchronous operations in Aerospike.
Aerospike provides asynchronous APIs for many operations. We will describe the benefits of using async operations and key abstractions in the client related to async requests. After covering the theoretical ground, we will show how it all comes together with specific code examples.
The notebook tutorial has two parts:
- architecture and concepts, and
- coding examples.
The main topics include:
- Execution models in Aerospike
- Benefits of async
- Key concepts
- Framework for async programming
- Coding examples
## Prerequisites
This tutorial assumes familiarity with the following topics:
- [Aerospike Notebooks - Readme and Tips](../readme_tips.ipynb)
- [Hello World](hello_world.ipynb)
## Initialization
### Ensure database is running
This notebook requires that Aerospike database is running.
```
import io.github.spencerpark.ijava.IJava;
import io.github.spencerpark.jupyter.kernel.magic.common.Shell;
IJava.getKernelInstance().getMagics().registerMagics(Shell.class);
%sh asd
```
### Download and install additional components.
Install the Aerospike Java client and the Netty package; Netty is described later in the notebook.
```
%%loadFromPOM
<dependencies>
<dependency>
<groupId>com.aerospike</groupId>
<artifactId>aerospike-client</artifactId>
<version>5.0.0</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-all</artifactId>
<version>4.1.53.Final</version>
<scope>compile</scope>
</dependency>
</dependencies>
```
### Constants and Convenience Functions
We will use some constants and convenience functions throughout this tutorial, including the namespace "test" and set "async-ops".
```
final String Namespace = "test";
final String Set = "async-ops";
// truncate data, close client and event loops - called multiple times to initialize with different options
// described in greater detail later
void Cleanup() {
try {
client.truncate(null, Namespace, Set, null);
}
catch (AerospikeException e) {
// ignore
}
client.close();
eventLoops.close();
};
```
## Open a Terminal Tab
You may execute shell commands including Aerospike tools like [aql](https://docs.aerospike.com/docs/tools/aql/index.html) and [asadm](https://docs.aerospike.com/docs/tools/asadm/index.html) in the terminal tab. Open a terminal tab by selecting File->Open from the notebook menu, and then New->Terminal.
# Synchronous, Asynchronous, and Background Operations
An application uses the Aerospike client library (aka the client) to interact with Aerospike Database. The client sets up a connection to the appropriate server node and sends in a request for execution in one of the following modes:
- Synchronous: The request thread makes the request, waits for the response, and processes the response when it arrives.
- Asynchronous: The request thread submits one or more requests, and the results are processed in one or more callback threads as they arrive.
- Background: The request thread submits the request, and the operation (or task) completes in the background. The submission returns immediately, while the actual operation executes separately. The application can check the completion status of the task and, after it completes, examine the results in the database with one or more separate requests.
Note that a background operation may be considered a special type of asynchronous operation; it is applicable only to updates in Aerospike. By asynchronous operations we refer only to those that return results in a callback.
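As a rough illustration of the difference in call shape (not an executable cell at this point in the notebook), the sketch below issues the same put synchronously and then asynchronously with an inline stub listener. It assumes `client`, `eventLoops`, `Namespace`, and `Set` have been initialized and the relevant classes imported, as done in the Async Framework section below.
<pre>
// Synchronous: the calling thread blocks until the server responds.
WritePolicy writePolicy = new WritePolicy();
Key key = new Key(Namespace, Set, "sync-vs-async");
client.put(writePolicy, key, new Bin("bin1", 1));
// Asynchronous: the call returns immediately; the result is delivered to the
// listener on an event-loop (callback) thread.
client.put(eventLoops.next(), new WriteListener() {
    public void onSuccess(Key k) { System.out.println("async put ok: " + k.userKey); }
    public void onFailure(AerospikeException e) { System.out.println("async put failed: " + e.getMessage()); }
}, writePolicy, key, new Bin("bin1", 1));
</pre>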
# Asynchronous Operations For Better Resource Efficiency
Between the time a request is sent to the server and the time the result arrives (the “request latency”), the client and application need not wait idly if high throughput is the goal. Higher throughput can be achieved through concurrent requests.
- Synchronous: The application can spawn multiple threads and process multiple requests in parallel, one per thread at a time.
- Asynchronous: The application can process requests asynchronously by submitting them in parallel without waiting for the results. The results are processed as they arrive in a different “callback” thread. An async request uses a dedicated connection to the server.
- Pipeline: Multiple requests could be sent over the same connection to the same server node, and their results received over the same connection. Thus there is greater sharing of threads and connections across multiple requests. Aerospike currently does not support pipeline processing.
In many cases, asynchronous processing can be more resource efficient and can deliver better throughput than multi-threaded synchronous processing because threads have memory and CPU (context-switch) overhead and their number may be limited by the OS.
On the other hand, the asynchronous model is more complex to program and debug. The application should make judicious use of synchronous, asynchronous, and background requests; a single client instance can perform all of these types of commands.
Note that background operations, where applicable, typically deliver superior throughput, especially for non-UDF operations.
# Supported Asynchronous Operations
Most CRUD operations have the async variant.
- Single record operations
- add, append, delete, apply(udf), get, getHeader, operate, prepend, put, touch
- Batch operations:
- exists (array listener and sequence listener), get (batch list and batch sequence listener), get (record array and record sequence listener), getHeader
- Query/scan: Callback handles a series of records, a single record at a time.
- query, queryPartitions
- scanAll, scanPartitions
- Metadata: createIndex, dropIndex
- Operational: info
Please refer to the [API documentation](https://docs.aerospike.com/apidocs/java/com/aerospike/client/AerospikeClient.html) for details.
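For example, the single-record async read variant takes an event loop and a `RecordListener`. The sketch below is illustrative only; it assumes `client` and `eventLoops` are initialized and the relevant classes imported, as in the Async Framework section below.
<pre>
Key key = new Key(Namespace, Set, "id-0");
// A null policy uses the client defaults.
client.get(eventLoops.next(), new RecordListener() {
    public void onSuccess(Key k, Record record) {
        // record is null if the key does not exist
        System.out.println("read " + k.userKey + " -> " + record);
    }
    public void onFailure(AerospikeException e) {
        System.out.println("async get failed: " + e.getMessage());
    }
}, null, key);
</pre>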
# Execution Model
The async methods take two more arguments than their sync variants: an “event loop” and a “listener (callback)”. See the code in the [Async Framework](#Async-Framework) section below.
- Event loops: An event-loop represents the loop of "submit a request" and "asynchronously process the result" for concurrent processing of events or requests. Multiple event loops are used to leverage multiple CPU cores.
- Listener: The listener encapsulates the processing of results.
- Listener types: Depending on the expected number of records in the result and whether they arrive at once or individually, different listener types are used, such as a single-record listener, a record-array listener, or a record-sequence listener.
- Completion handlers: A single record or record array is processed with the success or failure handler. In a record sequence listener, each record is processed with a "record" handler, while the success handler is called to mark the end of the sequence.
## Application Call Sequence
The application is responsible for spreading requests evenly across event loops as well as throttling the rate of requests if the request rate can exceed the client or server capacity. The call sequence involves these steps (see the code in [Async Framework](#Async-Framework) section below):
- Initialize event loops.
- Implement the listener with success and failure handlers.
- Submit requests across event loops, throttling to stay below maximum outstanding requests limit.
- Wait for all outstanding requests to finish.
# Understanding Event Loops
Let's look at the key concepts relating to event loops. As described above, an event loop represents concurrent submit-callback processing of requests. See the code in [Async Framework](#Async-Framework) section below.
**Number of event loops**: To maximize parallelism of the client hardware, create as many event loops as there are CPU cores dedicated to the Aerospike application. An event loop is aligned with a CPU core, not with a server node or a request type.
**Concurrency level**: The maximum concurrency level in each event loop depends on the effective server throughput seen by the client, and in aggregate may not exceed it. A larger value would result in request timeouts and other failures.
**Connection pools and event loops**: Connection pools are allocated on a per-node basis and are independent of event loops. When an async request needs to connect to a node, it uses a connection from the node’s connection pool only for the duration of the request and then releases it.
**Connection pool size**: Concurrency across all loops must be supported by the number of connections in the connection pool. The connection pool per node should be set equal to or greater than the total number of outstanding requests across all event loops (because all requests may go to the same node in the extreme case).
**Delay queue buffer**: To buffer a temporary mismatch in processing and submission rates, there is a delay queue buffer in front of an event loop where requests are held until an async request slot becomes available in the event loop. The queued request is automatically assigned to a slot and processed without involvement of the application.
**Throttling**: The delay queue cannot buffer a long-running mismatch between submission and processing speeds, however; if the delay queue fills up, a request will not be accepted and the client will return a “delay queue full” error. The application should throttle by keeping track of outstanding requests and issuing a new request only when an outstanding one finishes. If the delay queue size is set to zero, throttling must be handled entirely in the application code.
## Event Loop Variants: Netty, NIO, EPOLL
Both Netty and Direct NIO event loops are supported in Aerospike.
[Netty](https://netty.io/) is an asynchronous, event-driven network application framework for high-performance servers, based on the Java Non-blocking IO ([NIO](https://en.wikipedia.org/wiki/Non-blocking_I/O_(Java))) package. [Epoll](https://en.wikipedia.org/wiki/Epoll) (event poll) is a Linux-specific construct that allows a process to monitor multiple file descriptors and get notifications when I/O is possible on them.
Netty allows users to share their existing event loops with AerospikeClient, which can improve performance. Netty event loops are also required when using TLS connections. However, Netty is an optional external library dependency.
Direct NIO event loops are lighter weight and slightly faster than Netty defaults when not sharing event loops. Direct NIO does not have an external library dependency.
You should consider the tradeoffs between these event loop types; refer to the links above for further details.
# Async Framework
Below we walk through the steps in setting up a typical async operation framework.
## Initialize event loops
Initialize event loops. Allocate an event loop for each CPU core.
Examine the code snippets below.
- Initialize the event policy. Select the level of parallelism desired; it cannot exceed the server throughput.
<pre>
EventPolicy eventPolicy = new EventPolicy();
int commandsPerEventLoop = 50;
eventPolicy.maxCommandsInProcess = commandsPerEventLoop;
</pre>
- Select delay queue buffer size in front of the event loop.
<pre>
int maxCommandsInQueue = 50;
eventPolicy.maxCommandsInQueue = maxCommandsInQueue;
</pre>
- Create event loops object.
<pre>
// here we use direct NIO and 2 event loops
int numLoops = 2;
EventLoops eventLoops = new NioEventLoops(eventPolicy, numLoops);
</pre>
In the following cell, the function InitializeEventLoops allows initialization of different types of event loops. The function will be called multiple times later in the notebook to experiment with different settings.
```
import com.aerospike.client.async.EventPolicy;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.epoll.EpollEventLoopGroup;
import com.aerospike.client.async.NettyEventLoops;
import com.aerospike.client.async.EventLoops;
import com.aerospike.client.async.NioEventLoops;
enum EventLoopType{DIRECT_NIO, NETTY_NIO, NETTY_EPOLL};
// a function to create event loops with specified parameters
EventLoops InitializeEventLoops(EventLoopType eventLoopType, int numLoops, int commandsPerEventLoop,
int maxCommandsInQueue) {
EventPolicy eventPolicy = new EventPolicy();
eventPolicy.maxCommandsInProcess = commandsPerEventLoop;
eventPolicy.maxCommandsInQueue = maxCommandsInQueue;
EventLoops eventLoops = null;
switch(eventLoopType) {
case DIRECT_NIO:
eventLoops = new NioEventLoops(eventPolicy, numLoops);
break;
case NETTY_NIO:
NioEventLoopGroup nioGroup = new NioEventLoopGroup(numLoops);
eventLoops = new NettyEventLoops(eventPolicy, nioGroup);
break;
case NETTY_EPOLL:
EpollEventLoopGroup epollGroup = new EpollEventLoopGroup(numLoops);
eventLoops = new NettyEventLoops(eventPolicy, epollGroup);
break;
default:
System.out.println("Error: Invalid event loop type");
}
return eventLoops;
}
// initialize event loops
final int NumLoops = 2;
final int CommandsPerEventLoop = 50;
final int DelayQueueSize = 50;
EventLoops eventLoops = InitializeEventLoops(EventLoopType.DIRECT_NIO, NumLoops, CommandsPerEventLoop, DelayQueueSize);
System.out.format("Event loops initialized with num-loops: %s, commands-per-event-loop: %s, delay-queue-size: %s.\n",
NumLoops, CommandsPerEventLoop, DelayQueueSize);;
```
## Initialize Client
Examine the code snippets below.
- Initialize client policy with event loops.
<pre>
ClientPolicy clientPolicy = new ClientPolicy();
clientPolicy.eventLoops = eventLoops;
</pre>
- Set total concurrent connections per node by multiplying concurrency level at event loop (maxCommandsInProcess) by the number of event loops.
<pre>
int concurrentMax = commandsPerEventLoop * numLoops;
</pre>
- This is the max number of commands or requests per node if all requests go to one node. Adjust the default connection pool size of 300 if concurrentMax is larger.
<pre>
if (clientPolicy.maxConnsPerNode < concurrentMax) {
clientPolicy.maxConnsPerNode = concurrentMax;
}
</pre>
- Initialize the client with the client policy and seed hosts in cluster.
<pre>
Host[] hosts = Host.parseHosts("localhost", 3000);
AerospikeClient client = new AerospikeClient(clientPolicy, hosts);
</pre>
In the following cell, the function InitializeClient allows initialization of the client with specified parameters.
```
import com.aerospike.client.policy.ClientPolicy;
import com.aerospike.client.Host;
import com.aerospike.client.AerospikeClient;
// a function to initialize the client with specified parameters
AerospikeClient InitializeClient(EventLoops eventLoops, int numLoops, int commandsPerEventLoop, Host[] hosts) {
ClientPolicy clientPolicy = new ClientPolicy();
clientPolicy.eventLoops = eventLoops;
int concurrentMax = commandsPerEventLoop * numLoops;
if (clientPolicy.maxConnsPerNode < concurrentMax) {
clientPolicy.maxConnsPerNode = concurrentMax;
}
AerospikeClient client = new AerospikeClient(clientPolicy, hosts);
return client;
}
// initialize the client
Host[] hosts = Host.parseHosts("localhost", 3000);
AerospikeClient client = InitializeClient(eventLoops, NumLoops, CommandsPerEventLoop, hosts);
System.out.print("Client initialized.\n");
```
## Initialize event loop throttles and atomic operation count.
The event loop throttles object is initialized with the number of event loops and commands per event loop. It provides two methods "waitForSlot" and "addSlot" to manage concurrency for an event loop, both take an index parameter that identifies the event loop.
<pre>
Throttles throttles = new Throttles(numLoops, commandsPerEventLoop);
</pre>
The operation count is used to track the number of finished operations. Because multiple callback threads access and increment it concurrently, it is defined as an AtomicInteger, which supports atomic get/increment operations.
<pre>
AtomicInteger asyncOpCount = new AtomicInteger();
</pre>
In the following cell, the function InitializeThrottles creates throttles for event loops with specified parameters.
```
import com.aerospike.client.async.Throttles;
// creates event loop throttles with specified parameters
Throttles InitializeThrottles(int numLoops, int commandsPerEventLoop) {
Throttles throttles = new Throttles(numLoops, commandsPerEventLoop);
return throttles;
}
// initialize event loop throttles
Throttles throttles = InitializeThrottles(NumLoops, CommandsPerEventLoop);
System.out.format("Throttles initialized for %s loops with %s concurrent operations per loop.\n",
NumLoops, CommandsPerEventLoop);
// initialize the atomic integer to keep track of async operations count
import java.util.concurrent.atomic.AtomicInteger;
AtomicInteger asyncOpCount = new AtomicInteger();
System.out.format("Atomic operation count initialized.");;
```
## Define Listener and Handlers
Define the listener with success and failure handlers to process results. Below, MyWriteListener implements the WriteListener interface to process record insertions; it:
- implements success and failure handlers
- releases a slot in the event loop on success or failure for another insert to proceed
throttles.addSlot(eventLoopIndex, 1);
- signals completion through monitor on failure or when the write count reaches the expected final count
monitor.notifyComplete();
- prints progress every "progressFreq" records
```
import com.aerospike.client.Key;
import com.aerospike.client.listener.WriteListener;
import com.aerospike.client.async.Monitor;
import com.aerospike.client.AerospikeException;
// write listener
// - implements success and failure handlers
// - releases a slot on success or failure for another insert to proceed
// - signals completion through monitor on failure or when the write count reaches the expected final count
// - prints progress every "progressFreq" records
class MyWriteListener implements WriteListener {
private final Key key;
private final int eventLoopIndex;
private final int finalCount;
private Monitor monitor;
private final int progressFreq;
public MyWriteListener(Key key, int eventLoopIndex, int finalCount, Monitor monitor, int progressFreq) {
this.key = key;
this.eventLoopIndex = eventLoopIndex;
this.finalCount = finalCount;
this.monitor = monitor;
this.progressFreq = progressFreq;
}
// Write success callback.
public void onSuccess(Key key) {
// Write succeeded.
throttles.addSlot(eventLoopIndex, 1);
int currentCount = asyncOpCount.incrementAndGet();
if ( progressFreq > 0 && currentCount % progressFreq == 0) {
System.out.format("Inserted %s records.\n", currentCount);
}
if (currentCount == finalCount) {
monitor.notifyComplete();
}
}
// Error callback.
public void onFailure(AerospikeException e) {
System.out.format("Put failed: namespace=%s set=%s key=%s exception=%s\n",
key.namespace, key.setName, key.userKey, e.getMessage());
monitor.notifyComplete();
}
}
System.out.print("Write listener defined.");
```
## Submit Async Requests Using Throttling
While submitting async requests, it is important to stay below the planned concurrent capacity by using throttling.
The function InsertRecords below inserts the specified number of records asynchronously with id-\<index\> as the user-key and two integer fields bin1 and bin2. It keeps track of and returns the elapsed time.
Throttling is achieved by waiting for an available slot in the event loop.
<pre>
if (throttles.waitForSlot(eventLoopIndex, 1)) {
// submit async request
}
</pre>
After submitting all requests, the main thread must wait for outstanding requests to complete before closing.
<pre>
monitor.waitTillComplete();
</pre>
```
import java.util.concurrent.TimeUnit;
import com.aerospike.client.Bin;
import com.aerospike.client.policy.WritePolicy;
import com.aerospike.client.async.EventLoop;
long InsertRecords(int numRecords, EventLoops eventLoops, Throttles throttles, int progressFreq) {
long startTime = System.nanoTime();
Monitor monitor = new Monitor();
asyncOpCount.set(0);
WritePolicy policy = new WritePolicy();
for (int i = 0; i < numRecords; i++) {
Key key = new Key(Namespace, Set, "id-"+i);
Bin bin1 = new Bin(new String("bin1"), i);
Bin bin2 = new Bin(new String("bin2"), numRecords*10+i);
EventLoop eventLoop = eventLoops.next();
int eventLoopIndex = eventLoop.getIndex();
if (throttles.waitForSlot(eventLoopIndex, 1)) {
try {
client.put(eventLoop, new MyWriteListener(key, eventLoopIndex, numRecords, monitor, progressFreq),
policy, key, bin1, bin2);
}
catch (Exception e) {
throttles.addSlot(eventLoopIndex, 1);
}
}
}
monitor.waitTillComplete();
long endTime = System.nanoTime();
return (endTime - startTime);
}
final int NumRecords = 100000;
long elapsedTime = InsertRecords(NumRecords, eventLoops, throttles, NumRecords/4);
System.out.format("Inserted %s records with %s event-loops and %s commands-per-loop in %s milliseconds.\n",
NumRecords, NumLoops, CommandsPerEventLoop, elapsedTime/1000000);;
```
## Closing
Both AerospikeClient and EventLoops should be closed before program shutdown. The latest client waits for pending async commands to finish before performing the actual close, so there is no need to track pending async commands externally. Earlier versions provide a waitTillComplete() call on the Monitor object to ensure async operations are completed. The Cleanup function implemented above truncates the database and closes the client and event loops.
```
// truncates database and closes client and event-loops
Cleanup();
System.out.println("Removed data and closed client and event loops.");
```
# Nested and Inline Async Operations
It is possible to nest a series of async calls, one in the processing logic of another. Some simple examples of such cascaded calls are:
- Retry the same operation in the failure handler
- Issue an async read to validate an async write operation
- Issue an async write to update a record retrieved from an async read operation.
The following code illustrates a simple example in which each record retrieved from an async filtered scan is updated asynchronously by incrementing the value of bin2. Note the inline implementation of WriteListener. The scan filter selects records with bin1 values between 1 and 1000. Throttling and progress reporting are also present, as described above.
```
import com.aerospike.client.policy.ScanPolicy;
import com.aerospike.client.listener.RecordSequenceListener;
import com.aerospike.client.Record;
import com.aerospike.client.exp.Exp;
// Scan callback
class ScanRecordSequenceListener implements RecordSequenceListener {
private EventLoops eventLoops;
private Throttles throttles;
private Monitor scanMonitor;
private AtomicInteger writeCount = new AtomicInteger();
private int scanCount = 0;
private final int progressFreq;
public ScanRecordSequenceListener(EventLoops eventLoops, Throttles throttles, Monitor scanMonitor,
int progressFreq) {
this.eventLoops = eventLoops;
this.throttles = throttles;
this.scanMonitor = scanMonitor;
this.progressFreq = progressFreq;
}
public void onRecord(Key key, Record record) throws AerospikeException {
++scanCount;
if ( progressFreq > 0 && scanCount % progressFreq == 0) {
System.out.format("Scan returned %s records.\n", scanCount);
}
// submit async update operation with throttle
EventLoop eventLoop = eventLoops.next();
int eventLoopIndex = eventLoop.getIndex();
if (throttles.waitForSlot(eventLoopIndex, 1)) { // throttle by waiting for an available slot
try {
WritePolicy policy = new WritePolicy();
Bin bin2 = new Bin(new String("bin2"), 1);
client.add(eventLoop, new WriteListener() { // inline write listener
public void onSuccess(final Key key) {
// Write succeeded.
throttles.addSlot(eventLoopIndex, 1);
int currentCount = writeCount.incrementAndGet();
if ( progressFreq > 0 && currentCount % progressFreq == 0) {
System.out.format("Processed %s records.\n", currentCount);
}
}
public void onFailure(AerospikeException e) {
System.out.format("Put failed: namespace=%s set=%s key=%s exception=%s\n",
key.namespace, key.setName, key.userKey, e.getMessage());
throttles.addSlot(eventLoopIndex, 1);
int currentCount = writeCount.incrementAndGet();
if ( progressFreq > 0 && currentCount % progressFreq == 0) {
System.out.format("Processed %s records.\n", currentCount);
}
}
},
policy, key, bin2);
}
catch (Exception e) {
System.out.format("Error: exception in write listener - %s", e.getMessage());
}
}
}
public void onSuccess() {
if (scanCount != writeCount.get()) { // give the last write some time to finish
try {
Thread.sleep(100);
}
catch(InterruptedException e) {
System.out.format("Error: exception - %s", e);
}
}
scanMonitor.notifyComplete();
}
public void onFailure(AerospikeException e) {
System.out.format("Error: scan failed with exception - %s", e);
scanMonitor.notifyComplete();
}
}
// cleanup prior state
Cleanup();
// initialize data, event loops and client
int numRecords = 100000;
int numLoops = 2;
int commandsPerLoop = 25;
int delayQueueSize = 0;
eventLoops = InitializeEventLoops(EventLoopType.DIRECT_NIO, numLoops, commandsPerLoop, delayQueueSize);
client = InitializeClient(eventLoops, numLoops, commandsPerLoop, hosts);
throttles = InitializeThrottles(numLoops, commandsPerLoop);
InsertRecords(numRecords, eventLoops, throttles, 0);
System.out.format("Inserted %s records.\n", numRecords);
EventLoop eventLoop = eventLoops.next();
Monitor scanMonitor = new Monitor();
int progressFreq = 100;
// issue async scan that in turn issues async update on each returned record
ScanPolicy policy = new ScanPolicy();
policy.filterExp = Exp.build(
Exp.and(
Exp.le(Exp.intBin("bin1"), Exp.val(1000)),
Exp.ge(Exp.intBin("bin1"), Exp.val(1))));
client.scanAll(eventLoop, new ScanRecordSequenceListener(eventLoops, throttles, scanMonitor, progressFreq),
policy, Namespace, Set);
scanMonitor.waitTillComplete();
System.out.format("Done: nested async scan and update");;
```
# Misc Examples
## Delay Queue Full Error
If the delay queue fills up, a request is not accepted and the client returns a “delay queue full” error. Below we simulate this condition with 2 event loops, each configured with 25 in-process slots and a delay queue of 20 (90 outstanding requests in total), and then issue a hundred concurrent requests. The throttle is effectively turned off by allowing a very large number of requests to go through.
```
// clean up the current state
Cleanup();
// initialize data, event loops and client
int numRecords = 100;
int numLoops = 2;
int commandsPerLoop = 25;
int delayQueueSize = 20;
int noThrottle = 10000; //effectively no throttle
eventLoops = InitializeEventLoops(EventLoopType.DIRECT_NIO, numLoops, commandsPerLoop, delayQueueSize);
client = InitializeClient(eventLoops, numLoops, commandsPerLoop, hosts);
throttles = InitializeThrottles(numLoops, noThrottle);
// attempt to insert records above the available slots and delay queue capacity
long elapsedTime = InsertRecords(numRecords, eventLoops, throttles, 0);
System.out.format("%s ops/ms with event-loops: %s and commands-per-loop: %s.\n",
numRecords/(elapsedTime/1000000), numLoops, commandsPerLoop);;
```
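In a sketch like the one below (not part of the tutorial's helper classes), the failure handler can check for this condition and back off; the `ResultCode.ASYNC_QUEUE_FULL` constant name is an assumption about the Java client's `ResultCode` class, so check it against your client version.
```
// Sketch: detecting the "delay queue full" condition in a failure handler.
// The constant name ASYNC_QUEUE_FULL is assumed; verify it in ResultCode for your client version.
import com.aerospike.client.ResultCode;
WriteListener queueAwareListener = new WriteListener() {
    public void onSuccess(Key key) {
        // write succeeded
    }
    public void onFailure(AerospikeException e) {
        if (e.getResultCode() == ResultCode.ASYNC_QUEUE_FULL) {
            // Too many commands queued: back off, retry later, or lower the submission rate.
            System.out.format("Delay queue full: %s\n", e.getMessage());
        } else {
            System.out.format("Write failed: %s\n", e.getMessage());
        }
    }
};
```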
# Comparing Different Settings
The code below allows comparison of insert throughput with different parameters: event loop type, number of event loops, and concurrency level in each loop. It doesn't produce meaningful results in the default notebook container setting, where the client and server run in the same container. A meaningful comparison can be drawn by pointing to the desired server cluster and also adjusting the client environment.
```
// Throughput with parameterized async insertion
int numRecords = 100000;
EventLoopType[] eventLoopOptions = {EventLoopType.DIRECT_NIO, EventLoopType.NETTY_NIO, EventLoopType.NETTY_EPOLL};
int[] numLoopsOptions = {2, 4, 8};
int[] commandsPerLoopOptions = {50, 100, 200};
for (EventLoopType eventLoopType: eventLoopOptions) {
for (int numLoops: numLoopsOptions) {
for (int commandsPerLoop: commandsPerLoopOptions) {
Cleanup();
eventLoops = InitializeEventLoops(eventLoopType, numLoops, commandsPerLoop, 0);
client = InitializeClient(eventLoops, numLoops, commandsPerLoop, hosts);
throttles = InitializeThrottles(numLoops, commandsPerLoop);
long elapsedTime = InsertRecords(numRecords, eventLoops, throttles, 0);
System.out.format("%s ops/ms with %s %s event-loops and %s commands-per-loop.\n",
numRecords/(elapsedTime/1000000), numLoops, eventLoopType, commandsPerLoop);
}
}
}
System.out.println("Done.");;
```
# Takeaways and Conclusion
The tutorial described the architecture of, and key concepts in, asynchronous operations in the Aerospike client. It presented the programming framework in which async requests can be submitted and handled, and illustrated with code how event loops, throttling, and inline async calls are implemented. The tradeoffs a developer must weigh in choosing an execution mode (synchronous, asynchronous, or background) involve multiple factors, including the nature of the operations, the client and server setup, throughput needs, and programming complexity.
# Clean up
Remove tutorial data and close connection.
```
Cleanup();
System.out.println("Removed tutorial data and closed server connection.");
```
# Further Exploration and Resources
Here are some links for further exploration
Resources
- Related notebooks
- [Implementing SQL Operations: SELECT](sql_select.ipynb),
- [Implementing SQL Operations: Aggregates - Part 1](sql_aggregates_1.ipynb) and [Part 2](sql_aggregates_2.ipynb).
- [Implementing SQL Operations: CREATE, UPDATE, DELETE](sql_updates.ipynb)
- [Working with Lists](java-working_with_lists.ipynb)
- [Working with Maps](java-working_with_maps.ipynb)
- Aerospike Developer Hub
- [Java Developers Resources](https://developer.aerospike.com/java-developers)
- Github repos
- [Java code examples](https://github.com/aerospike/aerospike-client-java/tree/master/examples/src/com/aerospike/examples)
- [Reactive programming examples for the Java client](https://github.com/aerospike/aerospike-client-java-reactive)
- Documentation
- [Java Client](https://www.aerospike.com/docs/client/java/index.html)
- [Java API Reference](https://www.aerospike.com/apidocs/java/)
- [Aerospike Documentation](https://docs.aerospike.com/docs/)
- Blog
- [Simple Web Application Using Java, Spring Boot, Aerospike and Docker](https://medium.com/aerospike-developer-blog/simple-web-application-using-java-spring-boot-aerospike-database-and-docker-ad13795e0089)
## Next steps
Visit [Aerospike notebooks repo](https://github.com/aerospike-examples/interactive-notebooks) to run additional Aerospike notebooks. To run a different notebook, download the notebook from the repo to your local machine, and then click on File->Open in the notebook menu, and select Upload.
```
test_name = "test6"
```
# data file
```
pi_list = range(9)
sigma_list = range(9)
f = open("{}.data".format(test_name), 'w')
for (k_pi, pi) in enumerate(pi_list):
for (k_sigma, sigma) in enumerate(sigma_list):
k = k_pi*len(sigma_list)+k_sigma
if (sigma < 3.0):
if (pi < 3.0):
f.write("{} 4.0\n".format(k))
elif (pi < 6.0):
f.write("{} -2.0\n".format(k))
else:
f.write("{} 5.0\n".format(k))
elif (sigma < 6.0):
if (pi < 3.0):
f.write("{} -8.0\n".format(k))
elif (pi < 6.0):
f.write("{} 10.0\n".format(k))
else:
f.write("{} -3.0\n".format(k))
else:
if (pi < 3.0):
f.write("{} 1.0\n".format(k))
elif (pi < 6.0):
f.write("{} -7.0\n".format(k))
else:
f.write("{} 6.0\n".format(k))
f.close()
```
# cov file
```
f = open("{}.cov".format(test_name), 'w')
for i in range(80):
f.write("{} {} 1.0\n".format(i, i))
f.write("{} {} 0.25\n".format(i, i + 1))
f.write("80 80 1.0\n")
f.close()
```
# rebin file
```
f = open("{}.rebin".format(test_name), 'w')
f.write("pi")
for i in range(10):
f.write(" {:.1f}".format(i))
f.write(" sigma")
for i in range(10):
f.write(" {:.1f}".format(i))
f.write("\n")
f.write("R0 pi")
for i in range(0, 10, 3):
f.write(" {:.1f}".format(i))
f.write(" sigma")
for i in range(0, 10, 3):
f.write(" {:.1f}".format(i))
f.close()
```
# sol file
```
import numpy as np
import matplotlib.pyplot as plt

cov_mat = np.zeros(81*81, dtype=float).reshape((81,81))
for i in range(80):
cov_mat[i, i] = 1.0
cov_mat[i, i+1] = 0.25
cov_mat[i+1, i] = 0.25
cov_mat[-1,-1] = 1.0
inv_cov_mat = np.linalg.inv(cov_mat)
transformation_matrix = np.zeros(81*9, dtype=float).reshape(81,9)
for i in range(9):
for j in range(81):
if i == 0 and j in [0, 1, 2, 9, 10, 11, 18, 19, 20]:
transformation_matrix[j,i] = 1.0
elif i == 1 and j in [3, 4, 5, 12, 13, 14, 21, 22, 23]:
transformation_matrix[j,i] = 1.0
elif i == 2 and j in [6, 7, 8, 15, 16, 17, 24, 25, 26]:
transformation_matrix[j,i] = 1.0
elif i == 3 and j in [27, 28, 29, 36, 37, 38, 45, 46, 47]:
transformation_matrix[j,i] = 1.0
elif i == 4 and j in [30, 31, 32, 39, 40, 41, 48, 49, 50]:
transformation_matrix[j,i] = 1.0
elif i == 5 and j in [33, 34, 35, 42, 43, 44, 51, 52, 53]:
transformation_matrix[j,i] = 1.0
elif i == 6 and j in [54, 55, 56, 63, 64, 65, 72, 73, 74]:
transformation_matrix[j,i] = 1.0
elif i == 7 and j in [57, 58, 59, 66, 67, 68, 75, 76, 77]:
transformation_matrix[j,i] = 1.0
elif i == 8 and j in [60, 61, 62, 69, 70, 71, 78, 79, 80]:
transformation_matrix[j,i] = 1.0
inv_cov_mat_new = np.dot(transformation_matrix.transpose(), np.dot(inv_cov_mat, transformation_matrix))
cov_mat_new = np.linalg.inv(inv_cov_mat_new)
print(np.dot(inv_cov_mat, transformation_matrix))
pi_list = range(0, 9, 3)
sigma_list = range(0, 9, 3)
f = open("{}.sol".format(test_name), 'w')
f.write("R0 data")
for (k_pi, pi) in enumerate(pi_list):
for (k_sigma, sigma) in enumerate(sigma_list):
if (sigma < 3.0):
if (pi < 3.0):
f.write(" 4.0000")
elif (pi < 6.0):
f.write(" -2.0000")
else:
f.write(" 5.0000")
elif (sigma < 6.0):
if (pi < 3.0):
f.write(" -8.0000")
elif (pi < 6.0):
f.write(" 10.0000")
else:
f.write(" -3.0000")
else:
if (pi < 3.0):
f.write(" 1.0000")
elif (pi < 6.0):
f.write(" -7.0000")
else:
f.write(" 6.0000")
f.write(" cov")
for i in range(9):
for j in range(9):
f.write(" {:.4f}".format(cov_mat_new[i,j]))
f.close()
cov_mat = np.zeros(81*81, dtype=float).reshape((81,81))
for i in range(80):
cov_mat[i, i] = 1.0
cov_mat[i, i+1] = 0.25
cov_mat[i+1, i] = 0.25
cov_mat[-1,-1] = 1.0
print(cov_mat)
inv_cov_mat = np.linalg.inv(cov_mat)
print(inv_cov_mat)
transformation_matrix = np.zeros(81*9, dtype=float).reshape(81,9)
for i in range(9):
for j in range(81):
if i == 0 and j in [0, 1, 2, 9, 10, 11, 18, 19, 20]:
transformation_matrix[j,i] = 1.0
elif i == 1 and j in [3, 4, 5, 12, 13, 14, 21, 22, 23]:
transformation_matrix[j,i] = 1.0
elif i == 2 and j in [6, 7, 8, 15, 16, 17, 24, 25, 26]:
transformation_matrix[j,i] = 1.0
        elif i == 3 and j in [27, 28, 29, 36, 37, 38, 45, 46, 47]:
transformation_matrix[j,i] = 1.0
elif i == 4 and j in [30, 31, 32, 39, 40, 41, 48, 49, 50]:
transformation_matrix[j,i] = 1.0
elif i == 5 and j in [33, 34, 35, 42, 43, 44, 51, 52, 53]:
transformation_matrix[j,i] = 1.0
elif i == 6 and j in [54, 55, 56, 63, 64, 65, 72, 73, 74]:
transformation_matrix[j,i] = 1.0
elif i == 7 and j in [57, 58, 59, 66, 67, 68, 75, 76, 77]:
transformation_matrix[j,i] = 1.0
elif i == 8 and j in [60, 61, 62, 69, 70, 71, 78, 79, 80]:
transformation_matrix[j,i] = 1.0
plt.figure()
plt.subplot(111)
plt.contour(transformation_matrix)
plt.show()
inv_cov_mat_new = np.dot(transformation_matrix.transpose(), np.dot(inv_cov_mat, transformation_matrix))
print(inv_cov_mat_new)
cov_mat_new = np.linalg.inv(inv_cov_mat_new)
print(cov_mat_new.shape)
print(cov_mat_new)
plt.figure()
plt.subplot(121)
plt.contourf(inv_cov_mat)
plt.subplot(122)
plt.contourf(inv_cov_mat_new)
plt.show()
```
# Specific arguments for particular field geometry
The openPMD format supports 3 types of geometries:
- Cartesian 2D
- Cartesian 3D
- Cylindrical with azimuthal decomposition (thetaMode)
This notebook shows how to use the arguments of `get_field` which are specific to a given geometry.
## (optional) Preparing this notebook to run it locally
If you choose to run this notebook on your local machine, you will need to download the openPMD data files which will then be visualized. To do so, execute the following cell. (Downloading the data may take a few seconds.)
```
import os, sys
def download_if_absent( dataset_name ):
"Function that downloads and decompress a chosen dataset"
if os.path.exists( dataset_name ) is False:
import wget, tarfile
tar_name = "%s.tar.gz" %dataset_name
url = "https://github.com/openPMD/openPMD-example-datasets/raw/draft/%s" %tar_name
wget.download(url, tar_name)
with tarfile.open( tar_name ) as tar_file:
tar_file.extractall()
os.remove( tar_name )
download_if_absent( 'example-3d' )
download_if_absent( 'example-thetaMode' )
```
In addition, we choose here to incorporate the plots inside the notebook.
```
%matplotlib inline
```
## Preparing the API
Again, we need to import the `OpenPMDTimeSeries` object:
```
from opmd_viewer import OpenPMDTimeSeries
```
and to create objects that point to the 3D data and the cylindrical data.
(NB: The argument `check_all_files` below is optional. By default, `check_all_files` is `True`, and in this case the code checks that all files in the time series are consistent, i.e. that they all contain the same fields and particle quantities, with the same metadata. When `check_all_files` is `False`, these verifications are skipped, which allows the `OpenPMDTimeSeries` object to be created faster.)
```
ts_3d = OpenPMDTimeSeries('./example-3d/hdf5/', check_all_files=False )
ts_circ = OpenPMDTimeSeries('./example-thetaMode/hdf5/', check_all_files=False )
```
## 3D Cartesian geometry
For 3D Cartesian geometry, the `get_field` method has additional arguments, in order to select a 2D slice into the 3D volume:
- `slicing_dir` selects the axis across which the slice is taken. See the examples below:
```
# Slice across y (i.e. in a plane parallel to x-z)
Ez1, info_Ez1 = ts_3d.get_field( field='E', coord='z', iteration=500,
slicing_dir='y', plot=True )
# Slice across z (i.e. in a plane parallel to x-y)
Ez2, info_Ez2 = ts_3d.get_field( field='E', coord='z', iteration=500,
slicing_dir='z', plot=True )
```
- For a given slicing direction, `slicing` selects which slice to take: `slicing` is a number between -1 and 1, where -1 takes the slice at the lower bound of the slicing range (e.g. $z_{min}$ if `slicing_dir` is `z`) and 1 takes the slice at the upper bound (e.g. $z_{max}$ if `slicing_dir` is `z`). For example:
```
# Slice across z, very close to zmin.
Ez2, info_Ez2 = ts_3d.get_field( field='E', coord='z', iteration=500,
slicing_dir='z', slicing=-0.9, plot=True )
```
When passing `slicing=None`, `get_field` returns a full 3D Cartesian array. This can be useful for further analysis by hand, with `numpy` (e.g. calculating the total energy in the field).
```
# Get the full 3D Cartesian array
Ez_3d, info_Ez_3d = ts_3d.get_field( field='E', coord='z', iteration=500, slicing=None )
print( Ez_3d.ndim )
```
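For instance, here is a sketch of such an analysis by hand, summing $E_z^2$ over the box; it assumes the metadata object exposes the cell sizes as `dx`, `dy` and `dz`:
```
import numpy as np
# Sketch: integrate Ez^2 over the simulation box, using cell sizes from the metadata object
cell_volume = info_Ez_3d.dx * info_Ez_3d.dy * info_Ez_3d.dz
print( np.sum( Ez_3d**2 ) * cell_volume )
```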
## Cylindrical geometry (with azimuthal decomposition)
For data in the `thetaMode` geometry, the fields are decomposed into azimuthal modes. Thus, the `get_field` method has an argument `m`, which selects the mode:
- Choosing an integer value for `m` selects a particular mode (for instance, here one can see a laser wakefield, which is entirely contained in mode 0)
```
Ey, info_Ey = ts_circ.get_field( field='E', coord='y', iteration=500, m=0,
plot=True, theta=0.5)
```
- Choosing `m='all'` sums all the modes (for instance, here the laser field, which is in the mode 1, dominates the fields)
```
Ey, info_Ey = ts_circ.get_field( field='E', coord='y', iteration=500, m='all',
plot=True, theta=0.5)
```
The argument `theta` (in radians) selects the plane of observation: this plane contains the $z$ axis and has an angle `theta` with respect to the $x$ axis.
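For instance, passing `theta` close to $\pi/2$ selects the plane that contains the $z$ and $y$ axes:
```
# Observe the same field in the plane containing the z and y axes (theta ~ pi/2)
import numpy as np
Ey_rot, info_Ey_rot = ts_circ.get_field( field='E', coord='y', iteration=500, m='all',
                                         plot=True, theta=0.5*np.pi )
```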
When passing `theta=None`, `get_field` returns the full 3D Cartesian array, reconstructed from the azimuthal modes. This can be useful for further analysis by hand, with `numpy` (e.g. calculating the total energy in the field), or for comparison with Cartesian simulations.
```
# Get the full 3D Cartesian array
Ey_3d, info_Ey3d = ts_circ.get_field( field='E', coord='y', iteration=500, theta=None )
print( Ey_3d.ndim )
```
- Finally, in cylindrical geometry, users can also choose the coordinates `r` and `t` for the radial and azimuthal components of the fields. For instance:
```
Er, info_Er = ts_circ.get_field( field='E', coord='r', iteration=500, m=0,
plot=True, theta=0.5)
```
# Chapter 3. Basic Templates and Views
## 3.1 Introduction
### 3.1.1 Project "Thermos"
Social bookmarking site
### 3.1.2 An index page
(1) Generate HTML with Jinja2 template engine
(2) Add styling (CSS) using static content
### 3.1.3 A simple form for adding bookmarks
(1) Just HTML, no back-end logic yet
(2) Uniform styling for both pages with template inheritance
(3) Create maintainable links with `url_for`.
### 3.1.4 Custom error pages
## 3.2 Demo: starting a project
Go to http://www.initializr.com for a simple responsive HTML5 Boilerplate-based template.
```
# SAVE AS thermos.py.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from flask import Flask, render_template, url_for
app = Flask(__name__)
@app.route('/')
@app.route('/index')
def index():
return render_template('index.html')
if __name__ == '__main__':
app.run()
```
(1) To debug flask,
```python
app.run(debug=True)
```
(2) Fix the references to css and js files.
## 3.3 Review: debug mode and render_template
### 3.3.1 Application object
```python
app = Flask(__name__)
```
### 3.3.2 Run with debugging
```python
app.run(debug=True)
```
### 3.3.3 Rendering a template
```
import flask
help(flask.render_template)
```
(1) HTML templates by default are in the directory `templates`.
(2) Don't forget to return the result of `render_template` as the HTTP response; otherwise you will get an Internal Server Error.
## 3.4 Demo: HTML templates with Jinja2
### 3.4.1 Pass string parameters to the template via Jinja2.
In `index.html`:
```html
......
<div class="main-container">
<div class="main wrapper clearfix">
<article>
<header>
<h1>Welcome</h1>
<p>pytwis is a simple twitter clone powered by python, flask, and redis.</p>
</header>
<section>
<h2>Title: {{ title }}</h2>
<p>Text: {{ text }}</p>
</section>
......
```
```
# SAVE AS thermos.py.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from flask import Flask, render_template, url_for
app = Flask(__name__)
@app.route('/')
@app.route('/index')
def index():
return render_template('index.html', title='Title passed from view to template', text='Text passed from view to template')
if __name__ == '__main__':
app.run()
```
### 3.4.2 Pass a list to the template via Jinja2.
In `index.html`:
```html
......
<div class="main-container">
<div class="main wrapper clearfix">
<article>
<header>
<h1>Welcome</h1>
<p>pytwis is a simple twitter clone powered by python, flask, and redis.</p>
</header>
<section>
<h2>Title: {{ title }}</h2>
<p>Text: {{ text[1] }}</p>
</section>
......
```
```
# SAVE AS thermos.py.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from flask import Flask, render_template, url_for
app = Flask(__name__)
@app.route('/')
@app.route('/index')
def index():
return render_template('index.html',
title='Title passed from view to template',
text=['first', 'second', 'third'])
if __name__ == '__main__':
app.run()
```
### 3.4.3 Pass an object to the template via Jinja2.
In index.html
```html
......
<div class="main-container">
<div class="main wrapper clearfix">
<article>
<header>
<h1>Welcome</h1>
<p>pytwis is a simple twitter clone powered by python, flask, and redis.</p>
</header>
<section>
<h2>Title: {{ title }}</h2>
<p>Text: {{ user.firstname }} {{ user.lastname }}</p>
</section>
......
```
```
# SAVE AS thermos.py.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from flask import Flask, render_template, url_for
class User:
def __init__(self, firstname, lastname):
self.firstname = firstname
self.lastname = lastname
def initials(self):
return "{}. {}.".format(self.firstname[0], self.lastname[0])
app = Flask(__name__)
@app.route('/')
@app.route('/index')
def index():
return render_template('index.html',
title='Title passed from view to template',
user=User('Wei', 'Ren'))
if __name__ == '__main__':
app.run()
```
## 3.5 Review: Jinja2 basics
(1) `{{ var }}` renders value of var
```python
render_template(index.html, var="hello")
```
(2) Dot notation: `{{ var.x }}`
* Lookup an attribute `x` on `var`.
* Lookup an item `x` in `var`.
* Not found? Empty output.
(3) We can call functions on objects as well:
`{{ var.fn() }}`, which can also take arguments (see the example below)
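For example, with the `User` object passed to the template earlier, one could render the result of its `initials()` method:
```html
<p>Initials: {{ user.initials() }}</p>
```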
## 3.6 Demo: Use url_for to generate links
(1) Create a new simple template page add.html with no style.
(2) Add the view for handling the route "/add".
(3) Use `url_for` to create a URL link at index.html.
```
# SAVE AS thermos.py
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from flask import Flask, render_template, url_for
class User:
def __init__(self, firstname, lastname):
self.firstname = firstname
self.lastname = lastname
def initials(self):
return "{}. {}.".format(self.firstname[0], self.lastname[0])
app = Flask(__name__)
@app.route('/')
@app.route('/index')
def index():
return render_template('index.html',
title='Title passed from view to template',
user=User('Wei', 'Ren'))
@app.route('/add')
def add():
return render_template('add.html')
if __name__ == '__main__':
app.run()
```
## 3.7 Template inheritance
(1) Create a new template called `base.html` which can be regarded as the parent template of all templates.
(2) Modify `index.html` and `add.html` to **extend** from `base.html`, as sketched below.
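A minimal sketch of what these templates might look like (the markup and the block name `content` are illustrative, not the exact files from the demo):
```html
<!-- base.html (sketch): shared layout with an overridable block -->
<html>
  <head><title>Thermos</title></head>
  <body>
    {% block content %}{% endblock %}
  </body>
</html>

<!-- add.html (sketch): inherits the layout and only fills in the content block -->
{% extends "base.html" %}
{% block content %}
  <h1>Add a bookmark</h1>
{% endblock %}
```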
## 3.8 Review: Maintainable links with `url_for`
(1) `url_for(view)`
* Generates a URL to the given view
* Can pass arguments to the view
(2) `url_for` is available in template context.
(3) Why use `url_for` instead of a simple link?
* More maintainable: URL is only defined in `@app.route` call.
* Handles escaping of special characters and Unicode data transparently.
(4) Use `url_for` for all the static content.
```python
>>> from thermos import app
>>> app.url_map
Map([<Rule '/index' (HEAD, GET, OPTIONS) -> index>,
<Rule '/add' (HEAD, GET, OPTIONS) -> add>,
<Rule '/' (HEAD, GET, OPTIONS) -> index>,
<Rule '/static/<filename>' (HEAD, GET, OPTIONS) -> static>])
```
For the static content, `url_for` will reverse the mapping from the files to the endpoint `static`.
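For example, a stylesheet link in a template can be generated like this (the file path is illustrative):
```html
<link rel="stylesheet" href="{{ url_for('static', filename='css/main.css') }}">
```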
## 3.9 Review: Template inheritance
(1) `{% extends "base.html" %}`
* Extend a base template
* Must be the first tag in the child template
(2) `{% block content %}...{% endblock %}`
Defines a block that can be overridden by child templates
## 3.10 Demo: Custom error pages
Make a copy of `404.html`, name it `500.html`, and replace the error message with the one for error code `500`.
```
# SAVE AS thermos.py
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from flask import Flask, render_template, url_for
class User:
def __init__(self, firstname, lastname):
self.firstname = firstname
self.lastname = lastname
def initials(self):
return "{}. {}.".format(self.firstname[0], self.lastname[0])
app = Flask(__name__)
@app.route('/')
@app.route('/index')
def index():
return render_template('index.html',
title='Title passed from view to template',
user=User('Wei', 'Ren'))
@app.route('/add')
def add():
return render_template('add.html')
@app.errorhandler(404)
def page_not_found(e):
    return render_template('404.html'), 404
@app.errorhandler(500)
def server_error(e):
    return render_template('500.html'), 500
if __name__ == '__main__':
app.run()
```
## 3.11 Resources and summary
### 3.11.1 Resources
(1) Jinja2
http://jinja.pocoo.org
(2) Initializr
http://www.initializr.com
(3) Flask URL building
http://flask.pocoo.org/docs/quickstart/#url-building
(4) Flask Bootstrap extension
https://pypi.python.org/pypi/Flask-Bootstrap
### 3.11.2 Summary
# Ray RLlib Multi-Armed Bandits - Exploration-Exploitation Strategies
© 2019-2022, Anyscale. All Rights Reserved

What strategy should we follow for selecting actions that balance the exploration-exploitation tradeoff, yielding the maximum average reward over time? This is the core challenge of RL/bandit algorithms.
This lesson has two goals, to give you an intuitive sense of what makes a good algorithm and to introduce several popular examples.
> **Tip:** For the first time through this material, you may wish to focus on the first goal, developing an intuitive sense of the requirements for a good algorithm. Come back later to explore the details of the algorithms discussed.
So, at least, read through the first sections, stopping at _UCB in More Detail_ under _Upper Confidence Bound_.
## What Makes a Good Exploration-Exploitation Algorithm?
Let's first assume we are considering only stationary bandits. The ideal algorithm achieves these properties:
1. It explores all the actions reasonably aggressively.
2. When exploring, it picks the action most likely to produce an optimal reward, rather than making random choices.
3. It converges quickly to the action that optimizes the mean reward.
4. It stops exploration once the optimal action is known and just exploits!
For non-stationary and context bandits, the optimal action will likely change over time, so some exploration may always be needed.
## Popular Algorithms
With these properties in mind, let's briefly discuss four algorithms. We'll use two of them in examples over several subsequent lessons.
### $\epsilon$-Greedy
One possible strategy is quite simple, called $\epsilon$-Greedy, where $\epsilon$ is a small number that determines how frequently exploration is done. The best-known action is exploited most of the time ("greedily"), governed by probability $1 - \epsilon$ (i.e., in percentage terms $100*(1 - \epsilon)$%). With probability $\epsilon$, an action is picked at random in the hopes of finding a new action that provides even better rewards.
Typical values of $\epsilon$ are between 0.01 and 0.1. A larger value, like 0.1, explores more aggressively and finds the optimal policy more quickly, but afterwards the aggressive exploration strategy becomes a liability, as it only selects the optimal action ~90% of the time, continuing excessive exploration that is now counterproductive. In contrast, smaller values, like 0.01, are slower to find the optimal policy, but once found continue to select it ~99% of the time, so over time the mean reward is _higher_ for _smaller_ $\epsilon$ values, as the optimal action is selected more often.
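As a minimal illustration (not RLlib's implementation), here is a sketch of $\epsilon$-Greedy action selection over a set of estimated mean rewards:
```
import numpy as np

def epsilon_greedy_action(q_estimates, epsilon=0.1, rng=np.random.default_rng()):
    """With probability epsilon explore (random arm), otherwise exploit the best-known arm."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_estimates)))   # explore
    return int(np.argmax(q_estimates))               # exploit

# Example: estimated mean rewards for 5 arms
q = [0.2, 1.5, 0.7, 1.1, 0.3]
print([epsilon_greedy_action(q, epsilon=0.1) for _ in range(10)])
```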
How does $\epsilon$-Greedy stack up against our desired properties?
1. The higher the $\epsilon$ value, the more quickly the action space is explored.
2. It randomly picks the next action, so there is no "intelligence" involved in optimizing the choice.
3. The higher the $\epsilon$ value, the more quickly the optimal action is found.
4. Just as this algorithm makes no attempt to optimize the choice of action during exploration, it makes no attempt to throttle back exploration when the optimal value is found.
To address point 4, you could adopt an enhancement that decays the $\epsilon$ value over time, rather than keeping it fixed.
See [Wikipedia - MAB Approximate Solutions](https://en.wikipedia.org/wiki/Multi-armed_bandit) and [Sutton 2018](https://mitpress.mit.edu/books/reinforcement-learning-second-edition) for more information.
### Upper Confidence Bound
A limitation about $\epsilon$-greedy is that exploration is done indiscriminately. Is it possible to make a more informed choice about which alternative actions are more likely to yield a good result, so we preferentially pick one of them? That's what the Upper Confidence Bound (UCB) algorithm attempts to do. It weights some choices over others.
It's worth looking at the formula that governs the choice for the next action at time $t$:
$$A_t \doteq \underset{a}{\operatorname{argmax}}\bigg[ Q_t(a) + c\sqrt{ \dfrac{\ln(t)}{N_t(a)} }\bigg]$$
It's not essential to fully understand all the details, but here is the gist of it; the best action to take at time $t$, $A_t$, is decided by picking the best known action for returning the highest value (the $Q_t(a)$ term in the brackets [...] computes this), but with a correction that encourages exploration, especially for smaller $t$, but penalizing particular actions $a$ if we've already picked them a lot previously (the second term starting with a constant $c$ that governs the "strength" of this correction).
UCB is one of the best performing algorithms [Sutton 2018](https://mitpress.mit.edu/books/reinforcement-learning-second-edition). How does it stack up against our desired properties?
1. Exploration is reasonably quick, governed by the $c$ hyperparameter for the "correction term".
2. It attempts to pick a good action when exploring, rather than randomly.
3. Finding the optimal action occurs efficiently, governed by the constant $c$.
4. The $\ln(t)$ factor in the correction term grows more slowly over time relative to the counts $N_t(a)$, so exploration occurs less frequently at longer time scales.
Because UCB is based on prior measured results, it is an example of a _Frequentist_ approach that is _model free_, meaning we just measure outcomes, we don't build a model to explain the environment.
#### UCB in More Detail
Let's explain the equation in more detail. If you are just interested in developing an intuition about strategies, this is a good place to stop and go to the next lesson, [Simple Multi-Armed-Bandit](03-Simple-Multi-Armed-Bandit.ipynb).
* $A_t$ is the action we want to select at time $t$, the action that is most likely to produce the best reward or most likely to be worth exploring.
* For all the actions we can choose from, we pick the action $a$ that maximizes the formula in the brackets [...].
* $Q_t(a)$ is any equation we're using to measure the "value" received at time $t$ for action $a$. This is the greedy choice, i.e., the equation that tells us which action $a$ we currently know will give us the highest value. If we never wanted to explore, the second term in the brackets wouldn't exist. $Q_t(a)$ alone would always tell us to pick the best action we already know about. (The use of $Q$ comes from an early RL algorithm called _Q learning_ that models the _value_ returned from actions over time.)
* The second term in the brackets is the correction that UCB gives us. As time $t$ increases, the natural log of $t$ also increases, but slower and slower for larger $t$. This is good because we hope we will find the optimal action at some earlier time $t$, so exploration at large $t$ is less useful (as long as the bandit is stationary or slowly changing). However, the denominator, $N_t(a)$ is the number of times we've selected $a$ already. The more times we've already tried $a$, the less "interesting" it is to try again, so this term penalizes choosing $a$. Finally, $c$ is a constant, a "knob" or _hyperparameter_ that determines how much we weight exploration vs. exploitation.
When we use UCB in subsequent lessons, we'll use a simple _linear_ equation for $Q_t(a)$, i.e., something of the form $z = ax + by + c$.
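Before then, here is a minimal sketch of the tabular selection rule itself (RLlib's LinUCB replaces the per-arm averages with a linear model):
```
import numpy as np

def ucb_action(q_estimates, counts, t, c=2.0):
    """Pick argmax_a [ Q_t(a) + c * sqrt(ln(t) / N_t(a)) ]; untried arms are chosen first."""
    q = np.asarray(q_estimates, dtype=float)
    n = np.asarray(counts, dtype=float)
    untried = np.flatnonzero(n == 0)
    if untried.size > 0:          # make sure every arm is tried at least once
        return int(untried[0])
    return int(np.argmax(q + c * np.sqrt(np.log(t) / n)))

print(ucb_action(q_estimates=[0.2, 1.5, 0.7], counts=[3, 2, 1], t=6))
```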
See [Wikipedia - MAB Approximate solutions for contextual bandit](https://en.wikipedia.org/wiki/Multi-armed_bandit), [these references](../06-RL-References.ipynb#Upper-Confidence-Bound), and the [RLlib documentation](https://docs.ray.io/en/latest/rllib-algorithms.html?highlight=greedy#linear-upper-confidence-bound-contrib-linucb) for more information.
### Thompson Sampling
Thompson sampling, developed in the 30s, is similar to UCB in that it picks the action that is believed to have the highest potential of maximum reward. It is a _Bayesian, model-based_ approach, where the model is the posterior distribution and may incorporate prior belief about the environment.
The agent samples weights for each action, using their posterior distributions, and chooses the action that produces the highest reward. Calculating the exact posterior is intractable in most cases, so it is usually approximated. Hence, the algorithm models beliefs about the problem. Then, during each iteration, the agent draws a random belief from the posterior and acts optimally based on it.
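As a minimal illustration, here is a sketch of Thompson Sampling for Bernoulli-reward arms, with a Beta posterior per arm (the linear variant used in later lessons is more involved):
```
import numpy as np

rng = np.random.default_rng(0)
true_probs = [0.2, 0.5, 0.8]          # unknown to the agent
successes = np.ones(3)                # Beta posterior parameters (alpha)
failures = np.ones(3)                 # Beta posterior parameters (beta)

for _ in range(1000):
    samples = rng.beta(successes, failures)   # one draw from each arm's posterior
    a = int(np.argmax(samples))               # act greedily w.r.t. the sampled beliefs
    reward = rng.random() < true_probs[a]
    successes[a] += reward
    failures[a] += 1 - reward

print(successes / (successes + failures))     # posterior mean estimate per arm
```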
One trade-off is that Thompson Sampling requires an accurate model of the past policy and may suffer from large variance when the past policy differs significantly from a policy being evaluated. You may observe this if you rerun experiments in subsequent lessons that use Thompson Sampling. The graphs of rewards and especially the ranges from high to low, may change significantly from run to run.
Relatively speaking, the Thompson Sampling exploration strategies are newer than UCB and tend to perform better (as we'll see in subsequent lessons), although the math for their theoretical performance is less rigorous than for UCB.
For more information, see [Wikipedia](https://en.wikipedia.org/wiki/Thompson_sampling), [A Tutorial on Thompson Sampling](https://web.stanford.edu/~bvr/pubs/TS_Tutorial.pdf), [RLlib documentation](https://docs.ray.io/en/latest/rllib-algorithms.html?highlight=greedy#linear-thompson-sampling-contrib-lints), and other references in [RL References](../References-Reinforcement-Learning.ipynb).
### Gradient Bandit Algorithms
Focusing explicitly on rewards isn't the only approach. What if we use a more general measure, a _preference_, for selecting an action $a$ at time $t$? We'll use $H_t(a)$ to represent this preference at time $t$ for action $a$. We need to model this so we have a probability of selecting an action $a$. Using the _soft-max distribution_ works, also known as the Gibbs or Boltzmann distribution:
$Pr\{A_t = a\} \doteq \frac{e^{H_t(a)}}{\sum^{k}_{b=1}e^{H_t(b)}} \doteq \pi_t(a)$
$\pi_t(a)$ is defined to encapsulate this formula for the probability of taking action $a$ at time $t$.
The term _gradient_ is used for this algorithm because the training update formula for $H_t(a)$ is very similar to the _stochastic gradient descent_ formula used in other ML problems.
After an action $A_t$ is selected at a time $t$ and reward $R_t$ is received, the action preferences are updated as follows:
$ H_{t+1}(A_t) \doteq H_t(A_t) + \alpha(R_t - \overset{\_}{R_t})(1 - \pi_t(A_t))$, and
$ H_{t+1}(a) \doteq H_t(a) - \alpha(R_t - \overset{\_}{R_t})(\pi_t(a))$, for all $a \ne A_t$
where $H_0(a)$ values are initialized to zero, $\alpha > 0$ is a step size parameter and $\overset{\_}{R_t}$ is the average of all the rewards up through and including time $t$. Note that if $R_t - \overset{\_}{R_t}$ is positive, meaning the current reward is larger than the average, the preference $H(A_t)$ increases. Otherwise, it decreases.
Note the plus vs. minus signs in the two equations before the $\alpha$ term. If our preference for $A_t$ increases, our preferences for the other actions should decrease.
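A minimal sketch of this update rule (illustrative only, not RLlib code):
```
import numpy as np

def softmax(h):
    e = np.exp(h - np.max(h))            # shift by the max for numerical stability
    return e / e.sum()

def gradient_bandit_update(h, action, reward, avg_reward, alpha=0.1):
    """Apply the preference update above after taking `action` and observing `reward`."""
    pi = softmax(h)
    h = h - alpha * (reward - avg_reward) * pi    # every preference moves by -alpha*(R - Rbar)*pi
    h[action] += alpha * (reward - avg_reward)    # net effect on the chosen arm: +alpha*(R - Rbar)*(1 - pi)
    return h

print(gradient_bandit_update(np.zeros(3), action=1, reward=1.0, avg_reward=0.2))
```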
How do gradient bandit algorithms satisfy our desired properties?
1. As shown, this algorithm doesn't have tuning parameters to control the rate of exploration or convergence to the optimal solution. However, the convergence is reasonably quick if the variance in reward values is relatively high, so that the difference $R_t - \overset{\_}{R_t}$ is also relatively large for low $t$ values.
2. It attempts to pick a good action when exploring, rather than randomly.
3. See 1.
4. As $\overset{\_}{R_t}$ converges to a maximum, the difference $R_t - \overset{\_}{R_t}$ and hence all the preference values $H_t(a)$ will become relatively stationary, with the optimal action having the highest $H$. Since the $H_t(a)$ values govern the probability of being selected, based on the _soft-max distribution_, if the optimal action has a significantly higher $H_t(a)$ than the other actions, it will be chosen most frequently. If the differences between $H_t(a)$ values are not large, then several will be chosen frequently, but that also means their rewards are relatively close. Hence, in either case, the average reward over time will still be close to optimal.
There are many more details about gradient bandit algorithms, but we won't discuss them further here. See [Sutton 2018](https://mitpress.mit.edu/books/reinforcement-learning-second-edition) for the details.
|
github_jupyter
|
# Ray RLlib Multi-Armed Bandits - Exploration-Exploitation Strategies
© 2019-2022, Anyscale. All Rights Reserved

What strategy should we follow for selecting actions that balance the exploration-exploitation tradeoff, yielding the maximum average reward over time? This is the core challenge of RL/bandit algorithms.
This lesson has two goals, to give you an intuitive sense of what makes a good algorithm and to introduce several popular examples.
> **Tip:** For the first time through this material, you may wish to focus on the first goal, developing an intuitive sense of the requirements for a good algorithm. Come back later to explore the details of the algorithms discussed.
So, at least, read through the first sections, stopping at _UCB in More Detail_ under _Upper Confidence Bound_.
## What Makes a Good Exploration-Exploitation Algorithm?
Let's first assume we are considering only stationary bandits. The ideal algorithm achieves these properties:
1. It explores all the actions reasonably aggressively.
2. When exploring, it picks the action most likely to produce an optimal reward, rather than making random choices.
3. It converges quickly to the action that optimizes the mean reward.
4. It stops exploration once the optimal action is known and just exploits!
For non-stationary and contextual bandits, the optimal action will likely change over time, so some exploration may always be needed.
## Popular Algorithms
With these properties in mind, let's briefly discuss four algorithms. We'll use two of them in examples over several subsequent lessons.
### $\epsilon$-Greedy
One possible strategy is quite simple, called $\epsilon$-Greedy, where $\epsilon$ is a small number that determines how frequently exploration is done. The best-known action is exploited most of the time ("greedily"), governed by probability $1 - \epsilon$ (i.e., in percentage terms $100*(1 - \epsilon)$%). With probability $\epsilon$, an action is picked at random in the hopes of finding a new action that provides even better rewards.
Typical values of $\epsilon$ are between 0.01 and 0.1. A larger value, like 0.1, explores more aggressively and finds the optimal policy more quickly, but afterwards the aggressive exploration strategy becomes a liability, as it only selects the optimal action ~90% of the time, continuing excessive exploration that is now counterproductive. In contrast, smaller values, like 0.01, are slower to find the optimal policy, but once found continue to select it ~99% of the time, so over time the mean reward is _higher_ for _smaller_ $\epsilon$ values, as the optimal action is selected more often.
How does $\epsilon$-Greedy stack up against our desired properties?
1. The higher the $\epsilon$ value, the more quickly the action space is explored.
2. It randomly picks the next action, so there is no "intelligence" involved in optimizing the choice.
3. The higher the $\epsilon$ value, the more quickly the optimal action is found.
4. Just as this algorithm makes no attempt to optimize the choice of action during exploration, it makes no attempt to throttle back exploration when the optimal value is found.
To address point 4, you could adopt an enhancement that decays the $\epsilon$ value over time, rather than keeping it fixed.
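For intuition, here is a minimal NumPy sketch of $\epsilon$-Greedy action selection with running-average value estimates. It is a toy illustration, not the RLlib implementation, and the arm reward means below are made-up numbers.
```
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy_action(q_estimates, epsilon=0.1):
    """With probability epsilon pick a random arm, otherwise the greedy arm."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_estimates)))   # explore
    return int(np.argmax(q_estimates))               # exploit

# Toy usage: incremental (running-average) value estimates for 4 arms.
q, counts = np.zeros(4), np.zeros(4)
true_means = np.array([0.2, 0.5, 0.7, 0.4])          # hypothetical arm means
for t in range(1000):
    a = epsilon_greedy_action(q, epsilon=0.1)
    r = rng.normal(true_means[a], 1.0)
    counts[a] += 1
    q[a] += (r - q[a]) / counts[a]                   # update the running mean
```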
See [Wikipedia - MAB Approximate Solutions](https://en.wikipedia.org/wiki/Multi-armed_bandit) and [Sutton 2018](https://mitpress.mit.edu/books/reinforcement-learning-second-edition) for more information.
### Upper Confidence Bound
A limitation of $\epsilon$-greedy is that exploration is done indiscriminately. Is it possible to make a more informed choice about which alternative actions are more likely to yield a good result, so we preferentially pick one of them? That's what the Upper Confidence Bound (UCB) algorithm attempts to do. It weights some choices over others.
It's worth looking at the formula that governs the choice for the next action at time $t$:
$$A_t \doteq \underset{a}{\operatorname{argmax}}\bigg[ Q_t(a) + c\sqrt{ \dfrac{\ln(t)}{N_t(a)} }\bigg]$$
It's not essential to fully understand all the details, but here is the gist of it; the best action to take at time $t$, $A_t$, is decided by picking the best known action for returning the highest value (the $Q_t(a)$ term in the brackets [...] computes this), but with a correction that encourages exploration, especially for smaller $t$, but penalizing particular actions $a$ if we've already picked them a lot previously (the second term starting with a constant $c$ that governs the "strength" of this correction).
UCB is one of the best performing algorithms [Sutton 2018](https://mitpress.mit.edu/books/reinforcement-learning-second-edition). How does it stack up against our desired properties?
1. Exploration is reasonably quick, governed by the $c$ hyperparameter for the "correction term".
2. It attempts to pick a good action when exploring, rather than randomly.
3. Finding the optimal action occurs efficiently, governed by the constant $c$.
4. The $\ln(t)$ factor in the correction term grows more slowly over time relative to the counts $N_t(a)$, so exploration occurs less frequently at longer time scales.
Because UCB is based on prior measured results, it is an example of a _Frequentist_ approach that is _model free_, meaning we just measure outcomes, we don't build a model to explain the environment.
#### UCB in More Detail
Let's explain the equation in more detail. If you are just interested in developing an intuition about strategies, this is a good place to stop and go to the next lesson, [Simple Multi-Armed-Bandit](03-Simple-Multi-Armed-Bandit.ipynb).
* $A_t$ is the action we want to select at time $t$, the action that is most likely to produce the best reward or most likely to be worth exploring.
* For all the actions we can choose from, we pick the action $a$ that maximizes the formula in the brackets [...].
* $Q_t(a)$ is any equation we're using to measure the "value" received at time $t$ for action $a$. This is the greedy choice, i.e., the equation that tells us which action $a$ we currently know will give us the highest value. If we never wanted to explore, the second term in the brackets wouldn't exist. $Q_t(a)$ alone would always tell us to pick the best action we already know about. (The use of $Q$ comes from an early RL algorithm called _Q learning_ that models the _value_ returned from actions over time.)
* The second term in the brackets is the correction that UCB gives us. As time $t$ increases, the natural log of $t$ also increases, but slower and slower for larger $t$. This is good because we hope we will find the optimal action at some earlier time $t$, so exploration at large $t$ is less useful (as long as the bandit is stationary or slowly changing). However, the denominator, $N_t(a)$ is the number of times we've selected $a$ already. The more times we've already tried $a$, the less "interesting" it is to try again, so this term penalizes choosing $a$. Finally, $c$ is a constant, a "knob" or _hyperparameter_ that determines how much we weight exploration vs. exploitation.
When we use UCB in subsequent lessons, we'll use a simple _linear_ equation for $Q_t(a)$, i.e., something of the form $z = ax + by + c$.
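For intuition, here is a minimal tabular sketch of the selection rule above. It is a toy illustration, not RLlib's LinUCB, which learns a linear model for $Q_t(a)$.
```
import numpy as np

def ucb_action(q, counts, t, c=2.0):
    """UCB1-style choice: greedy estimate plus an exploration bonus."""
    untried = np.flatnonzero(counts == 0)
    if untried.size > 0:                      # try each arm once so N_t(a) > 0
        return int(untried[0])
    bonus = c * np.sqrt(np.log(t) / counts)   # the correction term
    return int(np.argmax(q + bonus))
```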
See [Wikipedia - MAB Approximate solutions for contextual bandit](https://en.wikipedia.org/wiki/Multi-armed_bandit), [these references](../06-RL-References.ipynb#Upper-Confidence-Bound), and the [RLlib documentation](https://docs.ray.io/en/latest/rllib-algorithms.html?highlight=greedy#linear-upper-confidence-bound-contrib-linucb) for more information.
### Thompson Sampling
Thompson sampling, developed in the 30s, is similar to UCB in that it picks the action that is believed to have the highest potential of maximum reward. It is a _Bayesian, model-based_ approach, where the model is the posterior distribution and may incorporate prior belief about the environment.
The agent samples weights for each action, using their posterior distributions, and chooses the action with the highest sampled reward. Calculating the exact posterior is intractable in most cases, so it is usually approximated. Hence, the algorithm models beliefs about the problem. Then, during each iteration, the agent draws a random belief from the posterior and acts optimally based on it.
One trade-off is that Thompson Sampling requires an accurate model of the past policy and may suffer from large variance when the past policy differs significantly from a policy being evaluated. You may observe this if you rerun experiments in subsequent lessons that use Thompson Sampling. The graphs of rewards and especially the ranges from high to low, may change significantly from run to run.
Relatively speaking, the Thompson Sampling exploration strategies are newer than UCB and tend to perform better (as we'll see in subsequent lessons), although the math for their theoretical performance is less rigorous than for UCB.
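For intuition, here is a minimal Beta-Bernoulli sketch of the idea. It is a toy illustration; the LinTS algorithm RLlib uses in later lessons is a linear, contextual variant, not this version.
```
import numpy as np

rng = np.random.default_rng(0)
successes = np.ones(4)       # Beta(1, 1) priors for 4 Bernoulli arms
failures = np.ones(4)

def thompson_action():
    """Sample one value per arm from its posterior, then act greedily on the samples."""
    samples = rng.beta(successes, failures)
    return int(np.argmax(samples))

def thompson_update(a, r):
    """Update the posterior for arm a after observing a reward r in {0, 1}."""
    successes[a] += r
    failures[a] += 1 - r
```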
For more information, see [Wikipedia](https://en.wikipedia.org/wiki/Thompson_sampling), [A Tutorial on Thompson Sampling](https://web.stanford.edu/~bvr/pubs/TS_Tutorial.pdf), [RLlib documentation](https://docs.ray.io/en/latest/rllib-algorithms.html?highlight=greedy#linear-thompson-sampling-contrib-lints), and other references in [RL References](../References-Reinforcement-Learning.ipynb).
### Gradient Bandit Algorithms
Focusing explicitly on rewards isn't the only approach. What if we use a more general measure, a _preference_, for selecting an action $a$ at time $t$? We'll use $H_t(a)$ to represent this preference at time $t$ for action $a$. We need to model this so we have a probability of selecting an action $a$. Using the _soft-max distribution_ works, also known as the Gibbs or Boltzmann distribution:
$Pr\{A_t = a\} \doteq \frac{e^{H_t(a)}}{\sum^{k}_{b=1}e^{H_t(b)}} \doteq \pi_t(a)$
$\pi_t(a)$ is defined to encapsulate this formula for the probability of taking action $a$ at time $t$.
The term _gradient_ is used for this algorithm because the training update formula for $H_t(a)$ is very similar to the _stochastic gradient descent_ formula used in other ML problems.
After an action $A_t$ is selected at a time $t$ and reward $R_t$ is received, the action preferences are updated as follows:
$ H_{t+1}(A_t) \doteq H_t(A_t) + \alpha(R_t - \overset{\_}{R_t})(1 - \pi_t(A_t))$, and
$ H_{t+1}(a) \doteq H_t(a) - \alpha(R_t - \overset{\_}{R_t})(\pi_t(a))$, for all $a \ne A_t$
where $H_0(a)$ values are initialized to zero, $\alpha > 0$ is a step size parameter and $\overset{\_}{R_t}$ is the average of all the rewards up through and including time $t$. Note that if $R_t - \overset{\_}{R_t}$ is positive, meaning the current reward is larger than the average, the preference $H(A_t)$ increases. Otherwise, it decreases.
Note the plus vs. minus signs in the two equations before the $\alpha$ term. If our preference for $A_t$ increases, our preferences for the other actions should decrease.
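A minimal NumPy sketch of these two update rules (a toy illustration, not an RLlib implementation):
```
import numpy as np

rng = np.random.default_rng(0)
k = 4
H = np.zeros(k)              # preferences, H_0(a) = 0
r_bar, t, alpha = 0.0, 0, 0.1

def softmax(h):
    e = np.exp(h - h.max())          # subtract the max for numerical stability
    return e / e.sum()

def gradient_bandit_step(reward_fn):
    """One preference update, given a callable that returns the reward for an action."""
    global r_bar, t
    t += 1
    pi = softmax(H)
    a = int(rng.choice(k, p=pi))
    r = reward_fn(a)
    r_bar += (r - r_bar) / t         # running average of all rewards so far
    indicator = np.zeros(k)
    indicator[a] = 1.0
    H[:] = H + alpha * (r - r_bar) * (indicator - pi)   # both update rules at once
    return a, r

# example: a, r = gradient_bandit_step(lambda a: rng.normal([0.2, 0.5, 0.7, 0.4][a], 1.0))
```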
How do gradient bandit algorithms satisfy our desired properties?
1. As shown, this algorithm doesn't have tuning parameters to control the rate of exploration or convergence to the optimal solution. However, the convergence is reasonably quick if the variance in reward values is relatively high, so that the difference $R_t - \overset{\_}{R_t}$ is also relatively large for low $t$ values.
2. It attempts to pick a good action when exploring, rather than randomly.
3. See 1.
4. As $\overset{\_}{R_t}$ converges to a maximum, the difference $R_t - \overset{\_}{R_t}$ and hence all the preference values $H_t(a)$ will become relatively stationary, with the optimal action having the highest $H$. Since the $H_t(a)$ values govern the probability of being selected, based on the _soft-max distribution_, if the optimal action has a significantly higher $H_t(a)$ than the other actions, it will be chosen most frequently. If the differences between $H_t(a)$ values are not large, then several will be chosen frequently, but that also means their rewards are relatively close. Hence, in either case, the average reward over time will still be close to optimal.
There are many more details about gradient bandit algorithms, but we won't discuss them further here. See [Sutton 2018](https://mitpress.mit.edu/books/reinforcement-learning-second-edition) for the details.
| 0.935199 | 0.991692 |
# Appropriate figure sizes
```
import matplotlib.pyplot as plt
from tueplots import figsizes
# Increase the resolution of all the plots below
plt.rcParams.update({"figure.dpi": 150})
```
Figure sizes are tuples. They describe the figure sizes in `inches`, just like what matplotlib expects.
Outputs of `figsize` functions are dictionaries that match `rcParams`.
```
icml_size = figsizes.icml2022_full()
icml_size
```
We can use them to make differently sized figures. The height-to-width ratio is (loosely) based on the golden ratio.
```
fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
plt.show()
```
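If you prefer not to change `plt.rcParams` globally, the same dictionary can also be applied temporarily with a context manager. This is an optional sketch, not part of the original example set:
```
# Apply a size dictionary only for the plots created inside the context.
with plt.rc_context(icml_size):
    fig, ax = plt.subplots()
    ax.plot([1.0, 2.0], [3.0, 4.0])
    plt.show()
```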
### Figure sizes that match your latex template
```
plt.rcParams.update(figsizes.icml2022_full())
fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
plt.show()
plt.rcParams.update(figsizes.neurips2021())
fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
plt.show()
plt.rcParams.update(figsizes.aistats2022_full())
fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
plt.show()
```
For double-column layouts such as ICML or AISTATS, there is also a single-column (i.e. half-width) version:
```
plt.rcParams.update(figsizes.icml2022_half())
fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
plt.show()
plt.rcParams.update(figsizes.aistats2022_half())
fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
plt.show()
```
### Figure sizes that match your subplot layouts
When working with `plt.subplots`, provide the `nrows` and `ncols` also to the figsize functions to get consistent subplot sizes by adjusting the overall figure height.
Why? Because each subplot is fixed to a specific format (usually the golden ratio), and the figure width is commonly tied to the specific journal style.
The remaining degree of freedom, the overall figure height, is adapted to make things look clean.
```
plt.rcParams.update(figsizes.neurips2021(nrows=1, ncols=3))
fig, axes = plt.subplots(nrows=1, ncols=3, sharex=True, sharey=True)
for ax in axes.flatten():
ax.plot([1.0, 2.0], [3.0, 4.0])
plt.show()
plt.rcParams.update(figsizes.neurips2021(nrows=2, ncols=3))
fig, axes = plt.subplots(nrows=2, ncols=3, sharex=True, sharey=True)
for ax in axes.flatten():
ax.plot([1.0, 2.0], [3.0, 4.0])
plt.show()
```
You can also customize the `height_to_width_ratio`:
```
plt.rcParams.update(figsizes.icml2022_half(nrows=2, ncols=2, height_to_width_ratio=1.0))
fig, axes = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=True)
for ax in axes.flatten():
ax.plot([1.0, 2.0], [3.0, 4.0])
plt.show()
```
|
github_jupyter
|
import matplotlib.pyplot as plt
from tueplots import figsizes
# Increase the resolution of all the plots below
plt.rcParams.update({"figure.dpi": 150})
icml_size = figsizes.icml2022_full()
icml_size
fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
plt.show()
plt.rcParams.update(figsizes.icml2022_full())
fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
plt.show()
plt.rcParams.update(figsizes.neurips2021())
fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
plt.show()
plt.rcParams.update(figsizes.aistats2022_full())
fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
plt.show()
plt.rcParams.update(figsizes.icml2022_half())
fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
plt.show()
plt.rcParams.update(figsizes.aistats2022_half())
fig, ax = plt.subplots()
ax.plot([1.0, 2.0], [3.0, 4.0])
plt.show()
plt.rcParams.update(figsizes.neurips2021(nrows=1, ncols=3))
fig, axes = plt.subplots(nrows=1, ncols=3, sharex=True, sharey=True)
for ax in axes.flatten():
ax.plot([1.0, 2.0], [3.0, 4.0])
plt.show()
plt.rcParams.update(figsizes.neurips2021(nrows=2, ncols=3))
fig, axes = plt.subplots(nrows=2, ncols=3, sharex=True, sharey=True)
for ax in axes.flatten():
ax.plot([1.0, 2.0], [3.0, 4.0])
plt.show()
plt.rcParams.update(figsizes.icml2022_half(nrows=2, ncols=2, height_to_width_ratio=1.0))
fig, axes = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=True)
for ax in axes.flatten():
ax.plot([1.0, 2.0], [3.0, 4.0])
plt.show()
| 0.708515 | 0.994112 |
# <font color=darkblue>ENGR 1330-2022-1 Exam1-Laboratory Portion </font>
**BEJARANO, PAOLA**
**R11737023**
ENGR 1330 Exam 1 - Laboratory/Programming Skills
---
**Download** (right-click, save target as ...) this page as a jupyterlab notebook from: [s22-ex1-deploy.ipynb](http://54.243.252.9/engr-1330-webroot/5-ExamProblems/Exam1/Exam1/spring2022/s22-ex1-deploy.ipynb)
**If you are unable to download the file, create an empty notebook and copy paste the problems into Markdown cells and Code cells (problem-by-problem)**
---
## Problem 1 (10 pts) : <font color = 'magenta'>*Profile your computer*</font>
Execute the code cell below exactly as written. If you get an error just continue to the remaining problems.
```
# Preamble script block to identify host, user, and kernel
import sys
! hostname
! whoami
print(sys.executable)
print(sys.version)
print(sys.version_info)
```
---
## Problem 2 (10 pts): <font color = 'magenta'>*input(),typecast, string reversal, comparison based selection, print()*</font>
Build a script where the user will supply a number, then determine if it is a palindrome number. A palindrome number is a number that is the same after reversal. For example, 545 is a palindrome number.
- Case 1: 545
- Case 2: 123
- Case 3: 666
```
# define variables
# interactive input
# computation/compare
# report result
# Case 1
num=int(input("Enter a number:"))
temp=num
rev=0
while(num>0):
dig=num%10
rev=rev*10+dig
num=num//10
if(temp==rev):
print("The number is palindrome!")
else:
print("Not a palindrome!")
# Case 2
num=int(input("Enter a number:"))
temp=num
rev=0
while(num>0):
dig=num%10
rev=rev*10+dig
num=num//10
if(temp==rev):
print("The number is palindrome!")
else:
print("Not a palindrome!")
# Case 3
num=int(input("Enter a number:"))
temp=num
rev=0
while(num>0):
dig=num%10
rev=rev*10+dig
num=num//10
if(temp==rev):
print("The number is palindrome!")
else:
print("Not a palindrome!")
```
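As a side note (not part of the graded answer above), the string reversal mentioned in the problem heading allows a more compact check; a minimal sketch:
```
num = input("Enter a number:")
if num == num[::-1]:
    print("The number is palindrome!")
else:
    print("Not a palindrome!")
```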
---
## Problem 3 (15 pts): <font color = 'magenta'>*len(),compare,accumulator, populate an empty list,for loop, print()*</font>
Two lists are defined as
```
x= [1.0,1.1,1.2,1.3,1.4,1.5,1.6,1.7,1.8]
f_of_x = [1.543,1.668,1.811,1.971,2.151,2.352,2.577,2.828,3.107]
```
Create a script that determines the length of each list and if they are the same length then print the contents of each list row-wise, and the running sum of `f_of_x` so the output looks like
```
--x-- --f_of_x-- --sum--
1.0 1.543 1.543
1.1 1.668 3.211
... ... ...
... ... ...
1.7 2.828 16.901
1.8 3.107 20.008
```
Test your script using the two lists above, then with the two lists below:
```
x= [1.0,1.1,1.2,1.3,1.4,1.5,1.6,1.7,1.8]
f_of_x =[1.543, 3.211, 5.022, 6.993, 9.144, 11.496, 14.073, 16.901, 20.008]
```
```
# define variables
# Case 1
x= [1.0,1.1,1.2,1.3,1.4,1.5,1.6,1.7,1.8]
f_of_x = [1.543,1.668,1.811,1.971,2.151,2.352,2.577,2.828,3.107]
running_sum = 0
if len(x) == len(f_of_x):
    print(" --x-- ","--f_of_x--"," --sum-- ")
    print("-------|----------|---------")
    for i in range(len(x)):
        running_sum = running_sum + f_of_x[i]
        print("%6.1f" % x[i], " |", "%8.3f" % f_of_x[i], " |", "%7.3f" % running_sum)
else:
    print("The lists are not the same length")
# define variables
# Case 2
# ...
```
---
## Problem 4 Function (15 points) : <font color = 'magenta'> *def ..., input(),typecast,arithmetic based selection, print()* </font>
Build a function that takes as input two integer numbers. The function should return their product if the product is greater than 666, otherwise the function should return their sum.
Employ the function in an interactive script and test the following cases:
- Case 1: 65 and 10
- Case 2: 66 and 11
- Case 3: 25 and 5
```
# define the function
def prod_or_sum(a, b):
    product = a * b
    if product > 666:
        return product
    else:
        return a + b

# interactive input / computation / report result
# Case 1
num1 = int(input('Enter the first integer'))
num2 = int(input('Enter the second integer'))
print('The result is:', prod_or_sum(num1, num2))
# Case 2
num1 = int(input('Enter the first integer'))
num2 = int(input('Enter the second integer'))
print('The result is:', prod_or_sum(num1, num2))
# Case 3
num1 = int(input('Enter the first integer'))
num2 = int(input('Enter the second integer'))
print('The result is:', prod_or_sum(num1, num2))
```
|
github_jupyter
|
# Preamble script block to identify host, user, and kernel
import sys
! hostname
! whoami
print(sys.executable)
print(sys.version)
print(sys.version_info)
# define variables
# interactive input
# computation/compare
# report result
# Case 1
num=int(input("Enter a number:"))
temp=num
rev=0
while(num>0):
dig=num%10
rev=rev*10+dig
num=num//10
if(temp==rev):
print("The number is palindrome!")
else:
print("Not a palindrome!")
# Case 2
num=int(input("Enter a number:"))
temp=num
rev=0
while(num>0):
dig=num%10
rev=rev*10+dig
num=num//10
if(temp==rev):
print("The number is palindrome!")
else:
print("Not a palindrome!")
# Case 3
num=int(input("Enter a number:"))
temp=num
rev=0
while(num>0):
dig=num%10
rev=rev*10+dig
num=num//10
if(temp==rev):
print("The number is palindrome!")
else:
print("Not a palindrome!")
x= [1.0,1.1,1.2,1.3,1.4,1.5,1.6,1.7,1.8]
f_of_x = [1.543,1.668,1.811,1.971,2.151,2.352,2.577,2.828,3.107]
--x-- --f_of_x-- --sum--
1.0 1.543 1.543
1.1 1.668 3.211
... ... ...
... ... ...
1.7 2.828 16.901
1.8 3.107 20.008
x= [1.0,1.1,1.2,1.3,1.4,1.5,1.6,1.7,1.8]
f_of_x =[1.543, 3.211, 5.022, 6.993, 9.144, 11.496, 14.073, 16.901, 20.008]
# define variables
# Case 1
x= [1.0,1.1,1.2,1.3,1.4,1.5,1.6,1.7,1.8]
f_of_x = [1.543,1.668,1.811,1.971,2.151,2.352,2.577,2.828,3.107]
sumOfList= []
print(" --x-- ","--f_of_x--"," --sum-- ")
print("-------|----------|---------")
for x in range(1,9,1):
print("%4.f" % x, " |", "%4.f" % x, " |", x+ f_of_x)
# define variables
# Case 2
# ...
# define variables
# interactive input
# computation/compare
# report result
# Case 1
num1 = int(input('Enter the first integer'))
num2 = int(input('Enter the second integer'))
product = num1*num2
sums = num1+num2
print('The product of both integers is:', product)
if product > 666:
print ('The sum of both integers is:',sums)
# Case 2
num1 = int(input('Enter the first integer'))
num2 = int(input('Enter the second integer'))
product = num1*num2
sums = num1+num2
print('The product of both integers is:', product)
if product > 666:
print ('The sum of both integers is:',sums)
# Case 3
num1 = int(input('Enter the first integer'))
num2 = int(input('Enter the second integer'))
product = num1*num2
sums = num1+num2
print('The product of both integers is:', product)
if product > 666:
print ('The sum of both integers is:',sums)
| 0.20268 | 0.779028 |
<a href="https://colab.research.google.com/github/Emotional-Text-to-Speech/dl-for-emo-tts/blob/master/Demo_DL_Based_Emotional_TTS.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# DL Based Emotional Text to Speech
In this demo, we provide an interface to generate emotional speech from user inputs for both the emotional label and the text.
The models that are trained are [Tacotron](https://github.com/Emotional-Text-to-Speech/tacotron_pytorch) and [DC-TTS](https://github.com/Emotional-Text-to-Speech/pytorch-dc-tts).
Further information about our approaches and *exactly how* we developed this demo can be seen [here](https://github.com/Emotional-Text-to-Speech/dl-for-emo-tts).
---
---
## Download the required code and install the dependencies
- Make sure you have clicked on ```Open in Playground``` to be able to run the cells. Set your runtime to ```GPU```. This can be done with the following steps:
- Click on ```Runtime``` on the menubar above
- Select ```Change runtime type```
- Select ```GPU``` from the ```Hardware accelerator``` dropdown and save.
- Run the cell below. It will automatically create the required directory structure. In order to run the cell, click on the **arrow** that is on the left column of the cell (hover over the ```[]``` symbol). Optionally, you can also press ```Shift + Enter ```
```
! git clone https://github.com/Emotional-Text-to-Speech/pytorch-dc-tts
! git clone --recursive https://github.com/Emotional-Text-to-Speech/tacotron_pytorch.git
! cd "tacotron_pytorch/" && pip install -e .
! pip install unidecode
! pip install gdown
! mkdir trained_models
import gdown
url = 'https://drive.google.com/uc?id=1rmhtEl3N3kAfnQM6J0vDGSCCHlHLK6kw'
output = 'trained_models/angry_dctts.pth'
gdown.download(url, output, quiet=False)
url = 'https://drive.google.com/uc?id=1bP0eJ6z4onr2klolzU17Y8SaNspxQjF-'
output = 'trained_models/neutral_dctts.pth'
gdown.download(url, output, quiet=False)
url = 'https://drive.google.com/uc?id=1WWE9zxS3FRgD0Y5yIdNmLY9-t5gnBsNt'
output = 'trained_models/ssrn.pth'
gdown.download(url, output, quiet=False)
url = 'https://drive.google.com/uc?id=1N6Ykrd1IaPiNdos_iv0J6JbY2gBDghod'
output = 'trained_models/disgust_tacotron.pth'
gdown.download(url, output, quiet=False)
url = 'https://drive.google.com/uc?id=15m0PZ8xaBocb_6wDjAU6S4Aunbr3TKkM'
output = 'trained_models/amused_tacotron.pth'
gdown.download(url, output, quiet=False)
url = 'https://drive.google.com/uc?id=1D6HGWYWvhdvLWQt4uOYqdmuVO7ZVLWNa'
output = 'trained_models/sleepiness_tacotron.pth'
gdown.download(url, output, quiet=False)
```
## Setup the required code
- Run the cell below. It will import the dependencies, load the trained models, and define the helper functions used by the demo. In order to run the cell, click on the **arrow** that is on the left column of the cell (hover over the ```[]``` symbol). Optionally, you can also press ```Shift + Enter ```
```
%tensorflow_version 1.x
%pylab inline
rcParams["figure.figsize"] = (10,5)
import os
import sys
import numpy as np
sys.path.append('pytorch-dc-tts/')
sys.path.append('pytorch-dc-tts/models')
sys.path.append("tacotron_pytorch/")
sys.path.append("tacotron_pytorch/lib/tacotron")
# For the DC-TTS
import torch
from text2mel import Text2Mel
from ssrn import SSRN
from audio import save_to_wav, spectrogram2wav
from utils import get_last_checkpoint_file_name, load_checkpoint_test, save_to_png, load_checkpoint
from datasets.emovdb import vocab, get_test_data
# For the Tacotron
from text import text_to_sequence, symbols
# from util import audio
from tacotron_pytorch import Tacotron
from synthesis import tts as _tts
# For Audio/Display purposes
import librosa.display
import IPython
from IPython.display import Audio
from IPython.display import display
from google.colab import widgets
from google.colab import output
import warnings
warnings.filterwarnings('ignore')
torch.set_grad_enabled(False)
text2mel = Text2Mel(vocab).eval()
ssrn = SSRN().eval()
load_checkpoint('trained_models/ssrn.pth', ssrn, None)
model = Tacotron(n_vocab=len(symbols),
embedding_dim=256,
mel_dim=80,
linear_dim=1025,
r=5,
padding_idx=None,
use_memory_mask=False,
)
def visualize(alignment, spectrogram, Emotion):
label_fontsize = 16
tb = widgets.TabBar(['Alignment', 'Spectrogram'], location='top')
with tb.output_to('Alignment'):
imshow(alignment.T, aspect="auto", origin="lower", interpolation=None)
xlabel("Decoder timestamp", fontsize=label_fontsize)
ylabel("Encoder timestamp", fontsize=label_fontsize)
with tb.output_to('Spectrogram'):
if Emotion == 'Disgust' or Emotion == 'Amused' or Emotion == 'Sleepiness':
librosa.display.specshow(spectrogram.T, sr=fs,hop_length=hop_length, x_axis="time", y_axis="linear")
else:
librosa.display.specshow(spectrogram, sr=fs,hop_length=hop_length, x_axis="time", y_axis="linear")
xlabel("Time", fontsize=label_fontsize)
ylabel("Hz", fontsize=label_fontsize)
def tts_dctts(text2mel, ssrn, text):
sentences = [text]
max_N = len(text)
L = torch.from_numpy(get_test_data(sentences, max_N))
zeros = torch.from_numpy(np.zeros((1, 80, 1), np.float32))
Y = zeros
A = None
for t in range(210):
_, Y_t, A = text2mel(L, Y, monotonic_attention=True)
Y = torch.cat((zeros, Y_t), -1)
_, attention = torch.max(A[0, :, -1], 0)
attention = attention.item()
if L[0, attention] == vocab.index('E'): # EOS
break
_, Z = ssrn(Y)
Y = Y.cpu().detach().numpy()
A = A.cpu().detach().numpy()
Z = Z.cpu().detach().numpy()
return spectrogram2wav(Z[0, :, :].T), A[0, :, :], Y[0, :, :]
def tts_tacotron(model, text):
waveform, alignment, spectrogram = _tts(model, text)
return waveform, alignment, spectrogram
def present(waveform, Emotion, figures=False):
if figures!=False:
visualize(figures[0], figures[1], Emotion)
IPython.display.display(Audio(waveform, rate=fs))
fs = 20000 #20000
hop_length = 250
model.decoder.max_decoder_steps = 200
```
## Run the Demo
- Select an ```Emotion``` from the dropdown and enter the ```Text``` that you want to be generated.
- Run the cell below to synthesize the speech. In order to run the cell, click on the **arrow** that is on the left column of the cell (hover over the ```[]``` symbol). Optionally, you can also press ```Shift + Enter ```
**Play the speech with the generated audio player and view the required plots by clicking on their respective tabs!**
```
#@title Select the emotion and type the text
%pylab inline
Emotion = "Neutral" #@param ["Neutral", "Angry", "Disgust", "Sleepiness", "Amused"]
Text = 'I am exhausted.' #@param {type:"string"}
wav, align, mel = None, None, None
if Emotion == "Neutral":
load_checkpoint('trained_models/'+Emotion.lower()+'_dctts.pth', text2mel, None)
wav, align, mel = tts_dctts(text2mel, ssrn, Text)
elif Emotion == "Angry":
load_checkpoint_test('trained_models/'+Emotion.lower()+'_dctts.pth', text2mel, None)
wav, align, mel = tts_dctts(text2mel, ssrn, Text)
# wav = wav.T
elif Emotion == "Disgust" or Emotion == "Amused" or Emotion == "Sleepiness":
checkpoint = torch.load('trained_models/'+Emotion.lower()+'_tacotron.pth', map_location=torch.device('cpu'))
model.load_state_dict(checkpoint["state_dict"])
wav, align, mel = tts_tacotron(model, Text)
present(wav, Emotion, (align,mel))
```
|
github_jupyter
|
- Select ```GPU``` from the ```Hardware accelerator``` dropdown and save.
- Run the cell below. It will automatically create the required directory structure. In order to run the cell, click on the **arrow** that is on the left column of the cell (hover over the ```[]``` symbol). Optionally, you can also press ```Shift + Enter ```
## Setup the required code
- Run the cell below. It will automatically create the required directory structure. In order to run the cell, click on the **arrow** that is on the left column of the cell (hover over the ```[]``` symbol). Optionally, you can also press ```Shift + Enter ```
## Run the Demo
- Select an ```Emotion``` from the dropdown and enter the ```Text``` that you want to be generated.
- Run the cell below. It will automatically create the required directory structure. In order to run the cell, click on the **arrow** that is on the left column of the cell (hover over the ```[]``` symbol). Optionally, you can also press ```Shift + Enter ```
**Play the speech with the generated audio player and view the required plots by clicking on their respective tabs!**
| 0.809916 | 0.942242 |
## Applying transfer learning with MobileNet_V2
A high-quality, dataset of images containing fruits. The following fruits are included: Apples - (different varieties: Golden, Golden-Red, Granny Smith, Red, Red Delicious), Apricot, Avocado, Avocado ripe, Banana (Yellow, Red), Cactus fruit, Carambula, Cherry, Clementine, Cocos, Dates, Granadilla, Grape (Pink, White, White2), Grapefruit (Pink, White), Guava, Huckleberry, Kiwi, Kaki, Kumsquats, Lemon (normal, Meyer), Lime, Litchi, Mandarine, Mango, Maracuja, Nectarine, Orange, Papaya, Passion fruit, Peach, Pepino, Pear (different varieties, Abate, Monster, Williams), Pineapple, Pitahaya Red, Plum, Pomegranate, Quince, Raspberry, Salak, Strawberry, Tamarillo, Tangelo.
Training set size: 28736 images.
Validation set size: 9673 images.
Number of classes: 60 (fruits).
Image size: 100x100 pixels.
```
import numpy as np
import keras
from tensorflow.keras.layers import InputLayer, Input
from tensorflow.keras.layers import Reshape, MaxPooling2D, Cropping2D
from tensorflow.keras.layers import Conv2D, Dense, Flatten, Dropout
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import SGD, Adam
from tensorflow.python.keras.utils import to_categorical
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
from tensorflow.python.keras.preprocessing import image
from skimage import transform
import matplotlib.pyplot as plt
%matplotlib inline
from tensorflow.python.keras.applications.mobilenet_v2 import MobileNetV2
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.python.keras import backend as K
```
### Change the path of directories!!!
```
# Setting path location for validation, training and testing images
validationPath = 'D:/DataSet Storage/FRUTTA/Validation'
trainPath = 'D:/DataSet Storage/FRUTTA/Training'
```
### Plot an image, for example E:/Training/Cocos/15_100.jpg
```
from IPython.display import Image as image_show
image_show('D:/DataSet Storage/FRUTTA/Training/Cocos/15_100.jpg', width = 200, height = 200)
```
### Now you define the functions able to read mini-batches of data
```
# Making an image data generator object with augmentation for training
train_datagen = ImageDataGenerator(rescale=1./255,
rotation_range=30,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
# Making an image data generator object with no augmentation for validation
test_datagen = ImageDataGenerator(rescale=1./255)
```
### Why are train_datagen and test_datagen different? Answer . . .
The idea behind augmenting the training set with rotation, zoom, width/height shifts, flipping and similar transforms is to expose the model to as many plausible samples as it may encounter at prediction/classification time. To strengthen the model and extract the most information from the training data, we manipulate the training samples so that the augmented samples are still valid, plausible members of the training set. If we applied the same transformations to the validation and test sets, we would increase the correlation between the training and validation/test sets, which conflicts with the assumption that the samples are independent; it generally inflates the validation and test accuracy in a way that is not real. To recap, augmentation should be applied only to the seen (training) data, not to the unseen data.
```
# Using the generator with batch size 32 for training directory
train_generator = train_datagen.flow_from_directory(trainPath,
target_size=(224, 224),
batch_size=32,
class_mode='categorical')
# Using the generator with batch size 17 for validation directory
validation_generator = test_datagen.flow_from_directory(validationPath,
target_size=(224, 224),
batch_size=17,
class_mode='categorical')
```
### you can control the dimensions of the generator outputs
```
validation_generator[0][0].shape
validation_generator[0][1].shape
```
### Now you need to define your model . . .
the default definition of MobileNet_V2 is:
MobileNetV2(input_shape=None, alpha=1.0, depth_multiplier=1, include_top=True, weights='imagenet', input_tensor=None, pooling=None, classes=1000)
but you have a different number of classes . . . . .
```
mnv2 = MobileNetV2(weights=None, classes=60)
mnv2.summary()  # summarize the MobileNetV2 just created; `model` is defined below
model = Sequential()
model.add(InputLayer(input_shape = (224,224,3)))
model.add(Cropping2D(cropping=((2, 2), (2, 2))))
model.add(Conv2D(16, (5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(5, 5)))
model.add(Conv2D(32, (5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(5, 5),strides=(5,5)))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.50))
model.add(Dense(60, activation='softmax'))
```
### Define what layers you want to train . . .
```
print(model.summary())
```
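For reference, if the goal is to reuse the pretrained ImageNet features rather than train from scratch, one possible sketch is to freeze the MobileNetV2 backbone and train only a new 60-class head. The variable names and head layout here are illustrative assumptions, not part of the original exercise:
```
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import Dense, Dropout, GlobalAveragePooling2D
from tensorflow.keras.models import Model

base = MobileNetV2(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.trainable = False                          # freeze the ImageNet backbone

x = GlobalAveragePooling2D()(base.output)
x = Dropout(0.5)(x)
outputs = Dense(60, activation='softmax')(x)    # 60 fruit classes

tl_model = Model(inputs=base.input, outputs=outputs)
tl_model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['accuracy'])
```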
### Compile the model . . .
```
model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['accuracy'])
```
## to fit the model you can write an expression as:
history = model.fit_generator(train_generator,
epochs=20,validation_data=validation_generator,)
```
history = model.fit_generator(train_generator,
epochs=20,validation_data=validation_generator)
```
### Fine tuning?
### once you have obtained the final estimate of the model you must evaluate it with more details . . .
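For example, a minimal sketch of such an evaluation, assuming the `model` and `validation_generator` defined above:
```
# Overall loss/accuracy on the validation generator (sketch only).
scores = model.evaluate_generator(validation_generator)
print('validation loss: %.4f  accuracy: %.4f' % (scores[0], scores[1]))
```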
### take an image of a papaya from internet and try to apply your model . . .
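A possible sketch for classifying a downloaded image; the file name `papaya.jpg` is a placeholder you would replace with your own download:
```
import numpy as np
from tensorflow.keras.preprocessing import image

img = image.load_img('papaya.jpg', target_size=(224, 224))
x = image.img_to_array(img) / 255.0          # same rescaling as the generators
x = np.expand_dims(x, axis=0)

probs = model.predict(x)[0]
idx_to_class = {v: k for k, v in train_generator.class_indices.items()}
print('predicted class:', idx_to_class[int(np.argmax(probs))])
```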
|
github_jupyter
|
import numpy as np
import keras
from tensorflow.keras.layers import InputLayer, Input
from tensorflow.keras.layers import Reshape, MaxPooling2D, Cropping2D
from tensorflow.keras.layers import Conv2D, Dense, Flatten, Dropout
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import SGD, Adam
from tensorflow.python.keras.utils import to_categorical
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
from tensorflow.python.keras.preprocessing import image
from skimage import transform
import matplotlib.pyplot as plt
%matplotlib inline
from tensorflow.python.keras.applications.mobilenet_v2 import MobileNetV2
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.python.keras import backend as K
# Setting path location for validation, traing and testing images
validationPath = 'D:/DataSet Storage/FRUTTA/Validation'
trainPath = 'D:/DataSet Storage/FRUTTA/Training'
from IPython.display import Image as image_show
image_show('D:/DataSet Storage/FRUTTA/Training/Cocos/15_100.jpg', width = 200, height = 200)
# Making an image data generator object with augmentation for training
train_datagen = ImageDataGenerator(rescale=1./255,
rotation_range=30,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
# Making an image data generator object with no augmentation for validation
test_datagen = ImageDataGenerator(rescale=1./255)
# Using the generator with batch size 32 for training directory
train_generator = train_datagen.flow_from_directory(trainPath,
target_size=(224, 224),
batch_size=32,
class_mode='categorical')
# Using the generator with batch size 17 for validation directory
validation_generator = test_datagen.flow_from_directory(validationPath,
target_size=(224, 224),
batch_size=17,
class_mode='categorical')
validation_generator[0][0].shape
validation_generator[0][1].shape
mnv2 = MobileNetV2(weights=None, classes=60)
print(model.summary())
model = Sequential()
model.add(InputLayer(input_shape = (224,224,3)))
model.add(Cropping2D(cropping=((2, 2), (2, 2))))
model.add(Conv2D(16, (5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(5, 5)))
model.add(Conv2D(32, (5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(5, 5),strides=(5,5)))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.50))
model.add(Dense(60, activation='softmax'))
print(model.summary())
model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit_generator(train_generator,
epochs=20,validation_data=validation_generator)
| 0.850624 | 0.878783 |
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/zerotosingularity/tensorflow-20-experiments/blob/master/convert_to_coreml.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/zerotosingularity/tensorflow-20-experiments/blob/master/convert_to_coreml.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a target="_blank" href="https://github.com/apple/coremltools/issues/323"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Track bug status</a>
</td>
</table>
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
# Set to True if you want to use tf.keras instead keras.io
use_tf_keras = False
!pip search tf-nightly-gpu-2.0-preview
!pip install tf-nightly-gpu-2.0-preview
import tensorflow as tf
print(tf.__version__)
!cat /usr/local/cuda/version.txt
!python --version
!pip install coremltools
from __future__ import print_function
import os
import json
from tensorflow.keras.preprocessing import image
import tensorflow as tf
import tensorflow.keras as keras
def get_imagenet_class_labels():
"""
Get the imagenet class index
:return:
Thanks Nick Gaens for this helper method.
"""
# Link to imagenet class labels provided by Keras
class_label_path = 'https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json'
    # get_file caches the download, so it is safe to call unconditionally
    location = tf.keras.utils.get_file('imagenet_class_index.json', origin=class_label_path,
                                       extract=False)
    with open(location) as json_data:
d = json.load(json_data)
# Create a list of the class labels
class_labels = []
for ii in range(len(d.keys())):
class_labels.append(d[str(ii)][1].encode('ascii', 'ignore'))
return class_labels
if use_tf_keras:
print("using tf.keras")
else:
print("using Keras.io")
!pip install -U keras
if use_tf_keras:
print("using tf.keras")
    from tensorflow import keras  # "import tf.keras" is not a valid module path; use TensorFlow's bundled Keras
else:
print("using keras")
import keras
print(keras.__version__)
dir(keras.applications)
model = keras.applications.MobileNetV2(weights='imagenet')
class_labels = get_imagenet_class_labels()
import coremltools
coreml_model = coremltools.converters.keras.convert(model,
input_names='data',
image_input_names='data',
class_labels=class_labels)
coreml_model.save('mobilenetv2.mlmodel')
```
|
github_jupyter
|
%reload_ext autoreload
%autoreload 2
%matplotlib inline
# Set to True if you want to use tf.keras instead keras.io
use_tf_keras = False
!pip search tf-nightly-gpu-2.0-preview
!pip install tf-nightly-gpu-2.0-preview
import tensorflow as tf
print(tf.__version__)
!cat /usr/local/cuda/version.txt
!python --version
!pip install coremltools
from __future__ import print_function
import os
import json
from tensorflow.keras.preprocessing import image
import tensorflow as tf
import tensorflow.keras as keras
def get_imagenet_class_labels():
"""
Get the imagenet class index
:return:
Thanks Nick Gaens for this helper method.
"""
# Link to imagenet class labels provided by Keras
class_label_path = 'https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json'
if not os.path.isfile("imagenet_class_index.json"):
location = tf.keras.utils.get_file('imagenet_class_index.json', origin=class_label_path,
extract=False)
with open(location) as json_data:
d = json.load(json_data)
# Create a list of the class labels
class_labels = []
for ii in range(len(d.keys())):
class_labels.append(d[str(ii)][1].encode('ascii', 'ignore'))
return class_labels
if use_tf_keras:
print("using tf.keras")
else:
print("using Keras.io")
!pip install -U keras
if use_tf_keras:
print("using tf.keras")
import tf.keras
else:
print("using keras")
import keras
print(keras.__version__)
dir(keras.applications)
model = keras.applications.MobileNetV2(weights='imagenet')
class_labels = get_imagenet_class_labels()
import coremltools
coreml_model = coremltools.converters.keras.convert(model,
input_names='data',
image_input_names='data',
class_labels=class_labels)
coreml_model.save('mobilenetv2.mlmodel')
| 0.529993 | 0.900311 |
# Analysis Part I
Find 95% lower (one-sided) and two-sided confidence intervals for the reduction in risk corresponding to the primary endpoint (data “Through day 29”), using method 3 and also using the cruder conservative approach via simultaneous Bonferroni confidence bounds for N⋅1 and N1⋅ described in the notes on causal inference. (For the Bonferroni approach to two-sided intervals, use Sterne’s method for the underlying hypergeometric confidence intervals. Feel free to re-use your own code from the previous problem set.)
```
from utils import hypergeom_conf_interval
from math import comb
from scipy.stats import binom, hypergeom
from cabin.cabin import *
n, m = 753, 752
N = n+m
n01 = 59
n11 = 11
n00 = m - n01
n10 = n - n11
alpha = 0.05
print(n11, n10, n01, n00)
```
## Method 3
```
result = tau_twoside(11, 742, 59, 684, 0.05, 100)
tau_lower = result['tau_lower']
tau_upper = result['tau_upper']
```
It takes a long time to run, so here we only show the results:
tau_upper = -0.0314
tau_lower = -0.0975
## Bonferroni approach
#### Two-sided:
```
# two sided
N_1 = hypergeom_conf_interval(n, n11, N, 1-alpha/2, alternative='two-sided', method= "sterne")
N1_ = hypergeom_conf_interval(m, n01, N, 1-alpha/2, alternative='two-sided', method= "sterne")
tao_lower = (N_1[0] - N1_[1])/N
tao_upper = (N_1[1] - N1_[0])/N
print("tau_upper", round(tao_upper,4))
print("tau_lower", round(tao_lower,4))
```
#### Lower one-sided
```
# lower one sided
N_1 = hypergeom_conf_interval(n, n11*N/n, N, 1-alpha/2, alternative='lower')
N1_ = hypergeom_conf_interval(m, n01*N/m, N, 1-alpha/2, alternative='upper')
tao_lower = (N_1[0] - N1_[1])/N
tao_upper = (N_1[1] - N1_[0])/N
print("tau_upper", round(tao_upper,4))
print("tau_lower", round(tao_lower,4))
```
#### Upper one-sided
```
# upper one sided
N_1 = hypergeom_conf_interval(n, n11*N/n, N, 1-alpha/2, alternative='upper')
N1_ = hypergeom_conf_interval(m, n01*N/m, N, 1-alpha/2, alternative='lower')
tao_lower = (N_1[0] - N1_[1])/N
tao_upper = (N_1[1] - N1_[0])/N
print("tau_upper", round(tao_upper,4))
print("tau_lower", round(tao_lower,4))
```
## Discuss the differences between the two sets of confidence intervals.
The confidence interval for the reduction in risk using Method 3 is $\tau_{method3} = [-0.0975, -0.0314]$, and the confidence interval using the Sterne-based Bonferroni approach is $\tau_{sterne} = [-0.0877, -0.0399]$. Sterne's interval is narrower than the Method 3 interval. In Sterne's method the two tails may be of different sizes, which can produce a tighter confidence interval than Method 3.
## Is it statistically legitimate to use one-sided confidence intervals? Why or why not?
Yes. In this case, if the vaccine is effective, there should be fewer infected cases in the treatment group than in the placebo group, so we only need to check whether $\tau$ is smaller than 0, and an upper one-sided confidence interval is enough for that.
## Are the 2-sided confidence intervals preferable to the one-sided intervals? Why or why not?
Yes, the 2-sided confidence interval is preferable to the one-sided interval, because we can check how far the lower bound is from -1 to judge the effectiveness of the vaccine. If it is close to -1, the vaccine is effective and there are hardly any cases after taking the vaccine treatment.
|
github_jupyter
|
from utils import hypergeom_conf_interval
from math import comb
from scipy.stats import binom, hypergeom
from cabin.cabin import *
n, m = 753, 752
N = n+m
n01 = 59
n11 = 11
n00 = m - n01
n10 = n - n11
alpha = 0.05
print(n11, n10, n01, n00)
result = tau_twoside(11, 742, 59, 684, 0.05, 100)
tau_lower = result['tau_lower']
tau_upper = result['tau_lower']
# two sided
N_1 = hypergeom_conf_interval(n, n11, N, 1-alpha/2, alternative='two-sided', method= "sterne")
N1_ = hypergeom_conf_interval(m, n01, N, 1-alpha/2, alternative='two-sided', method= "sterne")
tao_lower = (N_1[0] - N1_[1])/N
tao_upper = (N_1[1] - N1_[0])/N
print("tau_upper", round(tao_upper,4))
print("tau_lower", round(tao_lower,4))
# lower one sided
N_1 = hypergeom_conf_interval(n, n11*N/n, N, 1-alpha/2, alternative='lower')
N1_ = hypergeom_conf_interval(m, n01*N/m, N, 1-alpha/2, alternative='upper')
tao_lower = (N_1[0] - N1_[1])/N
tao_upper = (N_1[1] - N1_[0])/N
print("tau_upper", round(tao_upper,4))
print("tau_lower", round(tao_lower,4))
# lower one sided
N_1 = hypergeom_conf_interval(n, n11*N/n, N, 1-alpha/2, alternative='upper')
N1_ = hypergeom_conf_interval(m, n01*N/m, N, 1-alpha/2, alternative='lower')
tao_lower = (N_1[0] - N1_[1])/N
tao_upper = (N_1[1] - N1_[0])/N
print("tau_upper", round(tao_upper,4))
print("tau_lower", round(tao_lower,4))
| 0.352536 | 0.980129 |
# Initialization
```
#@markdown - **Mount**
from google.colab import drive
drive.mount('GoogleDrive')
# #@markdown - **Unmount**
# !fusermount -u GoogleDrive
```
# Code section
```
#@title Gaussian Mixture Model { display-mode: "both" }
# This program uses the EM algorithm to estimate the parameters of a Gaussian mixture model
# and clusters random data with the fitted mixture model
# coding: utf-8
import numpy as np
import numpy.matlib
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
#@markdown - **Bundle the data**
class Bunch(dict):
def __init__(self,*args,**kwds):
super(Bunch,self).__init__(*args,**kwds)
self.__dict__ = self
#@markdown - **Gaussian mixture model class**
class GaussianMM:
def __init__(self):
self.mu = None
self.sigma = None
self.alpha = None
self.f_dim = None
self.num_mixed = None
    # Initialization
def init_fn(self, f_dim=3, num_mixed=4):
self.f_dim = f_dim
self.num_mixed = num_mixed
self.mu = np.random.randn(num_mixed, f_dim) + 10
self.sigma = np.zeros((num_mixed, f_dim, f_dim))
for i in range(num_mixed):
self.sigma[i, :, :] = np.diag(np.random.randint(10, 25, size=(3, )))
self.alpha = [1. / num_mixed] * int(num_mixed)
return 'Initialization completed !'
# e-step
def e_step(self, X):
N, _ = X.shape
expec = np.zeros((N, self.num_mixed))
for i in range(N):
denom = 0
# numer = 0
F_list = []
S_list = []
for j in range(self.num_mixed):
sig_inv = np.linalg.inv(self.sigma[j, :, :])
expo_1 = np.matmul(-(X[i, :] - self.mu[j, :]), sig_inv)
expo_2 = np.matmul(expo_1, ((X[i, :] - self.mu[j, :])).reshape(-1, 1))
first_half = self.alpha[j] * np.exp(expo_2)
# first_half = alpha_[j] * np.exp(-(X[i, :] - mu[j, :]) * sig_inv * ((X[i, :] - mu[j, :])).reshape(-1, 1))
sec_half = np.sqrt(np.linalg.det(np.mat(self.sigma[j, :, :])))
F_list.append(first_half[0])
S_list.append(sec_half)
                denom += first_half[0] / sec_half  # denominator
            for j in range(self.num_mixed):
                numer = F_list[j] / S_list[j]  # numerator
                expec[i, j] = numer / denom  # responsibility (expected membership)
return expec
# m-step
def m_step(self, X, expec):
N, c = X.shape
lemda = 1e-15
for j in range(self.num_mixed):
            denom = 0  # denominator
            numer = 0  # numerator
sig = 0
for i in range(N):
numer += expec[i, j] * X[i, :]
denom += expec[i, j]
            self.mu[j, :] = numer / denom  # update the mean
for i in range(N):
x_tran = (X[i, :] - self.mu[j, :]).reshape(-1, 1)
x_nor = (X[i, :] - self.mu[j, :]).reshape(1, -1)
sig += expec[i, j] * np.matmul(x_tran, x_nor)
            self.alpha[j] = denom / N  # update the mixture coefficient
self.sigma[j, :, :] = sig / denom + np.diag(np.array([lemda] * c))
return self.mu, self.sigma, self.alpha
# 训练
def fit(self, X, err_mu=5, err_alpha=0.01, max_iter=100):
iter_num = 0
while True:
if iter_num == max_iter: break
iter_num += 1
mu_prev = self.mu.copy()
# print(mu_prev)
alpha_prev = self.alpha.copy()
# print(alpha_prev)
expec = self.e_step(X)
self.mu, self.sigma, self.alpha = self.m_step(X, expec)
print(u"迭代次数:", iter_num)
print(u"估计的均值:\n", self.mu)
print(u"估计的混合项系数:", self.alpha, '\n')
err = abs(mu_prev - self.mu).sum() #计算误差
err_a = abs(np.array(alpha_prev) - np.array(self.alpha)).sum()
if (err < err_mu) and (err_a < err_alpha): #达到精度退出迭代
print(u"\n最终误差:", [err, err_a])
break
print('训练已完成 !')
    # Predict which Gaussian component each sample belongs to
def predict(self, X):
expec = self.e_step(X)
return np.argmax(expec, axis=1)
#@markdown - **Random-data generation function**
def generate_random(sigma, N, mu1=[15., 25., 10], mu2=[30., 40., 30], mu3=[25., 10., 20], mu4=[40., 30., 40]):
c = sigma.shape[-1]
X = np.zeros((N, c))
target = np.zeros((N,1))
for i in range(N):
if np.random.random(1) < 0.25:
            X[i, :] = np.random.multivariate_normal(mu1, sigma[0, :, :], 1)  # 3-D data from the first Gaussian
            target[i] = 0
        elif 0.25 <= np.random.random(1) < 0.5:
            X[i, :] = np.random.multivariate_normal(mu2, sigma[1, :, :], 1)  # 3-D data from the second Gaussian
            target[i] = 1
        elif 0.5 <= np.random.random(1) < 0.75:
            X[i, :] = np.random.multivariate_normal(mu3, sigma[2, :, :], 1)  # 3-D data from the third Gaussian
            target[i] = 2
        else:
            X[i, :] = np.random.multivariate_normal(mu4, sigma[3, :, :], 1)  # 3-D data from the fourth Gaussian
target[i] = 3
return X, target
#@markdown - **Generate labeled random data**
k, N = 4, 400
# Initialize the covariances and generate the samples and labels
sigma = np.zeros((k, 3, 3))
for i in range(k):
sigma[i, :, :] = np.diag(np.random.randint(10, 25, size=(3, )))
sample, target = generate_random(sigma, N)
feature_names = ['x_label', 'y_label', 'z_label']  # feature names
target_names = ['gaussian1', 'gaussian2', 'gaussian3', 'gaussian4']  # class names
data = Bunch(sample=sample, feature_names=feature_names, target=target, target_names=target_names)
#@markdown - **Train iteratively until the convergence criterion is met**
# Initialize the model parameters
model = GaussianMM()
err_mu = 1e-4 #@param {type: "number"}
err_alpha = 1e-4 #@param {type: "number"}
# ------------- 2 components ----------------
model.init_fn(f_dim=3, num_mixed=2)
# print('mu:\n', model.mu)
# print('sigma:\n', model.sigma)
# print('alpha:\n', model.alpha)
# Train until the convergence criterion is met
model.fit(data.sample, err_mu=err_mu, err_alpha=err_alpha, max_iter=100)
# Predict which component each sample belongs to
tar2 = model.predict(data.sample)
# ------------- 3 components ----------------
model.init_fn(f_dim=3, num_mixed=3)
model.fit(data.sample, err_mu=err_mu, err_alpha=err_alpha, max_iter=100)
tar3 = model.predict(data.sample)
# ------------- 4 components ----------------
model.init_fn(f_dim=3, num_mixed=4)
model.fit(data.sample, err_mu=err_mu, err_alpha=err_alpha, max_iter=100)
tar4 = model.predict(data.sample)
#@markdown - **Plot the distribution of the training data and the clustering results**
#@markdown - **Training data and the 2-component clustering**
titles = ['Random training data', 'Clustered data by 2-GMM']
DATA = [data.sample, data.sample]
color=['b','r','g','y']
fig = plt.figure(1, figsize=(16, 8))
fig.subplots_adjust(wspace=.01, hspace=.02)
for i, title, data_n in zip([1, 2], titles, DATA):
ax = fig.add_subplot(1, 2, i, projection='3d')
if title == 'Random training data':
ax.scatter(data_n[:,0], data_n[:,1], data_n[:,2], c='b', s=35, alpha=0.4, marker='o')
else:
for j in range(N):
ax.scatter(data_n[j, 0], data_n[j, 1], data_n[j, 2], c=color[tar2[j]], s=35, alpha=0.4, marker='P')
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
ax.view_init(elev=20., azim=-25)
ax.set_title(title, fontsize=14)
#@markdown - **3-component and 4-component clusterings**
titles = ['Clustered data by 3-GMM', 'Clustered data by 4-GMM']
TAR = [tar3, tar4]
fig = plt.figure(2, figsize=(16, 8))
fig.subplots_adjust(wspace=.01, hspace=.02)
for i, title, data_n, tar in zip([1, 2], titles, DATA, TAR):
ax = fig.add_subplot(1, 2, i, projection='3d')
for j in range(N):
ax.scatter(data_n[j, 0], data_n[j, 1], data_n[j, 2], c=color[tar[j]], s=35, alpha=0.4, marker='P')
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
ax.view_init(elev=20., azim=-25)
ax.set_title(title, fontsize=14)
plt.show()
```
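As an optional cross-check (not part of the original notebook), the same data could also be clustered with scikit-learn's `GaussianMixture`, which implements the same EM procedure:
```
# Optional sanity check of the hand-written EM against scikit-learn.
from sklearn.mixture import GaussianMixture

gm = GaussianMixture(n_components=4, covariance_type='full', random_state=0)
sk_labels = gm.fit_predict(data.sample)
print('scikit-learn means:\n', gm.means_)
print('scikit-learn weights:', gm.weights_)
```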
|
github_jupyter
|
```
import numpy as np
import scipy.linalg as la
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from imageio import imread
import sys
%matplotlib notebook
def binary_to_eig(image):
rot_image = np.rot90(image,k=-1)
idx = np.argwhere(rot_image)
return idx[:,0] + 1j*idx[:,1]
def grayscale_to_eig(image):
rot_image = np.rot90(image,k=-1)
rows,cols = np.unravel_index(np.argsort(rot_image,axis=None),shape=rot_image.shape)
    colors = np.sort(rot_image.flatten())  # fix: `A` was undefined here; sort the rotated image's pixel values
return rows + 1j*cols, colors
def grayscale_to_coords(image):
rot_image = np.rot90(image,k=-1)
rows,cols = np.unravel_index(np.argsort(rot_image,axis=None),shape=rot_image.shape)
colors = np.sort(image.flatten())
return rows+.5,cols+.5,colors
def animate_pixels(img1,img2,filename):
rows1,cols1,colors1 = grayscale_to_coords(img1)
rows2,cols2,colors2 = grayscale_to_coords(img2)
aspect_ratio = img1.shape[0]/img1.shape[1]
plt.ioff()
fig = plt.figure(figsize=(6.4,aspect_ratio*6.4))
ax = fig.add_subplot(111)
ax.set_aspect("equal")
plt.axis("off")
plt.xlim((0,img1.shape[1]))
plt.ylim((0,img1.shape[0]))
pixels = img1.shape[1]
pixels_per_inch = pixels/6.4
size = 72/pixels_per_inch
points = ax.scatter(rows1,cols1,c=colors1,cmap="gray",marker='s',s=size**2,vmin=0,vmax=1)
n=300
buffer = 30
colors = np.linspace(colors1,colors2,n)
rows = np.linspace(rows1,rows2,n)
cols = np.linspace(cols1,cols2,n)
pos = np.dstack((rows,cols))
def update(j):
if j >= buffer and j < buffer+n:
i = j-buffer
points.set_offsets(pos[i])
points.set_array(colors[i])
elif j >= 3*buffer+n and j < 3*buffer+2*n:
i = n-(j-(3*buffer+n))-1
points.set_offsets(pos[i])
points.set_array(colors[i])
# if j >= buffer and j < 3*buffer+2*n:
# i = j-buffer
# points.set_offsets(np.array([(1-t[i])*rows1+t[i]*rows2,(1-t[i])*cols1+t[i]*cols2]).T)
# points.set_array(colors[i])
ani = animation.FuncAnimation(fig,update,frames=2*n+4*buffer,interval=30)
ani.save(filename)
plt.close(fig)
plt.ion()
img1 = np.array(imread("camera2.png",as_gray=True))/256
img2 = np.array(imread("lena.png",as_gray=True))/256
plt.imshow(img1,cmap="gray")
plt.show()
plt.imshow(img2,cmap="gray")
plt.show()
animate_pixels(img1,img2,"mixing2.mp4")
x = [1,2,3,4]
y = [1,2,3,4]
scatter = plt.scatter(x,y)
plt.show()
plt.scatter(x,y,c=(colors/255))
plt.show()
colors = np.array([(50,50,50),(200,100,0),(0,100,200),(200,0,100)])
X = np.array([[50,20],[100,100],[0,100]])
X
print(X[:,0].argsort())
print(X[:,1].argsort())
from PIL import Image
im = Image.open("camera2.png")
im.resize((128,256))
im?
im.size
```
$A = w \times \alpha w$
$A= \alpha w^2$
$x^2 = \frac{A}{\alpha w^2}$
```
string = "file.mp4"
string[-4:]
```
Why Spark? Because ... Pyspark
==========

Hadoop was the first open source system that introduced us to the
MapReduce paradigm of programming, and Spark is the system that made it
faster, much, much faster (up to 100x).
There used to be a lot of data movement in Hadoop because it wrote
intermediate results to the file system.
This limited the speed at which you could do analysis.
Spark gave us an in-memory model, so Spark doesn't write too
much to the disk while it works.
Simply put, Spark is faster than Hadoop, and a lot of people use Spark now.
***So without further ado let us get started.***
Load Some Data
==============
The next step is to upload some data we will use to learn Spark. We will end up using multiple datasets by the end of this post, but let us start with something very simple.
Let us add the file `shakespeare.txt`.
You can see that the file is loaded to the `shakespeare/shakespeare.txt` location.
```
# To download the data you would use the following commands:
#!wget -P shakespeare https://ocw.mit.edu/ans7870/6/6.006/s08/lecturenotes/files/t8.shakespeare.txt
#!mv shakespeare/t8.shakespeare.txt shakespeare/shakespeare.txt
!ls -l shakespeare
```
Our First Spark Program
=======================
I like to learn by examples so let’s get done with the “Hello World” of
Distributed computing: ***The WordCount Program.***
First, we need to create a `SparkSession`:
```
from pyspark.sql import SparkSession
spark = SparkSession\
.builder\
.appName("shakespeare")\
.master("spark://spark-master:7077")\
.config("hive.metastore.uris", "thrift://hive-metastore:9083")\
.config("spark.sql.warehouse.dir", "hdfs://namenode:8020/user/hive/warehouse")\
.config("spark.executor.memory", "1g")\
.config("spark.jars.packages", "org.apache.spark:spark-avro_2.12:3.2.0")\
.enableHiveSupport()\
.getOrCreate()
sc = spark.sparkContext
sc.setLogLevel("ERROR")
from pyspark.sql.functions import to_json,col
from pyspark.sql.types import *
from os.path import abspath
# Distribute the data - Create a RDD
lines = sc.textFile("shakespeare/shakespeare.txt")
x = 'This is the 100th Etext file presented by Project Gutenberg, and'
x.split(' ')
```
Now we can write our program:
```
# Distribute the data - Create a RDD
lines = sc.textFile("shakespeare/shakespeare.txt")
# Create a list with all words, Create tuple (word,1), reduce by key i.e. the word
counts = (lines.flatMap(lambda x: x.split(' '))
.map(lambda x: (x, 1))
.reduceByKey(lambda x,y : x + y))
# get the output on local
output = counts.take(10)
# print output
for (word, count) in output:
if word.strip() != "":
print(f"'{word}' occurs {count} times")
# print(counts.toDebugString().decode('utf-8'))
```
So that is a small example which counts the number of words in the
document and prints 10 of them.
And most of the work gets done in the second command.
Don’t worry if you are not able to follow this yet as I still need to
tell you about the things that make Spark work.
But before we get into Spark basics, let us refresh some of our Python
basics. Understanding Spark becomes a lot easier if you have used
[functional programming with Python](https://amzn.to/2SuAtzL).
For those of you who haven’t used it, below is a brief intro.
A functional approach to programming in Python
==============================================

Map
------
`map` is used to map a function to an array or a
list. Say you want to apply some function to every element in a list.
You can do this by simply using a for loop, but Python's `map` and lambda functions
let you do this in a single line.
```
my_list = [1,2,3,4,5,6,7,8,9,10]
# Lets say I want to square each term in my_list.
squared_list = map(lambda x:x**2, my_list)
print(list(squared_list))
```
In the above example, you could think of `map`
as a function which takes two arguments: a function and a list.
It then applies the function to every element of the list.
What lambda allows you to do is write an inline function. Here, the
part `lambda x:x**2` defines a function that takes x as input and returns x².
You could have also provided a proper function in place of lambda. For
example:
```
def squared(x):
return x**2
my_list = [1,2,3,4,5,6,7,8,9,10]
# Lets say I want to square each term in my_list.
squared_list = map(squared, my_list)
print(list(squared_list))
```
The same result, but the lambda expressions make the code compact and a
lot more readable.
Filter
---------
The other function that is used extensively is the `filter` function. This function takes two arguments: a condition and the list to filter.
If you want to filter your list using some condition you use
`filter`.
```
my_list = [1,2,3,4,5,6,7,8,9,10]
# Lets say I want only the even numbers in my list.
filtered_list = filter(lambda x:x%2==0,my_list)
print(list(filtered_list))
```
Reduce
---------
The next function I want to talk about is the reduce function. This
function will be the workhorse in Spark.
This function takes two arguments — a function to reduce that takes two
arguments, and a list over which the reduce function is to be applied.
```
import functools
my_list = [1,2,3,4,5]
# Lets say I want to sum all elements in my list.
sum_list = functools.reduce(lambda x,y:x+y,my_list)
print(sum_list)
import functools
my_list = [1,2,3,4]
# Lets say I want to sum all elements in my list.
sum_list = functools.reduce(lambda x,y:x*y,my_list)
print(sum_list)
```
In Python 2, `reduce` was a built-in function; in Python 3 we have to import
`reduce` from `functools`.
Here the lambda function takes in two values x, y and returns their sum.
Intuitively you can think that the reduce function works as:
```
Reduce function first sends 1,2 ; the lambda function returns 3
Reduce function then sends 3,3 ; the lambda function returns 6
Reduce function then sends 6,4 ; the lambda function returns 10
Reduce function finally sends 10,5 ; the lambda function returns 15
```
A condition on the lambda function we use in reduce is that it must be:
- commutative that is a + b = b + a and
- associative that is (a + b) + c == a + (b + c).
In the above case, we used sum which is **commutative as well as
associative**. Other functions that we could have used: `max`, `min`, `*`, etc.
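To make the commutative/associative point concrete, here is a small sketch of my own (it only uses `functools.reduce` and the built-ins mentioned above):
```
import functools
import operator

my_list = [1, 2, 3, 4, 5]
# max is commutative and associative, so it is safe to use with reduce
print(functools.reduce(max, my_list))           # 5
# so is multiplication (operator.mul is the function form of *)
print(functools.reduce(operator.mul, my_list))  # 120
```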
Moving Again to Spark
=====================
As we now have the fundamentals of Python functional programming out
of the way, let us head back to Spark.
But first, let us delve a little bit into how Spark works. Spark
actually consists of two things: a driver and workers.
Workers normally do all the work and the driver makes them do that work.
RDD
---
An RDD(Resilient Distributed Dataset) is a parallelized data structure
that gets distributed across the worker nodes. They are the basic units
of Spark programming.
In our wordcount example, in the first line
```py
lines = sc.textFile("shakespeare/shakespeare.txt")
```
we took a text file and distributed it across worker nodes so that they
can work on it in parallel. We could also parallelize lists using the
function `sc.parallelize`
For example:
```
data = [1,2,3,4,5,6,7,8,9,10]
new_rdd = sc.parallelize(data,4)
new_rdd.collect()
```
In Spark, we can do two different types of operations on RDD:
Transformations and Actions.
1. **Transformations:** Create new datasets from existing RDDs
2. **Actions:** Mechanism to get results out of Spark (see the short sketch below)
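A quick illustration of the difference (my addition; it assumes the `sc` SparkContext created earlier). Transformations are lazy: they only describe a computation, and nothing actually runs until an action is called:
```
nums = sc.parallelize(range(10))
doubled = nums.map(lambda x: x * 2)   # transformation: builds a new RDD, nothing is computed yet
print(doubled)                        # prints an RDD description, not the data
print(doubled.collect())              # action: triggers the computation and returns the values
```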
Transformation Basics
=====================

So let us say you have got your data in the form of an RDD.
To recap, your data is now accessible to the worker machines. You want
to do some transformations on the data now.
You may want to filter, apply some function, etc.
In Spark, this is done using Transformation functions.
Spark provides many transformation functions. You can see a
comprehensive list
[**here**](http://spark.apache.org/docs/latest/rdd-programming-guide.html#transformations).
Some of the main ones that I use frequently are:
Map:
-------
Applies a given function to an RDD.
Note that the syntax is a little bit different from Python, but it
essentially does the same thing. Don't worry about `collect` yet. For now, just think of it as a function that collects the data in squared\_rdd back to a list.
```
data = [1,2,3,4,5,6,7,8,9,10]
rdd = sc.parallelize(data,4)
squared_rdd = rdd.map(lambda x:x**2)
result_list = squared_rdd.collect()
print(result_list)
```
Filter:
----------
Again no surprises here. Takes as input a condition and keeps only those
elements that fulfill that condition.
```
data = [1,2,3,4,5,6,7,8,9,10]
rdd = sc.parallelize(data,4)
filtered_rdd = rdd.filter(lambda x:x%2!=0)
filtered_rdd.collect()
```
distinct:
------------
Returns only distinct elements in an RDD.
```
data = [1,2,2,2,2,3,3,3,3,4,5,6,7,7,7,8,8,8,9,10]
rdd = sc.parallelize(data,4)
distinct_rdd = rdd.distinct()
sorted(distinct_rdd.collect())
```
flatmap:
-----------
Similar to `map`, but each input item can be
mapped to 0 or more output items.
```
data = [1,2,3,4]
rdd = sc.parallelize(data,4)
flat_rdd = rdd.flatMap(lambda x:[x,x**3])
flat_rdd.collect()
```
Reduce By Key:
-----------------
The parallel to the reduce in Hadoop MapReduce.
Spark would not provide much value if it only worked with plain lists.
In Spark, there is a concept of pair RDDs that makes it a lot more
flexible. Let's assume we have data in which we have a product, its
category, and its selling price. We can still parallelize the data.
```
data = [('Apple','Fruit',200),('Banana','Fruit',24),('Tomato','Fruit',56),('Potato','Vegetable',103),('Carrot','Vegetable',34)]
rdd = sc.parallelize(data,4)
rdd.collect()
```
Right now our RDD `rdd` holds tuples.
Now we want to find out the total sum of revenue that we got from each
category.
To do that we have to transform our `rdd` to a
pair rdd so that it only contains key-value pairs/tuples.
```
category_price_rdd = rdd.map(lambda x: (x[1],x[2]))
category_price_rdd.collect()
```
Here we used the map function to get it in the format we wanted. When
working with a text file, the RDD that gets formed contains a lot of
strings. We use `map` to convert it into the format that we want.
So now our `category_price_rdd` contains the
product category and the price at which the product sold.
Now we want to reduce on the key category and sum the prices. We can do
this by:
```
category_total_price_rdd = category_price_rdd.reduceByKey(lambda x,y:x+y)
category_total_price_rdd.collect()
```
Group By Key:
----------------
Similar to `reduceByKey`, but it does not reduce; it
just puts all the elements into an iterator. For example, if we wanted to
keep the category as the key and all the products as the value, we would use
this function.
Let us again use `map` to get data in the
required form.
```
data = [('Apple','Fruit',200),('Banana','Fruit',24),('Tomato','Fruit',56),('Potato','Vegetable',103),('Carrot','Vegetable',34)]
rdd = sc.parallelize(data,4)
category_product_rdd = rdd.map(lambda x: (x[1],x[0]))
category_product_rdd.collect()
```
We then use `groupByKey` as:
```
grouped_products_by_category_rdd = category_product_rdd.groupByKey()
findata = grouped_products_by_category_rdd.collect()
for data in findata:
print(data[0],list(data[1]))
```
Here the `groupByKey` function worked and it
returned the category and the list of products in that category.
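As a small follow-up sketch (my addition, assuming the `grouped_products_by_category_rdd` built above), `mapValues` is a convenient way to turn the grouped iterators into plain lists or counts:
```
# materialize the grouped iterators as lists
print(grouped_products_by_category_rdd.mapValues(list).collect())
# or just count how many products fall into each category
print(grouped_products_by_category_rdd.mapValues(lambda products: len(list(products))).collect())
```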
Action Basics
=============

You have filtered your data, mapped some functions on it. Done your
computation.
Now you want to get the data on your local machine or save it to a file
or show the results in the form of some graphs in Excel or any
visualization tool.
You will need actions for that. A comprehensive list of actions is
provided
[**here**](http://spark.apache.org/docs/latest/rdd-programming-guide.html#actions)**.**
Some of the most common actions that I tend to use are:
collect:
-----------
We have already used this action many times. It takes the whole RDD and
brings it back to the driver program.
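For completeness, a minimal sketch of `collect` on its own (my addition):
```
rdd = sc.parallelize([1, 2, 3, 4, 5])
print(rdd.collect())   # brings the whole RDD back to the driver as a Python list
```
Be careful with `collect` on large RDDs, since everything has to fit in the driver's memory.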
reduce:
----------
Aggregate the elements of the dataset using a function func (which takes
two arguments and returns one). The function should be commutative and
associative so that it can be computed correctly in parallel.
```
rdd = sc.parallelize([1,2,3,4,5])
rdd.reduce(lambda x,y : x+y)
```
take:
--------
Sometimes you will need to see what your RDD contains without getting
all the elements in memory itself. `take`
returns a list with the first n elements of the RDD.
```
rdd = sc.parallelize([1,2,3,4,5])
rdd.take(3)
```
takeOrdered:
---------------
`takeOrdered` returns the first n elements of
the RDD using either their natural order or a custom comparator.
```
rdd = sc.parallelize([5,3,12,23])
# descending order
rdd.takeOrdered(3,lambda s:-1*s)
rdd = sc.parallelize([(5,23),(3,34),(12,344),(23,29)])
# descending order
rdd.takeOrdered(3,lambda s:-1*s[1])
```
We finally have our basics covered. Let us get back to our wordcount
example.
Understanding The WordCount Example
===================================

Now we sort of understand the transformations and the actions provided
to us by Spark.
It should not be difficult to understand the wordcount program now. Let
us go through the program line by line.
The first line creates an RDD and distributes it to the workers.
```
lines = sc.textFile("shakespeare/shakespeare.txt")
```
This RDD `lines` contains a list of sentences in
the file. You can see the rdd content using `take`
```
lines.take(5)
```
This RDD is of the form:
```py
['word1 word2 word3','word4 word3 word2']
```
This next line is actually the workhorse function in the whole script.
```
counts = (lines.flatMap(lambda x: x.split(' '))
.map(lambda x: (x, 1))
.reduceByKey(lambda x,y : x + y))
```
It contains a series of transformations that we do to the lines RDD.
First of all, we do a `flatmap` transformation.
The `flatmap` transformation takes as input the
lines and gives words as output. So after the `flatmap` transformation, the RDD is of the form:
```py
['word1','word2','word3','word4','word3','word2']
```
Next, we do a `map` transformation on the
`flatmap` output which converts the RDD to :
```py
[('word1',1),('word2',1),('word3',1),('word4',1),('word3',1),('word2',1)]
```
Finally, we do a `reduceByKey` transformation
which counts the number of times each word appeared.
After this, the RDD reaches its final, desired form.
```py
[('word1',1),('word2',2),('word3',2),('word4',1)]
```
This next line is an action that takes the first 10 elements of the
resulting RDD locally.
```
output = counts.take(10)
```
This line just prints the output
```
for (word, count) in output:
print("%s: %i" % (word, count))
```
And that is it for the wordcount program. Hope you understand it now.
So till now, we talked about the Wordcount example and the basic
transformations and actions that you could use in Spark. But we don’t do
wordcount in real life.
We have to work on bigger problems which are much more complex. Worry
not! Whatever we have learned till now will let us do that and more.
Spark in Action with Example
============================

Let us work with a concrete example which takes care of some usual
transformations.
We will work on Movielens
[ml-100k.zip](https://github.com/rudrasingh21/Data-ML-100k-/raw/master/ml-100k.zip)
dataset which is a stable benchmark dataset. 100,000 ratings from 1000
users on 1700 movies. Released 4/1998.
Let us start by downloading the data.
```
# To download the data you would use the following commands:
!rm -rf ml-100k
!wget -P /tmp https://github.com/rudrasingh21/Data-ML-100k-/raw/master/ml-100k.zip
!unzip /tmp/ml-100k.zip -d .
!ls -l ml-100k
```
The Movielens dataset contains a lot of files but we are going to be
working with 3 files only:
1) **Users**: This file name is kept as `u.user`. The columns in this
file are:
```py
['user_id', 'age', 'sex', 'occupation', 'zip_code']
```
2) **Ratings**: This file name is kept as `u.data`. The columns in this
file are:
```py
['user_id', 'movie_id', 'rating', 'unix_timestamp']
```
3) **Movies**: This file name is kept as `u.item`. The columns in this
file are:
```py
['movie_id', 'title', 'release_date', 'video_release_date', 'imdb_url', and 18 more columns.....]
```
Our business partner now comes to us and asks us to find out the ***25
most rated movie titles*** from this data. That is, how many times has
each movie been rated?
Let us load the data in different RDDs and see what the data contains.
```
userRDD = sc.textFile("ml-100k/u.user")
ratingRDD = sc.textFile("ml-100k/u.data")
movieRDD = sc.textFile("ml-100k/u.item")
print("userRDD:",userRDD.take(1))
print("ratingRDD:",ratingRDD.take(1))
print("movieRDD:",movieRDD.take(1))
```
We note that to answer this question we will need to use the
`ratingRDD`. But the `ratingRDD` does not have the movie name.
So we would have to merge `movieRDD` and `ratingRDD` using `movie_id`.
**How would we do that in Spark?**
Below is the code. We also use a new transformation `leftOuterJoin`. Do read the docs and comments in the below code.
```
# Create a RDD from RatingRDD that only contains the two columns of interest i.e. movie_id,rating.
RDD_movid_rating = ratingRDD.map(lambda x : (x.split("\t")[1],x.split("\t")[2]))
print("RDD_movid_rating:",RDD_movid_rating.take(4))
# Create a RDD from MovieRDD that only contains the two columns of interest i.e. movie_id,title.
RDD_movid_title = movieRDD.map(lambda x : (x.split("|")[0],x.split("|")[1]))
print("RDD_movid_title:",RDD_movid_title.take(2))
# merge these two pair RDDs based on movie_id. For this we will use the transformation leftOuterJoin(). See the transformation document.
rdd_movid_title_rating = RDD_movid_rating.leftOuterJoin(RDD_movid_title)
print("rdd_movid_title_rating:",rdd_movid_title_rating.take(1))
# use the RDD in previous step to create (movie,1) tuple pair RDD
rdd_title_rating = rdd_movid_title_rating.map(lambda x: (x[1][1],1 ))
print("rdd_title_rating:",rdd_title_rating.take(2))
# Use the reduceByKey transformation to reduce on the basis of movie_title
rdd_title_ratingcnt = rdd_title_rating.reduceByKey(lambda x,y: x+y)
print("rdd_title_ratingcnt:",rdd_title_ratingcnt.take(2))
# Get the final answer by using takeOrdered Transformation
print("#####################################")
print("25 most rated movies:",rdd_title_ratingcnt.takeOrdered(25,lambda x:-x[1]))
print("#####################################")
```
Star Wars is the most rated movie in the Movielens Dataset.
Now we could have done all of this in a single command, shown below,
but the code gets a little messy.
```
print(((ratingRDD.map(lambda x : (x.split("\t")[1],x.split("\t")[2]))).
leftOuterJoin(movieRDD.map(lambda x : (x.split("|")[0],x.split("|")[1])))).
map(lambda x: (x[1][1],1)).
reduceByKey(lambda x,y: x+y).
takeOrdered(25,lambda x:-x[1]))
```
I did this to show that you can chain functions in Spark and
bypass the process of variable creation.
Let us do one more, for practice:
Now we want to find the most highly rated 25 movies using the same
dataset. We actually want only those movies which have been rated at
least 100 times.
```
# We already have the RDD rdd_movid_title_rating: [(u'429', (u'5', u'Day the Earth Stood Still, The (1951)'))]
# We create an RDD that contains sum of all the ratings for a particular movie
rdd_title_ratingsum = (rdd_movid_title_rating.
map(lambda x: (x[1][1],int(x[1][0]))).
reduceByKey(lambda x,y:x+y))
print("rdd_title_ratingsum:",rdd_title_ratingsum.take(2))
# Merge this data with the RDD rdd_title_ratingcnt we created in the last step
# And use Map function to divide ratingsum by rating count.
rdd_title_ratingmean_rating_count = (rdd_title_ratingsum.
leftOuterJoin(rdd_title_ratingcnt).
map(lambda x:(x[0],(float(x[1][0])/x[1][1],x[1][1]))))
print("rdd_title_ratingmean_rating_count:",rdd_title_ratingmean_rating_count.take(1))
# We could use take ordered here only but we want to only get the movies which have count
# of ratings more than or equal to 100 so lets filter the data RDD.
rdd_title_rating_rating_count_gt_100 = (rdd_title_ratingmean_rating_count.
filter(lambda x: x[1][1]>=100))
print("rdd_title_rating_rating_count_gt_100:",rdd_title_rating_rating_count_gt_100.take(1))
# Get the final answer by using takeOrdered Transformation
print("#####################################")
print ("25 highly rated movies:")
print(rdd_title_rating_rating_count_gt_100.takeOrdered(25,lambda x:-x[1][0]))
print("#####################################")
```
We have talked about RDDs till now as they are very powerful.
You can use RDDs to work with non-relational databases too.
They let you do a lot of things that you couldn't do with SparkSQL.
***And yes, you can use SQL with Spark too, which is what I am going to talk
about now.***
Spark DataFrames
================

Spark has provided DataFrame API to work with relational data. Here is the
[documentation](https://docs.databricks.com/spark/latest/dataframes-datasets/introduction-to-dataframes-python.html#) for the adventurous folks.
Remember that in the background it still is all RDDs and that is why the
starting part of this post focussed on RDDs.
I will start with some common functionalities you will need to work with
Spark DataFrames. It will look a lot like Pandas, with some syntax changes.
Reading the File
-------------------
```
ratings = spark.read.load("ml-100k/u.data",format="csv", sep="\t", inferSchema="true", header="false")
```
Show File
------------
Here is how we can show files using Spark Dataframes.
```
ratings.show(5)
ratings.printSchema()
```
Change Column names
----------------------
Good functionality. Always required. Don’t forget the `*` in front of the list.
```
ratings = ratings.toDF(*['user_id', 'movie_id', 'rating', 'unix_timestamp'])
ratings.show(5)
```
Some Basic Stats
-------------------
```
print(ratings.count()) #Row Count
print(len(ratings.columns)) #Column Count
```
We can also see the dataframe statistics using:
```
ratings.describe().show()
```
Select a few columns
-----------------------
```
ratings.select('user_id','movie_id').show(5)
```
Filter
---------
Filter a dataframe using multiple conditions:
```
ratings.filter((ratings.rating==5) & (ratings.user_id==253)).show(5)
```
Groupby
----------
We can use the groupby function with a Spark dataframe too. It is pretty much the same
as a pandas groupby, with the exception that you will need to import
`pyspark.sql.functions`.
```
from pyspark.sql import functions as F
ratings.groupBy("user_id").agg(F.count("user_id"),F.mean("rating")).show(5)
```
Here we have found the count of ratings and average rating from each
user_id
Sort
-------
```
ratings.sort("user_id").show(5)
```
We can also do a descending sort using `F.desc`
function as below.
```
# descending Sort
from pyspark.sql import functions as F
ratings.sort(F.desc("user_id")).show(5)
```
Joins/Merging with Spark Dataframes
===================================
We can use SQL with dataframes and thus we can merge dataframes using SQL.
Let us try to run some SQL on Ratings.
We first register the ratings df to a temporary table ratings\_table on
which we can run sql operations.
As you can see the result of the SQL select statement is again a Spark
Dataframe.
```
ratings.registerTempTable('ratings_table')
newDF = spark.sql('select * from ratings_table where rating > 4')
newDF.show(5)
```
Let us now add one more Spark Dataframe to the mix to see if we can use
join using the SQL queries:
```
# get one more dataframe to join
movies = spark.read.load("ml-100k/u.item",format="csv", sep="|", inferSchema="true", header="false")
# change column names
movies = movies.toDF(*["movie_id","movie_title","release_date","video_release_date","IMDb_URL","unknown","Action","Adventure","Animation ","Children","Comedy","Crime","Documentary","Drama","Fantasy","Film_Noir","Horror","Musical","Mystery","Romance","Sci_Fi","Thriller","War","Western"])
# display
movies.show(5)
```
Now let us try joining the tables on movie\_id to get the name of the
movie in the ratings table.
```
movies.registerTempTable('movies_table')
spark.sql("""
select
ratings_table.*,
movies_table.movie_title
from ratings_table
left join movies_table
on movies_table.movie_id = ratings_table.movie_id
""").show(5)
```
Let us try to do what we were doing earlier with the RDDs. Finding the
top 25 most rated movies:
```
mostrateddf = spark.sql("""
select
movie_id,
movie_title,
count(user_id) as num_ratings
from (
select
ratings_table.*,
movies_table.movie_title
from ratings_table
left join movies_table
on movies_table.movie_id = ratings_table.movie_id
)A
group by movie_id, movie_title
order by num_ratings desc
""")
mostrateddf.show(5)
```
And finding the top 25 highest rated movies having more than 100 votes:
```
highrateddf = spark.sql("""
select
movie_id,
movie_title,
avg(rating) as avg_rating,
count(movie_id) as num_ratings
from (
select
ratings_table.*,
movies_table.movie_title
from ratings_table
left join movies_table
on movies_table.movie_id = ratings_table.movie_id
)A
group by movie_id, movie_title
having num_ratings>100
order by avg_rating desc
""")
highrateddf.show(5, False)
```
Converting from Spark Dataframe to RDD and vice versa:
======================================================
Sometimes you may want to convert a Spark Dataframe to an RDD, or vice
versa, so that you can have the best of both worlds.
To convert from a DF to an RDD, you can simply do:
```
highratedrdd = highrateddf.rdd
highratedrdd.take(2)
```
To go from an RDD to a dataframe:
```
from pyspark.sql import Row
# creating a RDD first
data = [('A',1),('B',2),('C',3),('D',4)]
rdd = sc.parallelize(data)
# map the schema using Row.
rdd_rows = rdd.map(lambda x: Row(name=x[0], age=int(x[1])))
# Convert the rdd to Dataframe
df = spark.createDataFrame(rdd_rows)
df.show(5)
df.registerTempTable('people')
spark.sql('select * from people where age > 3').show()
```
RDDs provide you with ***more control*** at the cost of time and coding
effort, while Dataframes provide you with a ***familiar coding***
platform. And now you can move back and forth between the two.
Conclusion
==========

This was long and congratulations if you reached the end.
[Spark](https://spark.apache.org/) has provided us with an interface where
we could use transformations and actions on our data. Spark also has the
Dataframe API to ease the transition to Big Data.
Hopefully, I’ve covered the basics well enough to pique your interest
and help you get started with Spark.
```
!conda activate ctg
import os
import sys
import numpy as np
import pandas as pd
from sklearn.model_selection import StratifiedKFold, train_test_split, StratifiedShuffleSplit
import matplotlib.pyplot as plt
from typing import Tuple, List, Optional, Dict
import seaborn as sns
import warnings
from tqdm.auto import tqdm
from collections import defaultdict, Counter
warnings.simplefilter('ignore')
%matplotlib inline
import sys
sys.path.append('..')
from src.helpers.helpers import argar1_labels, argar5_labels, pH_labels
DATA_DIR = '../../data/database/database/signals'
META_FILE = '../meta.csv'
RESULTS_DIR = '../output/pics'
df_train = pd.read_csv(META_FILE)
patients = df_train['patient'].values
df_train = df_train.drop(['Unnamed: 0'], axis = 1)
# normal
norm = df_train['patient'][df_train['pH'] >= 7.15].values
inter = df_train['patient'][(df_train['pH'] < 7.15) & (df_train['pH'] >= 7.05)].values
pathol = df_train['patient'][df_train['pH'] < 7.05].values
df = df_train.copy()
df.head()
```
## See Targets
```
#histogram
plt.figure(figsize=(8,5))
df['Apgar5'].hist(bins=30)
#histogram
plt.figure(figsize=(8,5))
df['Apgar1'].hist(bins=30)
#histogram
plt.figure(figsize=(8,5))
df['pH'].hist(bins=100)
apgar5 = df[df['Apgar5'] <= 5]
len(apgar5)
apgar1 = df[df['Apgar1'] < 5]
len(apgar1)
apgar1 = df[df['Apgar1'] == 9]
len(apgar1)
len(df[df['pH'] < 7.0])
len(df[df['pH'] >= 7.2])
len(df[(df_train['pH'] < 7.1)&(df_train['pH'] >= 7.0)])
len(df[(df_train['pH'] < 7.2)&(df_train['pH'] >= 7.15)])
len(df[(df_train['pH'] < 7.15)&(df_train['pH'] >= 7.1)])
def pH_labels(df_train: pd.DataFrame, target_col: str = 'target') -> pd.DataFrame:
"""
    Generate a label for each signal based on pH values:
    pH >= 7.2         - label = 0 (normal)
    7.15 <= pH < 7.2  - label = 1
    7.1  <= pH < 7.15 - label = 2
    7.0  <= pH < 7.1  - label = 3
    pH < 7.0          - label = 4 (pathological)
    Args:
        df_train: (pd.DataFrame) patients meta data
    Output: patient df with labels
    We only consider the first stage of labour
"""
# create target column
df_train[target_col] = -1
df_train[target_col][df_train['pH'] >= 7.2] = 0 # 375
df_train[target_col][(df_train['pH'] < 7.2)&(df_train['pH'] >= 7.15)] = 1 # 72
df_train[target_col][(df_train['pH'] < 7.15)&(df_train['pH'] >= 7.1)] = 2 # 49
df_train[target_col][(df_train['pH'] < 7.1)&(df_train['pH'] >= 7.0)] = 3 # 36
df_train[target_col][df_train['pH'] < 7.0] = 4 # 20
return df_train
def argar1_labels(df_train: pd.DataFrame, target_col: str = 'target') -> pd.DataFrame:
"""
    Generate a label for each signal based on Apgar1 values:
    Apgar1 > 8       - label = 0 (normal)
    Apgar1 == 7 or 8 - label = 1
    Apgar1 == 5 or 6 - label = 2
    Apgar1 < 5       - label = 3 (pathological)
    Args:
        df_train: (pd.DataFrame) patients meta data
    Output: patient df with labels
    We only consider the first stage of labour
"""
# create target column
df_train[target_col] = -1
df_train[target_col][(df_train['Apgar1'] > 8)] = 0
df_train[target_col][(df_train['Apgar1'] == 7)|(df_train['Apgar1'] == 8)] = 1
df_train[target_col][(df_train['Apgar1'] == 5)|(df_train['Apgar1'] == 6)] = 2
df_train[target_col][df_train['Apgar1'] < 5] = 3 # 27
return df_train
def argar5_labels(df_train: pd.DataFrame, target_col: str = 'target') -> pd.DataFrame:
"""
    Generate a label for each signal based on Apgar5 values:
    Apgar5 > 8       - label = 0 (normal)
    Apgar5 == 8      - label = 1
    Apgar5 == 6 or 7 - label = 2
    Apgar5 <= 5      - label = 3 (pathological)
    Args:
        df_train: (pd.DataFrame) patients meta data
    Output: patient df with labels
    We only consider the first stage of labour
"""
# create target column
df_train[target_col] = -1
df_train[target_col][(df_train['Apgar5'] > 8)] = 0
df_train[target_col][(df_train['Apgar5'] == 8)] = 1
df_train[target_col][(df_train['Apgar5'] == 6)|(df_train['Apgar5'] == 7)] = 2
df_train[target_col][(df_train['Apgar5'] <= 5)] = 3
return df_train
df = pH_labels(df, 'target')
#histogram
plt.figure(figsize=(8,5))
df['target'].hist(bins=100)
df = argar1_labels(df, 'target_apgar1')
#histogram
plt.figure(figsize=(8,5))
df['target_apgar1'].hist(bins=100)
df = argar5_labels(df, 'target_apgar5')
#histogram
plt.figure(figsize=(8,5))
df['target_apgar5'].hist(bins=100)
```
## Multilabel stratification, pH and Apgar
http://scikit.ml/stratification.html
https://github.com/trent-b/iterative-stratification
```
#!pip install scikit-multilearn
from skmultilearn.cluster import LabelCooccurrenceGraphBuilder
#!pip install iterative-stratification
#MultilabelStratifiedKFold
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold
def multilable_train_test_split(df: pd.DataFrame, X: np.array, y: np.array, if_save: bool) -> pd.DataFrame:
"""
Create folds using iterative stratification of multi label data
Source: https://github.com/trent-b/iterative-stratification
Args:
df : train meta dataframe
X : X Series to split
y : y list of tuples to use for stratification
        if_save : boolean flag whether to save the folds
    Output:
        df: train meta with split folds
"""
df["test_fold"] = -2 # set all folds to -1 initially
mskf = MultilabelStratifiedKFold(n_splits=10, random_state=1234)
# split folds
for fold, (train_index, test_index) in enumerate(mskf.split(X, y)):
df.loc[test_index, "test_fold"] = fold
# save dataframe with folds (optionally)
if if_save:
df.to_csv(os.path.join(DATA_DIR, f"test_folds.csv"), index=False)
return df
def train_test_split(df: pd.DataFrame, X: np.array, y: np.array, if_save: bool) -> pd.DataFrame:
"""
Create folds for the test
Args:
df : train meta dataframe
X : X Series to split
y : y Series to use for stratification
        if_save : boolean flag whether to save the folds
    Output:
        df: train meta with split folds
"""
df["test_fold"] = -2 # set all folds to -1 initially
#skf = StratifiedShuffleSplit(n_splits=10, test_size=0.1, train_size=0.9, random_state=1234)
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=12)
# split folds
for fold, (train_index, test_index) in enumerate(skf.split(X, y)):
df.loc[test_index, "test_fold"] = fold
# save dataframe with folds (optionally)
if if_save:
df.to_csv(os.path.join("../test_folds_strat.csv"), index=False)
return df
SAVE_PATH = '../../data/preprocessed/npy450_3_ph/'
dropped_patients = np.load(SAVE_PATH+'dropped_patients.npy')
dropped_patients = np.unique(dropped_patients)
dropped_patients, len(dropped_patients)
saved_patients = np.load(SAVE_PATH+'saved_patients.npy')
saved_patients = np.unique(saved_patients)
len(saved_patients)
df_s = df[(df["patient"].isin(saved_patients))&(df["Deliv. type"]==1)]
df_s["Deliv. type"].describe()
df_s.to_csv(os.path.join(DATA_DIR, "df_filtered.csv"), index=False)
df= pd.read_csv(os.path.join(DATA_DIR, "df_filtered.csv"))
patients = df['patient'].values
len(patients)
target = df['target'].values
len(target)
labels = df[['target', 'Apgar1']].values
labels
df = multilable_train_test_split(df=df, X=patients, y=labels, if_save=True)
graph_builder = LabelCooccurrenceGraphBuilder(weighted=True, include_self_edges=False)
label_names = ['target', 'Apgar1', 'Deliv. type']
edge_map = graph_builder.transform(labels)
print("{} labels, {} edges".format(len(label_names), len(edge_map)))
print(edge_map)
def test_folds(folds_df: pd.DataFrame, column_name: str = 'test_fold') -> None:
# sanity checks
for fold in range(4):
train_fold = folds_df[folds_df[column_name] != fold]
valid_fold = folds_df[folds_df[column_name] == fold]
cls_counts = Counter(cls for classes in train_fold[['target', 'Apgar1']].values
for cls in classes)
print('train_fold counts :', cls_counts)
cls_counts = Counter(cls for classes in valid_fold[['target', 'Apgar1']].values
for cls in classes)
print('valid_fold counts :', cls_counts)
test_folds(df)
df.head()
def create_folds(df: pd.DataFrame, X: np.array, y: np.array, nb_folds: int, column_name: str = 'fold', if_save: bool = False) -> pd.DataFrame:
"""
Create folds
Args:
df : train meta dataframe
X : X Series to split
y : y Series to use for stratification
nb_folds : number of folds
        if_save : boolean flag whether to save the folds
        column_name : name of the fold column to create
    Output:
        df: train meta with split folds
"""
df[column_name] = -1 # set all folds to -1 initially
skf = StratifiedKFold(n_splits=nb_folds, shuffle=True, random_state=12)
# split folds
for fold, (train_index, test_index) in enumerate(skf.split(X, y)):
df.loc[test_index, column_name] = fold
# save dataframe with folds (optionally)
if if_save:
df.to_csv(os.path.join(DATA_DIR, "test_folds.csv"), index=False)
return df
df = create_folds(df=df, X=patients, y=target, nb_folds=10, column_name='test_fold_ph', if_save=True)
df.head()
test_folds(df, "test_fold_ph")
```
## Train/validation split for some test fold
```
test_fold = 0
df_tv = df[df["test_fold_ph"] != test_fold]
df_tv.to_csv(os.path.join(DATA_DIR, "dftrain_test0.csv"), index=False)
df0= pd.read_csv(os.path.join(DATA_DIR, "dftrain_test0.csv"))
df0["test_fold_ph"].describe()
df0.head()
patients = df0['patient'].values
targets = df0['target'].values
len(patients), len(targets)
df0['fold'] = -1 # set all folds to -1 initially
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=1234)
# split folds
for fold, (train_ind, test_ind) in enumerate(skf.split(patients, targets)):
df0.loc[test_ind, 'fold'] = fold
df0.head()
test_folds(df0, "fold")
df0.to_csv(os.path.join(DATA_DIR, "df_test0_folds.csv"), index=False)
```
## Data normalization
### What is a norm?
A norm is a function that satisfies certain properties regarding scaling and additivity and assigns a strictly positive real number to every vector in a given vector space. The special case is the zero vector, which is assigned the value $0$.
In the simplest terms, a **norm** is the length of a vector in a given space.
In a one-dimensional space $||x||_1=||x||_2=\cdots=||x||_p=|x|$
Examples of norms for a vector $x \in \mathbb{R}^3, x = (x_1,x_2,x_3)$ (a quick numerical check follows the list):
- $||x||_1 = |x_1|+|x_2|+|x_3|$
- $||x||_2 = (|x_1|^2+|x_2|^2+|x_3|^2)^{\frac{1}{2}} = \sqrt{x_1^2+x_2^2+x_3^2}$
- ...
- $||x||_p = (|x_1|^{p}+|x_2|^{p}+|x_3|^{p})^{\frac{1}{p}}$
- $||x||_\infty = \max(|x_1|,|x_2|,|x_3|)$
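A quick numerical check of these definitions (a sketch using NumPy, whose `np.linalg.norm` implements the p-norms directly):
```
import numpy as np

x = np.array([1., -2., 2.])
print(np.linalg.norm(x, 1))       # L1 norm: |1| + |-2| + |2| = 5.0
print(np.linalg.norm(x, 2))       # L2 norm: sqrt(1 + 4 + 4) = 3.0
print(np.linalg.norm(x, np.inf))  # max norm: max(|x_i|) = 2.0
```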
### What is vector normalization?
```
import numpy as np
import matplotlib.pyplot as plt
V = np.array([[1,1],[-2,2],[4,-3]])
origin = [0], [0] # origin point
plt.quiver(*origin, V[:,0], V[:,1], scale=10)
plt.show()
from sklearn import preprocessing
V_normed = preprocessing.normalize(V, norm = 'l1', axis = 1)
plt.quiver(*origin, V_normed[:,0], V_normed[:,1], scale=10)
plt.show()
from sklearn import preprocessing
V_normed = preprocessing.normalize(V, norm = 'l2',axis=1)
plt.quiver(*origin, V_normed[:,0], V_normed[:,1], scale=10)
plt.show()
from sklearn import preprocessing
V_normed = preprocessing.normalize(V, norm = 'max',axis = 1)
plt.quiver(*origin, V_normed[:,0], V_normed[:,1], scale=10)
plt.show()
```
### An example with tabular data
```
import numpy as np
x = np.array([[2,2,3,10],[2,2,3,20],[2,2,3,30]])
x
l1_norm = preprocessing.normalize(x, norm = 'l1',axis = 1)
l1_norm
l1_norm.sum(axis=1)
l2_norm = preprocessing.normalize(x, norm = 'l2',axis = 1)
l2_norm
l2_norm.sum(axis=1)
max_norm = preprocessing.normalize(x, norm = 'max',axis = 1)
max_norm
max_norm.sum(axis=1)
```
- Let's test normalization per column (see the sketch below)
- Let's also try scikit-learn's convenient estimator API (see the sketch below)
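A sketch of both points, reusing the array `x` and the `preprocessing` import from the cells above:
```
from sklearn.preprocessing import Normalizer

# normalization per column instead of per row: set axis=0
print(preprocessing.normalize(x, norm='l2', axis=0))

# estimator-style API: Normalizer always normalizes row-wise and fits into pipelines
print(Normalizer(norm='l2').fit_transform(x))
```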
### A quick addendum - the simplest normalizer - MinMaxScaler
$x_{scaled} = \frac{x-min(x)}{max(x)-min(x)}$
This mechanism works better in cases where StandardScaler (covered in the previous class) may not perform as well. If the data distribution is not normal or the standard deviation is very small, MinMaxScaler is the better choice.
```
x = np.array([[-1, 2], [-0.5, 6], [0, 10], [1, 18]])
x
x_std = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))
x_std
```
- Let's try scikit-learn's convenient API (see the sketch below)
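A sketch using scikit-learn's `MinMaxScaler` on the same `x` as in the cell above:
```
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()                  # default feature_range=(0, 1)
print(scaler.fit_transform(x))           # should match the manual x_std above
print(scaler.data_min_, scaler.data_max_)
```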
### Why do we perform normalization at all?
### How does our vector change and what values can it take?
### How does normalization differ from standardization?
As a reminder, standardization of a variable $x$: $z = \frac{x - mean(x)}{std(x)}$. A short side-by-side comparison follows below.
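A minimal sketch contrasting the two, assuming the same tabular array used earlier in this notebook:
```
import numpy as np
from sklearn.preprocessing import StandardScaler, normalize

x = np.array([[2., 2., 3., 10.], [2., 2., 3., 20.], [2., 2., 3., 30.]])

# standardization: per column (feature), zero mean / unit variance; constant columns map to 0
print(StandardScaler().fit_transform(x))

# L2 normalization: per row (sample), each row rescaled to unit length
print(normalize(x, norm='l2', axis=1))
```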
## Classification
Classification is a type of statistical algorithm that assigns observations to classes based on the attributes of those observations.
**Definition:**
Given a training set $\{(x_1,y_1),\ldots,(x_n,y_n)\}$, the algorithm finds a classification function $h: X \rightarrow Y$ that assigns a class $y \in Y$ to an object $x\in X$.
- posterior probability: $P(Y=i|X)$
- the classification function takes the form: $h(X) = \mathrm{argmax}_{i} \, P(Y=i|X)$
Examples of classification:
- detecting whether a patient has a given disease based on test results
- classifying e-mails as spam / not spam
- deciding whether a transaction on a bank customer's account is fraud/theft or a normal transaction
- recognizing different kinds of animals in images
- predicting whether a passenger survives the Titanic disaster
To keep the explanations simple for the rest of the labs, we will focus only on binary classification!
We will work with a dataset in which we classify whether a patient has heart disease or not.
```
import pandas as pd
np.random.seed(42)
data = pd.read_csv('heart.csv')
data.head()
# quick check of the basic characteristics of the data (share of missing values per column)
na_ratio_cols = data.isna().mean(axis=0)
na_ratio_cols
y = np.array(data['chd'])
X = data.drop(['chd'],axis=1)
y
X.head()
```
#### Quick exercise - apply any encoding of your choice to the categorical variable
```
map_dict = {'Present': 1, 'Absent':0}
X['famhist'] = X['famhist'].map(map_dict)
X.head()
```
### What is the simplest classifier you know?
```
from sklearn.dummy import DummyClassifier
dc = DummyClassifier(strategy='uniform', random_state=42)
dc.fit(X,y)
y_proba = dc.predict_proba(X)
y_hat = dc.predict(X)
print("proba: " + str(y_proba[0:10,0]) + '\ny: ' + str(y_hat[0:10]) + '\ny_hat: ' + str(y[0:10]))
```
- Any other simple classifiers that come to mind?
## Logistic regression - why not just predict the probability with linear regression?
**Reminder:** the linear model: $y_{i}=\beta_{0}+\beta_{1}x_{i1}+\cdots +\beta_{p}x_{ip} = x^T \beta$
- What is the basic problem with using linear regression to model a probability?
- What solutions to this problem would you propose?
$odds = \frac{P(Y=1|X)}{P(Y=0|X)} = \frac{p}{1-p}$, where $p \in (0,1)$ and $odds \in (0,\infty)$
$\log({odds}) \in (-\infty, \infty)$
The log-odds can therefore be modelled with linear regression; rearranging the equation gives the probability of success (a numerical check of this identity follows after the next cell):
$x^T \beta = \log({\frac{p}{1-p}}) \Rightarrow p = \frac{1}{1+\exp({-x^T \beta})}$
```
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(max_iter=1000)
lr.fit(X,y)
y_hat = lr.predict(X)
print('y: ' + str(y_hat[0:10]) + '\ny_hat: ' + str(y[0:10]))
```
- What are the advantages of logistic regression?
## Decision tree
- How can a tree model be used for classification/regression predictions?
- What problems can this cause?
```
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier()
tree.fit(X,y)
y_hat = tree.predict(X)
print('y: ' + str(y_hat[0:10]) + '\ny_hat: ' + str(y[0:10]))
```
## SVM
The idea is to find the equation of the hyperplane that best separates our dataset into classes.
- What if no such hyperplane exists?
- What if our data are not linearly separable but, for example, radially separable? (see the sketch below)
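A sketch illustrating the second question on synthetic, radially separated data (it assumes scikit-learn's `make_circles`):
```
from sklearn.datasets import make_circles
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X_c, y_c = make_circles(n_samples=500, factor=0.3, noise=0.05, random_state=42)

# a linear kernel cannot separate concentric circles, an RBF kernel can
print(cross_val_score(SVC(kernel='linear'), X_c, y_c, cv=5).mean())
print(cross_val_score(SVC(kernel='rbf'), X_c, y_c, cv=5).mean())
```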
```
from sklearn.svm import SVC
svm = SVC()
svm.fit(X,y)
y_hat = svm.predict(X)
print('y: ' + str(y_hat[0:10]) + '\ny_hat: ' + str(y[0:10]))
```
- What advantages/disadvantages of this algorithm do you see?
## Naive Bayes classifier
It relies on the assumption that the features are mutually independent. This assumption often has little to do with reality, which is exactly why the classifier is called naive.
```
from sklearn.naive_bayes import GaussianNB
nb = GaussianNB()
nb.fit(X,y)
y_hat = nb.predict(X)
print('y: ' + str(y_hat[0:10]) + '\ny_hat: ' + str(y[0:10]))
```
- What advantages/disadvantages of this algorithm do you see?
## Ways of splitting the data
- How can we deal with overfitting?
- What ways of splitting the data into training and test sets do you know?
### Training and test set
A simple split of the data into a part on which we train the model and a part used to check its performance.
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print(X.shape,X_train.shape,X_test.shape)
```
**Quick exercise:** Split the data as above and fit a logistic regression on the training set (one possible solution sketch follows below).
- What drawbacks of the train/test split approach do you see?
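One possible solution to the quick exercise (a sketch; it reuses the `X_train`/`X_test` split created in the cell above):
```
from sklearn.linear_model import LogisticRegression

lr_split = LogisticRegression(max_iter=1000)
lr_split.fit(X_train, y_train)            # train only on the training part
print(lr_split.score(X_test, y_test))     # accuracy on the held-out test set
```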
### Cross-validation
- Can we apply CV by splitting the data so that only a single observation is left in the validation set? (see the sketch after the next cell)
- If we assess the model's performance with CV, can we then train the final model on the whole dataset?
- When tuning model parameters, should we hold out an additional test set and run CV only on the training part?
```
from sklearn.model_selection import cross_val_score
cross_val_score(lr, X, y, scoring='accuracy', cv = 10)
```
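Regarding the first question above: leave-one-out CV (exactly one observation in each validation set) is available directly in scikit-learn (a sketch; note it refits the model once per observation, so it can be slow):
```
from sklearn.model_selection import LeaveOneOut, cross_val_score

loo_scores = cross_val_score(lr, X, y, scoring='accuracy', cv=LeaveOneOut())
print(loo_scores.mean())   # average accuracy over all leave-one-out splits
```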
## Classifier evaluation metrics
- What classifier evaluation metrics do you know?
For the exercises below, let's first generate some predictions:
```
lr.fit(X_train,y_train)
y_hat = lr.predict(X_test)
print("y_test: "+ str(y_test) + "\n\ny_hat: " + str(y_hat))
```
### Accuracy
$ACC = \frac{TP+TN}{ALL}$
A very intuitive metric - the fraction of observations we classified correctly.
- What is the problem with accuracy?
```
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_hat)
```
### Precision & Recall
**Precision** tells us how precise the model is within the positive class - how many of the predicted positives are actually positive.
$PREC = \frac{TP}{TP+FP}= \frac{TP}{\text{TOTAL PREDICTED POSITIVE}}$
- What use cases for such a metric can you think of?
**Recall** tells us how many of the actual positives the model manages to capture.
$RECALL = \frac{TP}{TP+FN} = \frac{TP}{\text{TOTAL ACTUAL POSITIVE}}$
- What use cases for such a metric can you think of?
```
from sklearn.metrics import precision_score
precision_score(y_test, y_hat)
from sklearn.metrics import recall_score
recall_score(y_test, y_hat)
```
### F1 Score
A way to balance PRECISION and RECALL:
$F1 = 2\frac{PREC \cdot RECALL}{PREC + RECALL}$
```
from sklearn.metrics import f1_score
f1_score(y_test, y_hat)
```
### ROC AUC
The Receiver Operating Characteristic (ROC) curve is a plot that illustrates the performance of a binary classifier independently of the discrimination threshold. The Y axis shows the TPR, i.e. RECALL; the X axis shows the FPR, i.e. $1 - SPECIFICITY$.
$FPR = 1 - SPECIFICITY = 1 - \frac{TN}{TN+FP}$
SPECIFICITY - example: the proportion of healthy people who are correctly identified as not having the disease.
```
y_hat_proba = lr.predict_proba(X_test)[:,1]
from sklearn.metrics import roc_curve, auc
fpr = dict()
tpr = dict()
roc_auc = dict()
fpr, tpr, _ = roc_curve(y_test, y_hat_proba)
roc_auc = auc(fpr, tpr)
plt.figure(figsize=(10, 6))
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
```
- What advantage of this metric over the previous ones do you see?
```
from sklearn.metrics import roc_auc_score
roc_auc_score(y_test,y_hat_proba)
```
**Exercise** - test 3 of the models presented in today's class and decide which one is best according to the metrics described above.
# Running the thermodynamic analysis using the functions of the pipeline
Each model resulting from the first analysis, which makes use of GapFilling to find new functionalities, has a different combination of reactions allowing the substrate to be used for growth and/or be converted into the target compound. Hence, the list of reactions involved in the transformations changes among the variants, and the pipeline uses pathway length and thermodynamic driving force as two parameters for the evaluation of the strain designs. In this pipeline the MDF value [1] of a pathway is used to determine its thermodynamic feasibility. The aim of this module (Figure 1, step iii) is to first identify which reactions are involved in the pathway and then calculate the pathway's Max-min Driving Force (MDF) value. The thermodynamic analysis done by the pipeline makes use of the Python package [equilibrator-pathway](https://gitlab.com/equilibrator/equilibrator-pathway) for MDF calculation. This package is still under development, and the module of the pipeline relying on it is the one that still needs further improvements.
The functions for thermodynamic analysis are called within the analysis module by the function *cons_prod_dict*. In the following tutorial some of the functions for the thermodynamic analysis are used individually on one of the combinations for the strains of *E. coli* growing on methane.
The logic of the analysis is the following:
1. Add the reactions needed for growth on methane
2. Check that the model grows on methane
3. Check that it produces the target (i.e. itacon)
4. Use the pipeline functions for the rest of the analysis
```
import cobra
from cobra.manipulation import modify
import csv
from pipeline_package import input_parser, import_models, analysis, path_definition_mdf
data_repo = "../inputs"
model = import_models.get_reference_model(data_repo, '../inputs/ecoli_tutorial.csv')
universal = import_models.get_universal_main(data_repo, '../inputs/ecoli_tutorial.csv')
input_parser.parser('../inputs/ecoli_tutorial.csv', universal, model)
model.reactions.EX_ch4_e
```
## 1. Add reactions needed for growth on methane
```
mmo = universal.reactions.R01143
model.add_reaction(mmo)
model.reactions.R01143
alcd1 = universal.reactions.ALCD1
model.add_reaction(alcd1)
model.reactions.ALCD1
```
## 2. Check if it indeed grows on methane
```
print(model.objective.expression)
from cobra.flux_analysis import phenotype_phase_plane
from cobra.flux_analysis import production_envelope
metex = model.reactions.EX_ch4_e
metex.bounds
metex.lower_bound = -1000
glc = model.reactions.EX_glc__D_e
glc.lower_bound = 0
biomass = analysis.get_biomass_equation(model)
biomass.upper_bound = 0.877
biomass.lower_bound = 0.04385
fba_check = model.optimize()
print(fba_check.objective_value, '\n', fba_check.fluxes['EX_ch4_e'])
for i in model.reactions:
if i.flux <= -10 and 'EX_' in i.id:
print(i.id, i.reaction, i.flux)
```
It does consume as much methane as possible!
## 3. Does it produce the target?
```
fba_check.fluxes['EX_lac__L_e']
lacex = model.reactions.get_by_id('EX_lac__L_e')
lacex.lower_bound = 2
fba_check_production = model.optimize()
print(fba_check_production.objective_value, '\n', fba_check_production.fluxes['EX_ch4_e'])
fba_check_production.fluxes['EX_lac__L_e']
```
##### It does not produce lactate when the model objective is the biomass reaction, but it does once a minimum flux through the lactate exchange reaction is enforced (its lower bound is set to 2)
## 4. Use the pipeline functions for the rest of the analysis
Two functions are used here: *whole_procedure_path_definition* and *mdf_analysis*. The former prepares everything needed for the MDF analysis. The latter uses the [equilibrator-pathway](https://gitlab.com/equilibrator/equilibrator-pathway) package to calculate the MDF. More details on their functionalities below.
*whole_procedure_path_definition* calls the functions involved in finding the minimal set of reactions for the production of the target; in particular it:
1. adds the free balancing reactions
2. removes maintenance
3. constrains substrate and product using the stoichiometric coefficients
4. checks that the carbon source is the expected one
5. sets the target as the model objective
6. checks the metabolism
7. gets the reaction list from pFBA
8. processes the raw list to get the minimal reaction set
9. generates an SBtab file with the results, ready for MDF calculation
*mdf_analysis*:
1. calls the function for the identification of the minimal set of reactions active in the conversion
2. uses the generated SBtab for MDF calculation
```
pruned_path = path_definition_mdf.whole_procedure_path_definition('../inputs/ecoli_tutorial.csv', model, '{}_Run_{}.tsv'.format('lac', '9-10'), 'input_mdf_{}_Run_{}.tsv'.format('lac', '9-10'))
mdf_value, path_length = path_definition_mdf.mdf_analysis('../inputs/ecoli_tutorial.csv', model, '{}_Run_{}.tsv'.format('lac', '9-10'), 'input_mdf_{}_Run_{}.tsv'.format('lac', '9-10'))
thermodynamic = {}
if mdf_value == None:
thermodynamic['mdf'] = mdf_value
else:
thermodynamic['mdf'] = mdf_value.magnitude
thermodynamic['pathway_length'] = path_length
thermodynamic
for i in pruned_path:
print(i.id, i.reaction, i.bounds, i.flux)
print(model.objective.expression)
```
<h1 style="color: red;"> TODO </h1>
<h4 style="color: blue;"> Debug: why is the FBA objective value None here when the previous FBA calculation gives a value? Check the thermodynamic module. </h4>
## Conclusions
This module has not been fully debugged yet: the strategy it uses is not the ideal one, but it was one of the most readily available when the pipeline was written. The identification and evaluation of pathways could be facilitated by an algorithm able to enumerate all the possible pathways for a desired phenotype (e.g. production of L-lactate from an *E. coli* strain assimilating methane) and calculate their MDF values. This is exactly what [OptMDFpathway](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006492) does [2]. This method has been implemented as a function in the [CellNetAnalyzer toolbox](https://www2.mpi-magdeburg.mpg.de/projects/cna/cna.html) in Matlab [3]. The CellNetAnalyzer toolbox is being [rewritten in Python](https://github.com/ARB-Lab/CNApy), but the OptMDFpathway function has not been ported yet. Once that function is available, the pipeline will use it in place of the pFBA-MDF approach.
##### References
1. E. Noor, A. Bar-Even, A. Flamholz, E. Reznik, W. Liebermeister, and R. Milo, “Pathway Thermodynamics Highlights Kinetic Obstacles in Central Metabolism,” PLoS Comput. Biol., vol. 10, no. 2, p. e1003483, Feb. 2014, doi: 10.1371/journal.pcbi.1003483
2. O. Hädicke, A. von Kamp, T. Aydogan, and S. Klamt, “OptMDFpathway: Identification of metabolic pathways with maximal thermodynamic driving force and its application for analyzing the endogenous CO2 fixation potential of Escherichia coli,” PLoS Comput. Biol., vol. 14, no. 9, p. e1006492, Sep. 2018, doi: 10.1371/journal.pcbi.1006492.
3. A. von Kamp, S. Thiele, O. Hädicke, and S. Klamt, “Use of CellNetAnalyzer in biotechnology and metabolic engineering,” Journal of Biotechnology, vol. 261. Elsevier B.V., pp. 221–228, Nov. 10, 2017, doi: 10.1016/j.jbiotec.2017.05.001.
# Project 5: NLP on Financial Statements
<!--<badge>--><a href="https://colab.research.google.com/github/ggasbarri/nlp-financial-statements/blob/main/nlp-financial-statements.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a><!--</badge>-->
## Instructions
Each problem consists of a function to implement and instructions on how to implement the function. The parts of the function that need to be implemented are marked with a `# TODO` comment. After implementing the function, run the cell to test it against the unit tests we've provided. For each problem, we provide one or more unit tests from our `project_tests` package. These unit tests won't tell you if your answer is correct, but will warn you of any major errors. Your code will be checked for the correct solution when you submit it to Udacity.
## Packages
When you implement the functions, you'll only need to use the packages you've used in the classroom, like [Pandas](https://pandas.pydata.org/) and [Numpy](http://www.numpy.org/). These packages will be imported for you. We recommend you don't add any import statements, otherwise the grader might not be able to run your code.
The other packages that we're importing are `project_helper` and `project_tests`. These are custom packages built to help you solve the problems. The `project_helper` module contains utility functions and graph functions. The `project_tests` contains the unit tests for all the problems.
### Install Packages
```
import sys
!{sys.executable} -m pip install -r requirements.txt
```
### Load Packages
```
import nltk
import numpy as np
import pandas as pd
import pickle
import pprint
import project_helper
import project_tests
from tqdm import tqdm
```
### Download NLP Corpora
You'll need two corpora to run this project: the stopwords corpus for removing stopwords and wordnet for lemmatizing.
```
nltk.download('stopwords')
nltk.download('wordnet')
```
## Get 10ks
We'll be running NLP analysis on 10-K documents. To do that, we first need to download the documents. For this project, we'll download 10-Ks for a few companies. To look up documents for these companies, we'll use their CIK. If you would like to run this against other stocks, we've provided the dict `additional_cik` for more stocks. However, the more stocks you try, the longer it will take to run.
```
cik_lookup = {
'AMZN': '0001018724',
'BMY': '0000014272',
'CNP': '0001130310',
'CVX': '0000093410',
'FL': '0000850209',
'FRT': '0000034903',
'HON': '0000773840'}
additional_cik = {
'AEP': '0000004904',
'AXP': '0000004962',
'BA': '0000012927',
'BK': '0001390777',
'CAT': '0000018230',
'DE': '0000315189',
'DIS': '0001001039',
'DTE': '0000936340',
'ED': '0001047862',
'EMR': '0000032604',
'ETN': '0001551182',
'GE': '0000040545',
'IBM': '0000051143',
'IP': '0000051434',
'JNJ': '0000200406',
'KO': '0000021344',
'LLY': '0000059478',
'MCD': '0000063908',
'MO': '0000764180',
'MRK': '0000310158',
'MRO': '0000101778',
'PCG': '0001004980',
'PEP': '0000077476',
'PFE': '0000078003',
'PG': '0000080424',
'PNR': '0000077360',
'SYY': '0000096021',
'TXN': '0000097476',
'UTX': '0000101829',
'WFC': '0000072971',
'WMT': '0000104169',
'WY': '0000106535',
'XOM': '0000034088'}
```
### Get list of 10-ks
The SEC has a limit on the number of calls you can make to the website per second. In order to avoid hitting that limit, we've created the `SecAPI` class. This will cache data from the SEC and prevent you from going over the limit.
```
sec_api = project_helper.SecAPI()
```
With the class constructed, let's pull a list of filed 10-Ks from the SEC for each company.
```
from bs4 import BeautifulSoup
def get_sec_data(cik, doc_type, start=0, count=60):
newest_pricing_data = pd.to_datetime('2018-01-01')
rss_url = 'https://www.sec.gov/cgi-bin/browse-edgar?action=getcompany' \
'&CIK={}&type={}&start={}&count={}&owner=exclude&output=atom' \
.format(cik, doc_type, start, count)
sec_data = sec_api.get(rss_url)
feed = BeautifulSoup(sec_data.encode('ascii'), 'xml').feed
entries = [
(
entry.content.find('filing-href').getText(),
entry.content.find('filing-type').getText(),
entry.content.find('filing-date').getText())
for entry in feed.find_all('entry', recursive=False)
if pd.to_datetime(entry.content.find('filing-date').getText()) <= newest_pricing_data]
return entries
```
Let's pull the list using the `get_sec_data` function, then display some of the results. For displaying some of the data, we'll use Amazon as an example.
```
example_ticker = 'AMZN'
sec_data = {}
for ticker, cik in cik_lookup.items():
sec_data[ticker] = get_sec_data(cik, '10-K')
pprint.pprint(sec_data[example_ticker][:5])
```
### Download 10-ks
As you can see, this is a list of URLs. These URLs point to a file that contains metadata related to each filing. Since we don't care about the metadata, we'll pull the filing itself by replacing the URL with the filing URL.
```
raw_fillings_by_ticker = {}
for ticker, data in sec_data.items():
raw_fillings_by_ticker[ticker] = {}
for index_url, file_type, file_date in tqdm(data, desc='Downloading {} Fillings'.format(ticker), unit='filling'):
if (file_type == '10-K'):
file_url = index_url.replace('-index.htm', '.txt').replace('.txtl', '.txt')
raw_fillings_by_ticker[ticker][file_date] = sec_api.get(file_url)
print('Example Document:\n\n{}...'.format(next(iter(raw_fillings_by_ticker[example_ticker].values()))[:1000]))
```
### Get Documents
With these filings downloaded, we want to break them into their associated documents. These documents are sectioned off in the filings with the tags `<DOCUMENT>` for the start of each document and `</DOCUMENT>` for the end of each document. There's no overlap between these documents, so each `</DOCUMENT>` tag should come after the `<DOCUMENT>` tag with no `<DOCUMENT>` tag in between.
Implement `get_documents` to return a list of these documents from a filling. Make sure not to include the tag in the returned document text.
```
import re
def get_documents(text):
"""
Extract the documents from the text
Parameters
----------
text : str
The text with the document strings inside
Returns
-------
extracted_docs : list of str
The document strings found in `text`
"""
regex = re.compile(r"<DOCUMENT>((.|\n)*?)<\/DOCUMENT>", flags=re.IGNORECASE)
matches = regex.finditer(text)
return [match.group(1) for match in matches]
project_tests.test_get_documents(get_documents)
```
With the `get_documents` function implemented, let's extract all the documents.
```
filling_documents_by_ticker = {}
for ticker, raw_fillings in raw_fillings_by_ticker.items():
filling_documents_by_ticker[ticker] = {}
for file_date, filling in tqdm(raw_fillings.items(), desc='Getting Documents from {} Fillings'.format(ticker), unit='filling'):
filling_documents_by_ticker[ticker][file_date] = get_documents(filling)
print('\n\n'.join([
'Document {} Filed on {}:\n{}...'.format(doc_i, file_date, doc[:200])
for file_date, docs in filling_documents_by_ticker[example_ticker].items()
for doc_i, doc in enumerate(docs)][:3]))
```
### Get Document Types
Now that we have all the documents, we want to find the 10-k form in this 10-k filing. Implement the `get_document_type` function to return the type of document given. The document type is located on a line with the `<TYPE>` tag. For example, a form of type "TEST" would have the line `<TYPE>TEST`. Make sure to return the type as lowercase, so this example would be returned as "test".
```
def get_document_type(doc):
"""
Return the document type lowercased
Parameters
----------
doc : str
The document string
Returns
-------
doc_type : str
The document type lowercased
"""
return re.compile(r"<TYPE>(\S*)").search(doc).group(1).lower()
project_tests.test_get_document_type(get_document_type)
```
With the `get_document_type` function, we'll filter out all non 10-k documents.
```
ten_ks_by_ticker = {}
for ticker, filling_documents in filling_documents_by_ticker.items():
ten_ks_by_ticker[ticker] = []
for file_date, documents in filling_documents.items():
for document in documents:
if get_document_type(document) == '10-k':
ten_ks_by_ticker[ticker].append({
'cik': cik_lookup[ticker],
'file': document,
'file_date': file_date})
project_helper.print_ten_k_data(ten_ks_by_ticker[example_ticker][:5], ['cik', 'file', 'file_date'])
```
## Preprocess the Data
### Clean Up
As you can see, the text of the documents is very messy. To clean this up, we'll remove the HTML and lowercase all the text.
```
def remove_html_tags(text):
text = BeautifulSoup(text, 'html.parser').get_text()
return text
def clean_text(text):
text = text.lower()
text = remove_html_tags(text)
return text
```
Using the `clean_text` function, we'll clean up all the documents.
```
for ticker, ten_ks in ten_ks_by_ticker.items():
for ten_k in tqdm(ten_ks, desc='Cleaning {} 10-Ks'.format(ticker), unit='10-K'):
ten_k['file_clean'] = clean_text(ten_k['file'])
project_helper.print_ten_k_data(ten_ks_by_ticker[example_ticker][:5], ['file_clean'])
```
### Lemmatize
With the text cleaned up, it's time to distill the verbs down. Implement the `lemmatize_words` function to lemmatize verbs in the list of words provided.
```
from nltk.stem import WordNetLemmatizer
from nltk.corpus import wordnet
def lemmatize_words(words):
"""
Lemmatize words
Parameters
----------
words : list of str
List of words
Returns
-------
lemmatized_words : list of str
List of lemmatized words
"""
return [WordNetLemmatizer().lemmatize(word,'v') for word in words]
project_tests.test_lemmatize_words(lemmatize_words)
```
With the `lemmatize_words` function implemented, let's lemmatize all the data.
```
word_pattern = re.compile('\w+')
for ticker, ten_ks in ten_ks_by_ticker.items():
for ten_k in tqdm(ten_ks, desc='Lemmatize {} 10-Ks'.format(ticker), unit='10-K'):
ten_k['file_lemma'] = lemmatize_words(word_pattern.findall(ten_k['file_clean']))
project_helper.print_ten_k_data(ten_ks_by_ticker[example_ticker][:5], ['file_lemma'])
```
### Remove Stopwords
```
from nltk.corpus import stopwords
lemma_english_stopwords = lemmatize_words(stopwords.words('english'))
for ticker, ten_ks in ten_ks_by_ticker.items():
for ten_k in tqdm(ten_ks, desc='Remove Stop Words for {} 10-Ks'.format(ticker), unit='10-K'):
ten_k['file_lemma'] = [word for word in ten_k['file_lemma'] if word not in lemma_english_stopwords]
print('Stop Words Removed')
```
## Analysis on 10ks
### Loughran McDonald Sentiment Word Lists
We'll be using the Loughran and McDonald sentiment word lists. These word lists cover the following sentiments:
- Negative
- Positive
- Uncertainty
- Litigious
- Constraining
- Superfluous
- Modal
This will allow us to do the sentiment analysis on the 10-ks. Let's first load these word lists. We'll be looking into a few of these sentiments.
```
import os
sentiments = ['negative', 'positive', 'uncertainty', 'litigious', 'constraining', 'interesting']
sentiment_df = pd.read_csv(os.path.join('..', '..', 'data', 'project_5_loughran_mcdonald', 'loughran_mcdonald_master_dic_2016.csv'))
sentiment_df.columns = [column.lower() for column in sentiment_df.columns] # Lowercase the columns for ease of use
# Remove unused information
sentiment_df = sentiment_df[sentiments + ['word']]
sentiment_df[sentiments] = sentiment_df[sentiments].astype(bool)
sentiment_df = sentiment_df[(sentiment_df[sentiments]).any(1)]
# Apply the same preprocessing to these words as the 10-k words
sentiment_df['word'] = lemmatize_words(sentiment_df['word'].str.lower())
sentiment_df = sentiment_df.drop_duplicates('word')
sentiment_df.head()
```
### Bag of Words
Using the sentiment word lists, let's generate a sentiment bag of words from the 10-K documents. Implement `get_bag_of_words` to generate a bag of words that counts the number of sentiment words in each doc. You can ignore words that are not in `sentiment_words`.
```
from collections import defaultdict, Counter
from sklearn.feature_extraction.text import CountVectorizer
def get_bag_of_words(sentiment_words, docs):
"""
Generate a bag of words from documents for a certain sentiment
Parameters
----------
sentiment_words: Pandas Series
Words that signify a certain sentiment
docs : list of str
List of documents used to generate bag of words
Returns
-------
bag_of_words : 2-d Numpy Ndarray of int
Bag of words sentiment for each document
The first dimension is the document.
The second dimension is the word.
"""
return CountVectorizer(vocabulary=sentiment_words).fit_transform(docs).toarray()
project_tests.test_get_bag_of_words(get_bag_of_words)
```
Using the `get_bag_of_words` function, we'll generate a bag of words for all the documents.
```
sentiment_bow_ten_ks = {}
for ticker, ten_ks in ten_ks_by_ticker.items():
lemma_docs = [' '.join(ten_k['file_lemma']) for ten_k in ten_ks]
sentiment_bow_ten_ks[ticker] = {
sentiment: get_bag_of_words(sentiment_df[sentiment_df[sentiment]]['word'], lemma_docs)
for sentiment in sentiments}
project_helper.print_ten_k_data([sentiment_bow_ten_ks[example_ticker]], sentiments)
```
### Jaccard Similarity
Using the bag of words, let's calculate the jaccard similarity on the bag of words and plot it over time. Implement `get_jaccard_similarity` to return the jaccard similarities between each tick in time. Since the input, `bag_of_words_matrix`, is a bag of words for each time period in order, you just need to compute the jaccard similarities for each neighboring bag of words. Make sure to turn the bag of words into a boolean array when calculating the jaccard similarity.
```
from sklearn.metrics import jaccard_similarity_score
def get_jaccard_similarity(bag_of_words_matrix):
"""
Get jaccard similarities for neighboring documents
Parameters
----------
bag_of_words : 2-d Numpy Ndarray of int
Bag of words sentiment for each document
The first dimension is the document.
The second dimension is the word.
Returns
-------
jaccard_similarities : list of float
Jaccard similarities for neighboring documents
"""
iterable = bag_of_words_matrix.astype(bool)
return [jaccard_similarity_score(iterable[index], iterable[index+1]) for index in range(iterable.shape[0]-1)]
project_tests.test_get_jaccard_similarity(get_jaccard_similarity)
```
Using the `get_jaccard_similarity` function, let's plot the similarities over time.
```
# Get dates for the universe
file_dates = {
ticker: [ten_k['file_date'] for ten_k in ten_ks]
for ticker, ten_ks in ten_ks_by_ticker.items()}
jaccard_similarities = {
ticker: {
sentiment_name: get_jaccard_similarity(sentiment_values)
for sentiment_name, sentiment_values in ten_k_sentiments.items()}
for ticker, ten_k_sentiments in sentiment_bow_ten_ks.items()}
project_helper.plot_similarities(
[jaccard_similarities[example_ticker][sentiment] for sentiment in sentiments],
file_dates[example_ticker][1:],
'Jaccard Similarities for {} Sentiment'.format(example_ticker),
sentiments)
```
### TFIDF
Using the sentiment word lists, let's generate sentiment TFIDF values from the 10-K documents. Implement `get_tfidf` to generate TFIDF from each document, using sentiment words as the terms. You can ignore words that are not in `sentiment_words`.
```
from sklearn.feature_extraction.text import TfidfVectorizer
def get_tfidf(sentiment_words, docs):
"""
Generate TFIDF values from documents for a certain sentiment
Parameters
----------
sentiment_words: Pandas Series
Words that signify a certain sentiment
docs : list of str
List of documents used to generate bag of words
Returns
-------
tfidf : 2-d Numpy Ndarray of float
TFIDF sentiment for each document
The first dimension is the document.
The second dimension is the word.
"""
return TfidfVectorizer(vocabulary=sentiment_words).fit_transform(docs).toarray()
project_tests.test_get_tfidf(get_tfidf)
```
Using the `get_tfidf` function, let's generate the TFIDF values for all the documents.
```
sentiment_tfidf_ten_ks = {}
for ticker, ten_ks in ten_ks_by_ticker.items():
lemma_docs = [' '.join(ten_k['file_lemma']) for ten_k in ten_ks]
sentiment_tfidf_ten_ks[ticker] = {
sentiment: get_tfidf(sentiment_df[sentiment_df[sentiment]]['word'], lemma_docs)
for sentiment in sentiments}
project_helper.print_ten_k_data([sentiment_tfidf_ten_ks[example_ticker]], sentiments)
```
### Cosine Similarity
Using the TFIDF values, we'll calculate the cosine similarity and plot it over time. Implement `get_cosine_similarity` to return the cosine similarities between each tick in time. Since the input, `tfidf_matrix`, is a TFIDF vector for each time period in order, you just need to compute the cosine similarities for each neighboring vector.
```
from sklearn.metrics.pairwise import cosine_similarity
def get_cosine_similarity(tfidf_matrix):
"""
Get cosine similarities for each neighboring TFIDF vector/document
Parameters
----------
tfidf : 2-d Numpy Ndarray of float
TFIDF sentiment for each document
The first dimension is the document.
The second dimension is the word.
Returns
-------
cosine_similarities : list of float
Cosine similarities for neighboring documents
"""
iterable = tfidf_matrix
return [cosine_similarity(iterable[index].reshape(1, -1), iterable[index+1].reshape(1, -1))[0][0] \
for index in range(iterable.shape[0]-1)]
project_tests.test_get_cosine_similarity(get_cosine_similarity)
```
Let's plot the cosine similarities over time.
```
cosine_similarities = {
ticker: {
sentiment_name: get_cosine_similarity(sentiment_values)
for sentiment_name, sentiment_values in ten_k_sentiments.items()}
for ticker, ten_k_sentiments in sentiment_tfidf_ten_ks.items()}
project_helper.plot_similarities(
[cosine_similarities[example_ticker][sentiment] for sentiment in sentiments],
file_dates[example_ticker][1:],
'Cosine Similarities for {} Sentiment'.format(example_ticker),
sentiments)
```
## Evaluate Alpha Factors
Just like we did in project 4, let's evaluate the alpha factors. For this section, we'll just be looking at the cosine similarities, but it can be applied to the jaccard similarities as well.
### Price Data
Let's get yearly pricing to run the factor against, since 10-Ks are produced annually.
```
pricing = pd.read_csv('../../data/project_5_yr/yr-quotemedia.csv', parse_dates=['date'])
pricing = pricing.pivot(index='date', columns='ticker', values='adj_close')
pricing
```
### Dict to DataFrame
The alphalens library uses dataframes, so we'll need to turn our dictionary into a dataframe.
```
cosine_similarities_df_dict = {'date': [], 'ticker': [], 'sentiment': [], 'value': []}
for ticker, ten_k_sentiments in cosine_similarities.items():
for sentiment_name, sentiment_values in ten_k_sentiments.items():
for sentiment_values, sentiment_value in enumerate(sentiment_values):
cosine_similarities_df_dict['ticker'].append(ticker)
cosine_similarities_df_dict['sentiment'].append(sentiment_name)
cosine_similarities_df_dict['value'].append(sentiment_value)
cosine_similarities_df_dict['date'].append(file_dates[ticker][1:][sentiment_values])
cosine_similarities_df = pd.DataFrame(cosine_similarities_df_dict)
cosine_similarities_df['date'] = pd.DatetimeIndex(cosine_similarities_df['date']).year
cosine_similarities_df['date'] = pd.to_datetime(cosine_similarities_df['date'], format='%Y')
cosine_similarities_df.head()
```
### Alphalens Format
In order to use a lot of the alphalens functions, we need to align the indices and convert the times to unix timestamps. In this next cell, we'll do just that.
```
import alphalens as al
factor_data = {}
skipped_sentiments = []
for sentiment in sentiments:
cs_df = cosine_similarities_df[(cosine_similarities_df['sentiment'] == sentiment)]
cs_df = cs_df.pivot(index='date', columns='ticker', values='value')
try:
data = al.utils.get_clean_factor_and_forward_returns(cs_df.stack(), pricing, quantiles=5, bins=None, periods=[1])
factor_data[sentiment] = data
except:
skipped_sentiments.append(sentiment)
if skipped_sentiments:
print('\nSkipped the following sentiments:\n{}'.format('\n'.join(skipped_sentiments)))
factor_data[sentiments[0]].head()
```
### Alphalens Format with Unix Time
Alphalens' `factor_rank_autocorrelation` and `mean_return_by_quantile` functions require unix timestamps to work, so we'll also create factor dataframes with unix time.
```
unixt_factor_data = {
factor: data.set_index(pd.MultiIndex.from_tuples(
[(x.timestamp(), y) for x, y in data.index.values],
names=['date', 'asset']))
for factor, data in factor_data.items()}
```
### Factor Returns
Let's view the factor returns over time. We should be seeing it generally move up and to the right.
```
ls_factor_returns = pd.DataFrame()
for factor_name, data in factor_data.items():
ls_factor_returns[factor_name] = al.performance.factor_returns(data).iloc[:, 0]
(1 + ls_factor_returns).cumprod().plot()
```
### Basis Points Per Day per Quantile
It is not enough to look just at the factor-weighted return. A good alpha is also monotonic in quantiles. Let's look at the basis points for the factor returns.
```
qr_factor_returns = pd.DataFrame()
for factor_name, data in unixt_factor_data.items():
qr_factor_returns[factor_name] = al.performance.mean_return_by_quantile(data)[0].iloc[:, 0]
(10000*qr_factor_returns).plot.bar(
subplots=True,
sharey=True,
layout=(5,3),
figsize=(14, 14),
legend=False)
```
### Turnover Analysis
Without doing a full and formal backtest, we can analyze how stable the alphas are over time. Stability in this sense means that from period to period, the alpha ranks do not change much. Since trading is costly, we always prefer, all other things being equal, that the ranks do not change significantly per period. We can measure this with the **Factor Rank Autocorrelation (FRA)**.
```
ls_FRA = pd.DataFrame()
for factor, data in unixt_factor_data.items():
ls_FRA[factor] = al.performance.factor_rank_autocorrelation(data)
ls_FRA.plot(title="Factor Rank Autocorrelation")
```
### Sharpe Ratio of the Alphas
The last analysis we'll do on the factors is the Sharpe ratio. Let's see what the Sharpe ratios for the factors are. Generally, a Sharpe ratio of near 1.0 or higher is an acceptable single alpha for this universe.
```
daily_annualization_factor = np.sqrt(252)
(daily_annualization_factor * ls_factor_returns.mean() / ls_factor_returns.std()).round(2)
```
That's it! You've successfully done sentiment analysis on 10-ks!
## Submission
Now that you're done with the project, it's time to submit it. Click the submit button in the bottom right. One of our reviewers will give you feedback on your project with a pass or not passed grade. You can continue to the next section while you wait for feedback.
# <center> Introduction to Spark In-memory Computing via Python PySpark </center>
```
!bash launch_spark_cluster.sh
import sys
import os
import pyspark
env_spark_home=os.path.join(os.environ['HOME'],"software","spark-2.4.5-bin-hadoop2.7")
env_spark_conf_dir=os.path.join(env_spark_home,"conf")
env_pyspark_python=os.path.join("/software","anaconda3","5.1.0","bin","python")
os.environ['SPARK_HOME'] = env_spark_home
os.environ['SPARK_CONF_DIR'] = env_spark_conf_dir
os.environ['PYSPARK_PYTHON'] = env_pyspark_python
fp = open(os.path.join(env_spark_conf_dir,"master"))
node_list = fp.readlines()
import pyspark
conf = pyspark.SparkConf()
conf.setMaster("spark://" + node_list[0].strip() + ":7077")
conf.setAppName('big-data-workshop')
conf.set("spark.driver.memory","5g")
conf.set("spark.executor.instances", "3")
conf.set("spark.executor.memory","13g")
conf.set("spark.executor.cores","8")
sc = pyspark.SparkContext(conf=conf)
print(sc)
```
### Airlines Data
**Spark SQL**
- Spark module for structured data processing
- provides more information about the structure of both the data and the computation being performed, which Spark can use for additional optimization
- executes SQL queries written in either basic SQL syntax or HiveQL
**DataFrame**
- a distributed collection of data organized into named columns
- conceptually equivalent to a table in a relational database or a data frame in R/Python, but with richer optimizations under the hood
- can be constructed from a wide array of sources, such as structured data files, tables in Hive, external databases, or existing RDDs (a minimal sketch of building one from an RDD follows below)
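As a minimal illustration of that last point — separate from the airlines workflow below, and using made-up carrier names and counts — a DataFrame can be created directly from an existing RDD of tuples:
```
# Minimal sketch only: toy data and column names here are invented.
toy_sql = pyspark.SQLContext(sc)
toy_rdd = sc.parallelize([('EV', 10), ('WN', 25), ('DL', 40)])
toy_df = toy_sql.createDataFrame(toy_rdd, ['UniqueCarrier', 'FlightCount'])
toy_df.show()
```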
```
sqlContext = pyspark.SQLContext(sc)
sqlContext
airlines = sqlContext.read.format("com.databricks.spark.csv")\
.option("header", "true")\
.option("inferschema", "true")\
.load("/zfs/citi/airlines/data/")\
.cache()
%%time
airlines.count()
%%time
airlines.count()
airlines.printSchema()
```
You can interact with a DataFrame via SQLContext using SQL statements by registering the DataFrame as a table
```
airlines.registerTempTable("airlines")
```
*How many unique airlines are there?*
```
uniqueAirline = sqlContext.sql("SELECT DISTINCT UniqueCarrier \
FROM airlines")
uniqueAirline.show()
```
*Calculate how many flights were completed by each carrier over time*
```
%%time
carrierFlightCount = sqlContext.sql("SELECT UniqueCarrier, COUNT(UniqueCarrier) AS FlightCount \
FROM airlines GROUP BY UniqueCarrier")
carrierFlightCount.show()
```
*How do you display full carrier names?*
```
carriers = sqlContext.read.format("com.databricks.spark.csv")\
.option("header", "true")\
.option("inferschema", "true")\
.load("/zfs/citi/airlines/metadata/carriers.csv")\
.cache()
carriers.registerTempTable("carriers")
carriers.printSchema()
%%time
carrierFlightCountFullName = sqlContext.sql("SELECT c.Description, a.UniqueCarrier, COUNT(a.UniqueCarrier) AS FlightCount \
FROM airlines AS a \
INNER JOIN carriers AS c \
ON c.Code = a.UniqueCarrier \
GROUP BY a.UniqueCarrier, c.Description \
ORDER BY a.UniqueCarrier")
carrierFlightCountFullName.show()
```
*What is the average departure delay time for each airline?*
```
%%time
avgDepartureDelay = sqlContext.sql("SELECT FIRST(c.Description), FIRST(a.UniqueCarrier), AVG(a.DepDelay) AS AvgDepDelay \
FROM airlines AS a \
INNER JOIN carriers AS c \
ON c.Code = a.UniqueCarrier \
GROUP BY a.UniqueCarrier \
ORDER BY a.UniqueCarrier")
avgDepartureDelay.show()
airlines.unpersist()
sc.stop()
!bash stop_spark_cluster.sh
```
# Tutorial for Interactig with OpenStack Swift from Python
### Purpose
The purpose of this notebook is to show how to interact with an instance of OpenStack Swift from Python. This notebook uses the official OpenStack SDK for Python and has been tested with Anaconda Python 3. Please note that not all error-checking best practices have been implemented in this tutorial, in order to keep the focus on the essential mechanics of the SDK in a reduced amount of code.
### Reference documents
* Source of the Python bindings to the OpenStack object storage API: [https://github.com/openstack/python-swiftclient](https://github.com/openstack/python-swiftclient)
* `python-swiftclient` in PyPi: [https://pypi.python.org/pypi/python-swiftclient](https://pypi.python.org/pypi/python-swiftclient)
* `swifclient` documentation: [http://docs.openstack.org/developer/python-swiftclient/swiftclient.html#swiftclient.client.head_object](http://docs.openstack.org/developer/python-swiftclient/swiftclient.html#swiftclient.client.head_object)
### Connect to the OpenStack Swift service
We use the OpenStack service credentials extracted from the environmental variables:
* `OS_AUTH_URL`: authentication end point (Keystone)
* `OS_USERNAME`: user name
* `OS_TENANT_NAME`: tenant name
* `OS_PASSWORD`: very long key
You can specify your credentials directly in this notebook, if you wish. The values associated to this credentials must be provided to you by the administration of the OpenStack Swift instance you need to interact with.
```
import os
import datetime
import tempfile
import hashlib
import swiftclient as swift
import requests
authurl = os.getenv('OS_AUTH_URL', 'your_auth_endpoint')
user = os.getenv('OS_USERNAME', 'your_username')
tenant_name = os.getenv('OS_TENANT_NAME', 'your_tenant_name')
key = os.getenv('OS_PASSWORD', 'your_password')
conn = swift.Connection(authurl=authurl, user=user, key=key, tenant_name=tenant_name, auth_version='2')
```
We have now a connection object `conn` through which we will interact with Swift.
### Create a new container
Let's create a new container for our tests, which we will destroy later. The name of the container is composed of the prefix `butlerswift`, followed by the user name and the creation timestamp.
Creating a new container is an idempotent operation: if the container already exists it is a no-operation.
```
container_prefix = 'butlerswift-{0}-'.format(user)
container_name = container_prefix + '{:%Y%m%d%H%M%S}'.format(datetime.datetime.now())
conn.put_container(container_name)
```
### Check the existence of the container
To check the existence of the container we issue an `HTTP HEAD` request. If the container exists, a set of `HTTP` headers associated with this container is returned in the form of a dictionary. We use the value of one of them, `x-container-object-count`, which tells us the number of objects in the container.
```
try:
headers = conn.head_container(container_name)
num_objects = headers['x-container-object-count']
print("Container \'{0}\' does exist and contains {1} objects".format(container_name, num_objects))
except swift.ClientException as err:
print("Container \'{0}\' not found: {1}".format(container_name, err))
```
### Upload a new object into the container
Download a sample FITS file and store it on local disk. We will then upload that file to Swift.
```
def download_file(url):
""" Download a file from the argument url and store its contents in a local file
The name of the local file is built from the last component of the url path.
The file is downloaded only if there is no local file with the same name. Therefore, if this
function is called several times with the same url the file will be downloaded only once.
Returns the file name on disk.
"""
file_name = url.split('/')[-1]
if os.path.exists(file_name):
return file_name
req = requests.get(url, stream=True)
with open(file_name, 'wb') as f:
for chunk in req.iter_content(chunk_size=1024*2014):
if chunk:
f.write(chunk)
return file_name
# Download a sample FITS file from http://fits.gsfc.nasa.gov/fits_samples.html
url = "http://fits.gsfc.nasa.gov/samples/WFPC2u5780205r_c0fx.fits"
file_name = download_file(url)
```
Now upload the local FITS file to Swift:
```
# This is the object key: in Swift, an object is uniquely identified by the container it resides in and its object key.
# The slash (/) characters in the key have no meaning for Swift.
def get_file_size(file_name):
statinfo = os.stat(file_name)
return statinfo.st_size
object_size = get_file_size(file_name)
object_key = "fits/" + file_name
with open(file_name, 'rb') as f:
# put_object returns the object's etag
etag = conn.put_object(container=container_name, obj=object_key, contents=f, content_length=object_size)
```
### Check for the existence of a particular object
To check the existence of an object within a container the SDK will issue an `HTTP HEAD` request. The Swift service responds with a dictionary of headers. We use the value of the header `content-length` to retrieve the size in bytes of the object.
```
try:
headers = conn.head_object(container_name, object_key)
count = headers['content-length']
print("Object \'{0}\' does exist and contains {1} bytes".format(object_key, count))
except swift.ClientException as err:
print("Object \'{0}\' not found: {1}".format(object_key, err))
```
### List all the objects of a container
```
try:
headers, objects = conn.get_container(container_name)
# Show some information about this container
num_objects = int(headers['x-container-object-count'])
print("Container \'{0}\':".format(container_name))
print(" number of bytes used:", headers['x-container-bytes-used'])
print(" number of objects:", num_objects)
# Show some details of the objects of this container
if num_objects > 0:
print("\nObject details:")
for o in objects:
print(" ", o['bytes'], o['name'])
except swift.ClientException as err:
print("Container \'{0}\' not found: {1}".format(container_name, err))
```
### Download an object
Here we download the contents of a Swift object to a disk file and compare its contents against the contents of the original file uploaded above.
```
def get_md5_digest(file_name):
"""Computes and returns the MD5 digest of a disk file
"""
hasher = hashlib.md5()
with open(file_name, 'rb') as f:
block_size = 64 * 1024
buffer = f.read(block_size)
while len(buffer) > 0:
hasher.update(buffer)
buffer = f.read(block_size)
return hasher.hexdigest()
# Download a Swift object and compare its contents to the contents of the original file
copy_file_name = 'copy-' + file_name
try:
headers, contents = conn.get_object(container_name, object_key)
with open(copy_file_name, 'wb') as f:
f.write(contents)
copy_md5 = get_md5_digest(copy_file_name)
original_md5 = get_md5_digest(file_name)
if copy_md5 != original_md5:
print("the contents of the uploaded file and the downloaded file do not match")
except swift.ClientException as err:
print("Could not download object \'{0}/{1}\' not found: {2}".format(container_name, object_key, err))
```
### Delete my containers
Delete the containers created by the execution of this notebook, that is, the containers with prefix "`butlerswift-user-`"
```
def delete_container(conn, container):
try:
# Delete all the objects in this container
headers, objects = conn.get_container(container)
for o in objects:
conn.delete_object(container, o['name'])
# Delete the container itself
conn.delete_container(container)
except swift.ClientException as err:
print("Error deleting \'{0}\' not found: {1}".format(container, err))
# Delete all my containers, i.e. all containers starting with prefix "butlerswift-user-"
resp_headers, containers = conn.get_account()
for c in containers:
name = c['name']
if name.startswith(container_prefix):
print("Deleting container \'{}\'".format(name))
delete_container(conn, name)
```
# Fashion-MNIST
* Fashion-MNIST is a dataset of Zalando's article images—consisting of a training set of 60,000 examples and a test set of 10,000 examples.
* Each example is a 28x28 grayscale image, associated with a label from 10 classes.
* Zalando intends Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms.
* It shares the same image size and structure of training and testing splits.
## Topics
1. [**Exploring the Dataset**](#there_you_go_1)
> * [1.1 Importing Libraries ](#there_you_go_1.1)
* [1.2 Extract dataset ](#there_you_go_1.2)
* [1.3 Features ](#there_you_go_1.3)
* [1.4 Examine Dimensions ](#there_you_go_1.4)
* [1.5 Examine NaN values ](#there_you_go_1.5)
2. [**Visualizing the Dataset**](#there_you_go_2)
> * [2.1 Plotting Random Images ](#there_you_go_2.1)
* [2.2 Distribution of Labels ](#there_you_go_2.2)
3. [**Data PreProcessing**](#there_you_go_3)
> * [3.1 Setting Random Seeds ](#there_you_go_3.1)
* [3.2 Splitting Data ](#there_you_go_3.2)
* [3.3 Reshaping Images ](#there_you_go_3.3)
* [3.4 Normalization ](#there_you_go_3.4)
* [3.5 One Hot Encoding ](#there_you_go_3.5)
4. [**Training ConvNet**](#there_you_go_4)
> * [4.1 Building a ConvNet ](#there_you_go_4.1)
* [4.2 Compiling Model ](#there_you_go_4.2)
* [4.3 Model Summary ](#there_you_go_4.3)
* [4.4 Learning Rate Decay ](#there_you_go_4.4)
* [4.5 Data Augmentation ](#there_you_go_4.5)
* [4.6 Fitting the Model](#there_you_go_4.6)
5. [**Evaluating the Model**](#there_you_go_5)
> * [5.1 Plotting Train and Validation curves ](#there_you_go_5.1)
6. [**Plotting Confusion Matrix**](#there_you_go_6)
7. [**Visualization of Predicted Classes**](#there_you_go_7)
> * [7.1 Correctly Predicted Classes](#there_you_go_7.1)
* [7.2 Incorrectly Predicted Classes](#there_you_go_7.2)
8. [**Classification Report**](#there_you_go_8)
9. [**Predicting on Test Data**](#there_you_go_9)
<a id="there_you_go_1"></a>
# 1) Exploring The Dataset
<a id="there_you_go_1.1"></a>
## 1.1) Importing Libraries
```
# Ignore warnings :
import warnings
warnings.filterwarnings('ignore')
# Handle table-like data and matrices :
import numpy as np
import pandas as pd
import math
import itertools
# Modelling Algorithms :
# Classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier , GradientBoostingClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis , QuadraticDiscriminantAnalysis
# Regression
from sklearn.linear_model import LinearRegression,Ridge,Lasso,RidgeCV, ElasticNet
from sklearn.ensemble import RandomForestRegressor,BaggingRegressor,GradientBoostingRegressor,AdaBoostRegressor
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
# Modelling Helpers :
from sklearn.preprocessing import Imputer , Normalizer , scale
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.feature_selection import RFECV
from sklearn.model_selection import GridSearchCV , KFold , cross_val_score
#preprocessing :
from sklearn.preprocessing import MinMaxScaler , StandardScaler, Imputer, LabelEncoder
#evaluation metrics :
# Regression
from sklearn.metrics import mean_squared_log_error,mean_squared_error, r2_score,mean_absolute_error
# Classification
from sklearn.metrics import accuracy_score,precision_score,recall_score,f1_score
# Deep Learning Libraries
from keras.models import Sequential, load_model
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization
from keras.optimizers import Adam,SGD,Adagrad,Adadelta,RMSprop
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ReduceLROnPlateau, LearningRateScheduler
from keras.utils import to_categorical
# Visualisation
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.pylab as pylab
import seaborn as sns
import missingno as msno
# Configure visualisations
%matplotlib inline
mpl.style.use( 'ggplot' )
plt.style.use('fivethirtyeight')
sns.set(context="notebook", palette="dark", style = 'whitegrid' , color_codes=True)
params = {
'axes.labelsize': "large",
'xtick.labelsize': 'x-large',
'legend.fontsize': 20,
'figure.dpi': 150,
'figure.figsize': [25, 7]
}
plt.rcParams.update(params)
# Center all plots
from IPython.core.display import HTML
HTML("""
<style>
.output_png {
display: table-cell;
text-align: center;
vertical-align: middle;
}
</style>
""");
```
<a id="there_you_go_1.2"></a>
## 1.2) Extract Dataset
```
train = pd.read_csv('../input/fashion-mnist_train.csv')
test = pd.read_csv('../input/fashion-mnist_test.csv')
df = train.copy()
df_test = test.copy()
df.head()
```
<a id="there_you_go_1.3"></a>
## 1.3) Features
* **Label:** The target variable.
* **Pixels:** The smallest unit of a digital image or graphic that can be displayed on a digital display device.
Humans see objects because light receptors in their eyes send signals via the optic nerve to the primary visual cortex, where the input is processed.
Computers, on the other hand, see an image as a 2-dimensional array of numbers, known as pixels, and classify images based on the boundaries and curvatures of the object (represented by pixel values, either RGB or grayscale).
Below is a partial view of the labels and the dataset.

<a id="there_you_go_1.4"></a>
## 1.4) Examine Dimensions
```
print('Train: ', df.shape)
print('Test: ', df_test.shape)
```
* **So, there are 60,000 Training Samples and 10,000 Test Samples.**
* **Each example is a 28x28 grayscale image, associated with a label from 10 classes.**
> * Each pixel has a single pixel-value associated with it, indicating the lightness or darkness of that pixel, with higher numbers meaning darker.
> * This pixel-value is an integer between 0 and 255, inclusive.
* **The first column of the Training Samples consists of Class Labels and represents the article of Clothing.**
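As a quick sanity check (a small sketch that is not part of the original notebook), we can confirm the stated pixel range directly from the training DataFrame:
```
# Sanity check (illustrative): pixel values should lie in [0, 255].
pixels = df.drop('label', axis=1).values
print(pixels.min(), pixels.max())
```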
```
df.label.unique()
```
**Labels :**
* **0 - ** T-shirt/top
* **1 - ** Trouser
* **2 - ** Pullover
* **3 - ** Dress
* **4 - ** Coat
* **5 - ** Sandals
* **6 - ** Shirt
* **7 - ** Sneaker
* **8 - ** Bag
* **9 - ** Ankle Boots
<a id="there_you_go_1.5"></a>
## 1.5) Examine NaN Values
```
# Train
df.isnull().any().sum()
# Test
df_test.isnull().any().sum()
```
**Great, so there are no null values in the train and test sets.**
<a id="there_you_go_2"></a>
# 2) Visualizing the Dataset
<a id="there_you_go_2.1"></a>
## 2.1) Plotting Random Images
```
# Mapping Classes
clothing = {0 : 'T-shirt/top',
1 : 'Trouser',
2 : 'Pullover',
3 : 'Dress',
4 : 'Coat',
5 : 'Sandal',
6 : 'Shirt',
7 : 'Sneaker',
8 : 'Bag',
9 : 'Ankle boot'}
fig, axes = plt.subplots(4, 4, figsize = (9,9))
for row in axes:
for axe in row:
index = np.random.randint(60000)
img = df.drop('label', axis=1).values[index].reshape(28,28)
cloths = df['label'][index]
axe.imshow(img, cmap='gray')
axe.set_title(clothing[cloths])
axe.set_axis_off()
```
**Look at these images; I bet there are some that even humans won't be able to classify.**
<a id="there_you_go_2.2"></a>
## 2.2) Distribution of Labels
**Let's look at the distribution of labels to check whether any classes are skewed.**
```
df['label'].value_counts()
sns.factorplot(x='label', data=df, kind='count', size=3, aspect= 1.5)
```
* **We can see that all classes are equally Distributed.**
* **So, there is no need for OverSampling or UnderSampling.**
<a id="there_you_go_3"></a>
# 3) Data PreProcessing
<a id="there_you_go_3.1"></a>
## 3.1) Setting Random Seeds
```
# Setting Random Seeds for Reproducibilty.
seed = 66
np.random.seed(seed)
```
<a id="there_you_go_3.2"></a>
## 3.2) Splitting Data into Train and Validation Set
Now we are going to split the training data into a train set and a validation set. The train set is used for training the model and the validation set is used for evaluating the model's performance.
This is achieved using the train_test_split method of the scikit-learn library.
```
X = train.iloc[:,1:]
Y = train.iloc[:,0]
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.1, random_state=seed)
```
<a id="there_you_go_3.3"></a>
## 3.3) Reshaping the Images
* Note that the images are given as 1D vectors, each containing 784 pixels. Before we feed the data to the CNN we must reshape it into (28x28x1) 3D arrays.
* This is because Keras wants an extra dimension at the end for channels. If these were RGB images there would be 3 channels, but since Fashion-MNIST is grayscale it only uses one.
```
# The first parameter in reshape indicates the number of examples.
# We pass it as -1, which means that it is an unknown dimension and we want numpy to figure it out.
# reshape(examples, height, width, channels)
x_train = x_train.values.reshape((-1, 28, 28, 1))
x_test = x_test.values.reshape((-1, 28, 28, 1))
df_test.drop('label', axis=1, inplace=True)
df_test = df_test.values.reshape((-1, 28, 28, 1))
```
<a id="there_you_go_3.4"></a>
## 3.4) Normalization
The pixel values are stored as __*integer*__ numbers in the range 0 to 255, the range that a single 8-bit byte can offer.
They need to be scaled down to [0,1] so that the optimization algorithm converges faster. Note that this is min-max scaling to the range [0,1], not standardization to zero mean and unit variance.
Normalization is carried out as follows:
> x = (x - min) / (max - min) ; Here min=0 and max=255

```
# You need to make sure that your Image is cast into double/float from int before you do this scaling
# as you will most likely generate floating point numbers.
# And had it been int, the values will be truncated to zero.
x_train = x_train.astype("float32")/255
x_test = x_test.astype("float32")/255
df_test = df_test.astype("float32")/255
```
<a id="there_you_go_3.5"></a>
## 3.5) One Hot Encoding
The labels are given as integers between 0 and 9. We need to one-hot encode them, e.g. 8 becomes [0, 0, 0, 0, 0, 0, 0, 0, 1, 0].
We have 10 classes [0-9], therefore we one-hot encode the target variable with 10 classes (a quick illustration follows below).
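As a quick illustration (not part of the original notebook), `to_categorical` turns the label 8 into exactly the vector shown above:
```
# Quick illustration only: one-hot encoding a single label.
to_categorical([8], num_classes=10)
# -> [[0, 0, 0, 0, 0, 0, 0, 0, 1, 0]]
```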
```
y_train = to_categorical(y_train, num_classes=10)
y_test = to_categorical(y_test, num_classes=10)
print(y_train.shape)
print(y_test.shape)
```
<a id="there_you_go_4"></a>
# 4) Training a Convolutional Neural Network
<a id="there_you_go_4.1"></a>
## 4.1) Building a ConvNet
* Steps:
1) At First, we use **Sequential Keras API** which is just a linear stack of layers. We add one layer at a time starting from input.
2) Next we add **Convolutional Layers**, which are the building blocks of ConvNets. A convolutional layer has a set of independent filters whose depth is equal to the input depth, while the other dimensions can be set manually. These filters, when convolved over the input image, produce feature maps.
It has hyperparameters such as **the number of filters, the filter dimensions (F), stride (S), padding (P), and the activation function**, which we set manually. Let the input volume size be denoted by (W).
**Then the output will have dimensions given by:**
**(Height, Width) = ((W − F + 2P) / S) + 1**
and the depth will be equal to the number of filters specified. (A small worked example of this formula follows this list.)
3) Next we add **Pooling Layers**, which are used for dimensionality reduction, i.e. downsampling the input. They drastically reduce the number of parameters and the computational power required, thus reducing overfitting. Together with convolutional layers they are able to learn more complex features of the image.
4) We add **Batch Normalization**, which normalizes the activations of a layer to zero mean and unit variance. It scales down outliers and forces the network to learn features in a distributed way, not relying too much on any particular weight, which helps the model generalize better.
5) To avoid overfitting we add **Dropout**. This randomly drops some percentage of neurons during training, so the remaining neurons must learn more robust features and the dependency on any single neuron is reduced. Dropout is a regularization technique; generally we set the dropout rate between 0.2 and 0.5.
6) Finally we add **Flatten layer** to map the input to a 1D vector. We then add Fully connected Layers after some convolutional/pooling layers. It combines all the Features of the Previous Layers.
7) Lastly, we add the **Output Layer**. It has units equal to the number of classes to be identified. Here, we use 'sigmoid' function if it is Binary Classification otherwise 'softmax' activation function in case of Multi-Class Classification.
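Before building the network, here is the small worked example of the output-size formula from step 2 (illustrative arithmetic only, using the 28x28 input and the 3x3, 'same'-padded, stride-1 convolutions chosen below):
```
# Worked example of (Height, Width) = ((W - F + 2P) / S) + 1
W, F, P, S = 28, 3, 1, 1            # 28x28 input, 3x3 filter, 'same' padding, stride 1
print((W - F + 2 * P) // S + 1)     # 28 -> 'same' padding preserves the spatial size
print(28 // 2)                      # 14 -> a 2x2 max-pooling layer halves it
```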
```
# Building a ConvNet
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(3, 3), activation='relu', strides=1, padding='same',
data_format='channels_last', input_shape=(28,28,1)))
model.add(BatchNormalization())
model.add(Conv2D(filters=32, kernel_size=(3, 3), activation='relu', strides=1, padding='same',
data_format='channels_last'))
model.add(BatchNormalization())
model.add(Dropout(0.25))
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu', strides=1, padding='same',
data_format='channels_last'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(filters=128, kernel_size=(3, 3), activation='relu', strides=1, padding='same',
data_format='channels_last'))
model.add(BatchNormalization())
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(128, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
```
<a id="there_you_go_4.2"></a>
## 4.2) Compiling the Model
1) We need to compile the model. We have to specify the optimizer used by the model. We have many choices, such as Adam, RMSprop, etc. Refer to the Keras docs for a comprehensive list of the available optimizers.
2) Next we need to specify the loss function for the neural network which we want to minimize.
For Binary Classification we use "binary_crossentropy" and for Multi-class Classification we use "categorical_crossentropy".
3) Finally, we need to specify the metric used to evaluate our model's performance. Here I have used accuracy.
```
# Optimizer
optimizer = Adam(lr=0.001, beta_1=0.9, beta_2=0.999 )
# Compiling the model
model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"])
```
<a id="there_you_go_4.3"></a>
## 4.3) Model Summary
```
model.summary()
```
<a id="there_you_go_4.4"></a>
## 4.4) Learning Rate Decay
* The learning rate should be properly tuned: not so high that the optimizer takes very large steps, and not so small that the weights and biases barely change.
* We will use **LearningRateScheduler** here, which takes a step-decay function as an argument and returns the updated learning rate for the optimizer at the start of every epoch.
```
reduce_lr = LearningRateScheduler(lambda x: 1e-3 * 0.9 ** x)
```
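To see what this schedule actually does (a small illustration, not part of the original notebook), we can evaluate the same step-decay formula for the first few epochs:
```
# Illustration only: the first few learning rates of the step-decay schedule above.
[round(1e-3 * 0.9 ** epoch, 6) for epoch in range(5)]
# -> [0.001, 0.0009, 0.00081, 0.000729, 0.000656]
```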
<a id="there_you_go_4.5"></a>
## 4.5) Data Augmentation
```
datagen = ImageDataGenerator(
rotation_range = 8, # randomly rotate images in the range (degrees, 0 to 180)
zoom_range = 0.1, # Randomly zoom image
shear_range = 0.3,# shear angle in counter-clockwise direction in degrees
width_shift_range=0.08, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.08, # randomly shift images vertically (fraction of total height)
vertical_flip=True) # randomly flip images
datagen.fit(x_train)
```
<a id="there_you_go_4.6"></a>
## 4.6) Fitting the Model
```
batch_size = 128
epochs = 40
# Fit the Model
history = model.fit_generator(datagen.flow(x_train, y_train, batch_size = batch_size), epochs = epochs,
validation_data = (x_test, y_test), verbose=2,
steps_per_epoch=x_train.shape[0] // batch_size,
callbacks = [reduce_lr])
```
<a id="there_you_go_5"></a>
# 5) Evaluating the Model
```
score = model.evaluate(x_test, y_test)
print('Loss: {:.4f}'.format(score[0]))
print('Accuracy: {:.4f}'.format(score[1]))
```
<a id="there_you_go_5.1"></a>
## 5.1) Plotting the Training and Validation Curves
```
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title("Model Loss")
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend(['Train', 'Test'])
plt.show()
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title("Model Accuracy")
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(['Train', 'Test'])
plt.show()
```
**The training and validation curves are close to each other, so we can conclude that the model is not overfitting the data.**
<a id="there_you_go_6"></a>
# 6) Confusion Matrix
A confusion matrix is a table that is often used to describe the performance of a classification model (or "classifier") on a set of test data for which the true values are known.
Let's view the performance of our classification model on the data using a confusion matrix.
```
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=90)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Predict the values from the validation dataset
Y_pred = model.predict(x_test)
# Convert predictions classes to one hot vectors
Y_pred_classes = np.argmax(Y_pred,axis = 1)
# Convert validation observations to one hot vectors
Y_true = np.argmax(y_test,axis = 1)
# compute the confusion matrix
confusion_mtx = confusion_matrix(Y_true, Y_pred_classes)
# plot the confusion matrix
plot_confusion_matrix(confusion_mtx,
classes = ['T-shirt/Top','Trouser','Pullover','Dress','Coat','Sandal','Shirt','Sneaker','Bag','Ankle Boot'])
```
* **We can see that a large number of T-shirts are misclassified as Shirts.**
* **This is followed by Shirts wrongly classified as Coats.**
<a id="there_you_go_7"></a>
# 7) Visualization of Predicted Classes
<a id="there_you_go_7.1"></a>
## 7.1) Correctly Predicted Classes
```
correct = []
for i in range(len(y_test)):
if(Y_pred_classes[i] == Y_true[i]):
correct.append(i)
if(len(correct) == 4):
break
fig, ax = plt.subplots(2,2, figsize=(12,6))
fig.set_size_inches(10,10)
ax[0,0].imshow(x_test[correct[0]].reshape(28,28), cmap='gray')
ax[0,0].set_title("Predicted Label : " + str(clothing[Y_pred_classes[correct[0]]]) + "\n"+"Actual Label : " +
str(clothing[Y_true[correct[0]]]))
ax[0,1].imshow(x_test[correct[1]].reshape(28,28), cmap='gray')
ax[0,1].set_title("Predicted Label : " + str(clothing[Y_pred_classes[correct[1]]]) + "\n"+"Actual Label : " +
str(clothing[Y_true[correct[1]]]))
ax[1,0].imshow(x_test[correct[2]].reshape(28,28), cmap='gray')
ax[1,0].set_title("Predicted Label : " + str(clothing[Y_pred_classes[correct[2]]]) + "\n"+"Actual Label : " +
str(clothing[Y_true[correct[2]]]))
ax[1,1].imshow(x_test[correct[3]].reshape(28,28), cmap='gray')
ax[1,1].set_title("Predicted Label : " + str(clothing[Y_pred_classes[correct[3]]]) + "\n"+"Actual Label : " +
str(clothing[Y_true[correct[3]]]))
```
<a id="there_you_go_7.2"></a>
## 7.2) Incorrectly Predicted Classes
```
incorrect = []
for i in range(len(y_test)):
if(not Y_pred_classes[i] == Y_true[i]):
incorrect.append(i)
if(len(incorrect) == 4):
break
fig, ax = plt.subplots(2,2, figsize=(12,6))
fig.set_size_inches(10,10)
ax[0,0].imshow(x_test[incorrect[0]].reshape(28,28), cmap='gray')
ax[0,0].set_title("Predicted Label : " + str(clothing[Y_pred_classes[incorrect[0]]]) + "\n"+"Actual Label : " +
str(clothing[Y_true[incorrect[0]]]))
ax[0,1].imshow(x_test[incorrect[1]].reshape(28,28), cmap='gray')
ax[0,1].set_title("Predicted Label : " + str(clothing[Y_pred_classes[incorrect[1]]]) + "\n"+"Actual Label : " +
str(clothing[Y_true[incorrect[1]]]))
ax[1,0].imshow(x_test[incorrect[2]].reshape(28,28), cmap='gray')
ax[1,0].set_title("Predicted Label : " + str(clothing[Y_pred_classes[incorrect[2]]]) + "\n"+"Actual Label : " +
str(clothing[Y_true[incorrect[2]]]))
ax[1,1].imshow(x_test[incorrect[3]].reshape(28,28), cmap='gray')
ax[1,1].set_title("Predicted Label : " + str(clothing[Y_pred_classes[incorrect[3]]]) + "\n"+"Actual Label : " +
str(clothing[Y_true[incorrect[3]]]))
```
<a id="there_you_go_8"></a>
# 8) Classification Report
The classification report visualizer displays the precision, recall, F1, and support scores for the model (a small worked example of these metrics follows the definitions below).
* **Precision:**
> Precision is the ability of a classifier not to label an instance positive that is actually negative. Basically, it is defined as the ratio of true positives to the sum of true and false positives. “For all instances classified positive, what percent was correct?”
* **Recall: **
> Recall is the ability of a classifier to find all positive instances. For each class it is defined as the ratio of true positives to the sum of true positives and false negatives. “For all instances that were actually positive, what percent was classified correctly?”
* **F1 Score: **
> The F1 score is a weighted harmonic mean of precision and recall such that the best score is 1.0 and the worst is 0.0 . Generally speaking, F1 scores are lower than accuracy measures as they embed precision and recall into their computation.
* **Support: **
> Support is the number of actual occurrences of the class in the specified dataset. Imbalanced support in the training data may indicate structural weaknesses in the reported scores of the classifier and could indicate the need for stratified sampling or rebalancing.
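Here is the small worked example of these definitions (toy labels, not predictions from this model): we compute precision, recall, and F1 for the positive class by hand and compare with scikit-learn's report.
```
# Toy example only -- these labels are made up to illustrate the formulas.
y_true_toy = [1, 1, 1, 0, 0, 0, 0]
y_pred_toy = [1, 1, 0, 1, 0, 0, 0]
tp = sum(t == 1 and p == 1 for t, p in zip(y_true_toy, y_pred_toy))   # 2
fp = sum(t == 0 and p == 1 for t, p in zip(y_true_toy, y_pred_toy))   # 1
fn = sum(t == 1 and p == 0 for t, p in zip(y_true_toy, y_pred_toy))   # 1
precision = tp / (tp + fp)                              # 0.67
recall = tp / (tp + fn)                                 # 0.67
f1 = 2 * precision * recall / (precision + recall)      # 0.67
print(precision, recall, f1)
print(classification_report(y_true_toy, y_pred_toy))    # same numbers for class 1
```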
```
classes = ['T-shirt/Top','Trouser','Pullover','Dress','Coat','Sandal','Shirt','Sneaker','Bag','Ankle Boot']
print(classification_report(Y_true, Y_pred_classes, target_names = classes))
```
**Looking at the precision of the Shirt class, we can see that 74% of the images our model predicted as Shirts were actually Shirts. We drew the same conclusion from the confusion matrix, where a lot of T-shirts were misclassified as Shirts.**
# 9) Predicting on the Test Data
Let's evaluate the model's performance on the test data.
```
X = df_test
Y = to_categorical(test.iloc[:,0])
score = model.evaluate(X, Y)
print("Loss: {:.4f}".format(score[0]))
print("Accuracy: {:.4f}".format(score[1]))
```
Our model predicted 93.4% of the test images correctly, which indicates that it did a pretty good job of generalizing to the data.
|
github_jupyter
|
# Ignore warnings :
import warnings
warnings.filterwarnings('ignore')
# Handle table-like data and matrices :
import numpy as np
import pandas as pd
import math
import itertools
# Modelling Algorithms :
# Classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier , GradientBoostingClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis , QuadraticDiscriminantAnalysis
# Regression
from sklearn.linear_model import LinearRegression,Ridge,Lasso,RidgeCV, ElasticNet
from sklearn.ensemble import RandomForestRegressor,BaggingRegressor,GradientBoostingRegressor,AdaBoostRegressor
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
# Modelling Helpers :
from sklearn.preprocessing import Imputer , Normalizer , scale
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.feature_selection import RFECV
from sklearn.model_selection import GridSearchCV , KFold , cross_val_score
#preprocessing :
from sklearn.preprocessing import MinMaxScaler , StandardScaler, Imputer, LabelEncoder
#evaluation metrics :
# Regression
from sklearn.metrics import mean_squared_log_error,mean_squared_error, r2_score,mean_absolute_error
# Classification
from sklearn.metrics import accuracy_score,precision_score,recall_score,f1_score
# Deep Learning Libraries
from keras.models import Sequential, load_model
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization
from keras.optimizers import Adam,SGD,Adagrad,Adadelta,RMSprop
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ReduceLROnPlateau, LearningRateScheduler
from keras.utils import to_categorical
# Visualisation
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.pylab as pylab
import seaborn as sns
import missingno as msno
# Configure visualisations
%matplotlib inline
mpl.style.use( 'ggplot' )
plt.style.use('fivethirtyeight')
sns.set(context="notebook", palette="dark", style = 'whitegrid' , color_codes=True)
params = {
'axes.labelsize': "large",
'xtick.labelsize': 'x-large',
'legend.fontsize': 20,
'figure.dpi': 150,
'figure.figsize': [25, 7]
}
plt.rcParams.update(params)
# Center all plots
from IPython.core.display import HTML
HTML("""
<style>
.output_png {
display: table-cell;
text-align: center;
vertical-align: middle;
}
</style>
""");
train = pd.read_csv('../input/fashion-mnist_train.csv')
test = pd.read_csv('../input/fashion-mnist_test.csv')
df = train.copy()
df_test = test.copy()
df.head()
print('Train: ', df.shape)
print('Test: ', df_test.shape)
df.label.unique()
# Train
df.isnull().any().sum()
# Test
df_test.isnull().any().sum()
# Mapping Classes
clothing = {0 : 'T-shirt/top',
1 : 'Trouser',
2 : 'Pullover',
3 : 'Dress',
4 : 'Coat',
5 : 'Sandal',
6 : 'Shirt',
7 : 'Sneaker',
8 : 'Bag',
9 : 'Ankle boot'}
fig, axes = plt.subplots(4, 4, figsize = (9,9))
for row in axes:
for axe in row:
index = np.random.randint(60000)
img = df.drop('label', axis=1).values[index].reshape(28,28)
cloths = df['label'][index]
axe.imshow(img, cmap='gray')
axe.set_title(clothing[cloths])
axe.set_axis_off()
df['label'].value_counts()
sns.factorplot(x='label', data=df, kind='count', size=3, aspect= 1.5)
# Setting Random Seeds for Reproducibilty.
seed = 66
np.random.seed(seed)
X = train.iloc[:,1:]
Y = train.iloc[:,0]
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.1, random_state=seed)
# The first parameter in reshape indicates the number of examples.
# We pass it as -1, which means that it is an unknown dimension and we want numpy to figure it out.
# reshape(examples, height, width, channels)
x_train = x_train.values.reshape((-1, 28, 28, 1))
x_test = x_test.values.reshape((-1, 28, 28, 1))
df_test.drop('label', axis=1, inplace=True)
df_test = df_test.values.reshape((-1, 28, 28, 1))
# Make sure the image data is cast from int to float before this scaling,
# as the division generates floating point numbers; had the arrays stayed int,
# the values would have been truncated to zero.
x_train = x_train.astype("float32")/255
x_test = x_test.astype("float32")/255
df_test = df_test.astype("float32")/255
y_train = to_categorical(y_train, num_classes=10)
y_test = to_categorical(y_test, num_classes=10)
print(y_train.shape)
print(y_test.shape)
# Building a ConvNet
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(3, 3), activation='relu', strides=1, padding='same',
data_format='channels_last', input_shape=(28,28,1)))
model.add(BatchNormalization())
model.add(Conv2D(filters=32, kernel_size=(3, 3), activation='relu', strides=1, padding='same',
data_format='channels_last'))
model.add(BatchNormalization())
model.add(Dropout(0.25))
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu', strides=1, padding='same',
data_format='channels_last'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(filters=128, kernel_size=(3, 3), activation='relu', strides=1, padding='same',
data_format='channels_last'))
model.add(BatchNormalization())
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(128, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
# Optimizer
optimizer = Adam(lr=0.001, beta_1=0.9, beta_2=0.999 )
# Compiling the model
model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
reduce_lr = LearningRateScheduler(lambda x: 1e-3 * 0.9 ** x)
datagen = ImageDataGenerator(
rotation_range = 8, # randomly rotate images in the range (degrees, 0 to 180)
zoom_range = 0.1, # Randomly zoom image
shear_range = 0.3,# shear angle in counter-clockwise direction in degrees
width_shift_range=0.08, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.08, # randomly shift images vertically (fraction of total height)
vertical_flip=True) # randomly flip images
datagen.fit(x_train)
batch_size = 128
epochs = 40
# Fit the Model
history = model.fit_generator(datagen.flow(x_train, y_train, batch_size = batch_size), epochs = epochs,
validation_data = (x_test, y_test), verbose=2,
steps_per_epoch=x_train.shape[0] // batch_size,
callbacks = [reduce_lr])
score = model.evaluate(x_test, y_test)
print('Loss: {:.4f}'.format(score[0]))
print('Accuracy: {:.4f}'.format(score[1]))
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title("Model Loss")
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend(['Train', 'Test'])
plt.show()
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title("Model Accuracy")
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(['Train', 'Test'])
plt.show()
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=90)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Predict the values from the validation dataset
Y_pred = model.predict(x_test)
# Convert predictions classes to one hot vectors
Y_pred_classes = np.argmax(Y_pred,axis = 1)
# Convert validation observations to one hot vectors
Y_true = np.argmax(y_test,axis = 1)
# compute the confusion matrix
confusion_mtx = confusion_matrix(Y_true, Y_pred_classes)
# plot the confusion matrix
plot_confusion_matrix(confusion_mtx,
classes = ['T-shirt/Top','Trouser','Pullover','Dress','Coat','Sandal','Shirt','Sneaker','Bag','Ankle Boot'])
correct = []
for i in range(len(y_test)):
if(Y_pred_classes[i] == Y_true[i]):
correct.append(i)
if(len(correct) == 4):
break
fig, ax = plt.subplots(2,2, figsize=(12,6))
fig.set_size_inches(10,10)
ax[0,0].imshow(x_test[correct[0]].reshape(28,28), cmap='gray')
ax[0,0].set_title("Predicted Label : " + str(clothing[Y_pred_classes[correct[0]]]) + "\n"+"Actual Label : " +
str(clothing[Y_true[correct[0]]]))
ax[0,1].imshow(x_test[correct[1]].reshape(28,28), cmap='gray')
ax[0,1].set_title("Predicted Label : " + str(clothing[Y_pred_classes[correct[1]]]) + "\n"+"Actual Label : " +
str(clothing[Y_true[correct[1]]]))
ax[1,0].imshow(x_test[correct[2]].reshape(28,28), cmap='gray')
ax[1,0].set_title("Predicted Label : " + str(clothing[Y_pred_classes[correct[2]]]) + "\n"+"Actual Label : " +
str(clothing[Y_true[correct[2]]]))
ax[1,1].imshow(x_test[correct[3]].reshape(28,28), cmap='gray')
ax[1,1].set_title("Predicted Label : " + str(clothing[Y_pred_classes[correct[3]]]) + "\n"+"Actual Label : " +
str(clothing[Y_true[correct[3]]]))
incorrect = []
for i in range(len(y_test)):
if(not Y_pred_classes[i] == Y_true[i]):
incorrect.append(i)
if(len(incorrect) == 4):
break
fig, ax = plt.subplots(2,2, figsize=(12,6))
fig.set_size_inches(10,10)
ax[0,0].imshow(x_test[incorrect[0]].reshape(28,28), cmap='gray')
ax[0,0].set_title("Predicted Label : " + str(clothing[Y_pred_classes[incorrect[0]]]) + "\n"+"Actual Label : " +
str(clothing[Y_true[incorrect[0]]]))
ax[0,1].imshow(x_test[incorrect[1]].reshape(28,28), cmap='gray')
ax[0,1].set_title("Predicted Label : " + str(clothing[Y_pred_classes[incorrect[1]]]) + "\n"+"Actual Label : " +
str(clothing[Y_true[incorrect[1]]]))
ax[1,0].imshow(x_test[incorrect[2]].reshape(28,28), cmap='gray')
ax[1,0].set_title("Predicted Label : " + str(clothing[Y_pred_classes[incorrect[2]]]) + "\n"+"Actual Label : " +
str(clothing[Y_true[incorrect[2]]]))
ax[1,1].imshow(x_test[incorrect[3]].reshape(28,28), cmap='gray')
ax[1,1].set_title("Predicted Label : " + str(clothing[Y_pred_classes[incorrect[3]]]) + "\n"+"Actual Label : " +
str(clothing[Y_true[incorrect[3]]]))
classes = ['T-shirt/Top','Trouser','Pullover','Dress','Coat','Sandal','Shirt','Sneaker','Bag','Ankle Boot']
print(classification_report(Y_true, Y_pred_classes, target_names = classes))
X = df_test
Y = to_categorical(test.iloc[:,0])
score = model.evaluate(X, Y)
print("Loss: {:.4f}".format(score[0]))
print("Accuracy: {:.4f}".format(score[1]))
<img align="right" src="images/ninologo.png" width="150"/>
<img align="right" src="images/tf-small.png" width="125"/>
<img align="right" src="images/dans.png" width="150"/>
# Start
This notebook gets you started with using
[Text-Fabric](https://github.com/Nino-cunei/uruk/blob/master/docs/textfabric.md) for coding in cuneiform tablet transcriptions.
Familiarity with the underlying
[data model](https://annotation.github.io/text-fabric/tf/about/datamodel.html)
is recommended.
For provenance, see the documentation:
[about](https://github.com/Nino-cunei/uruk/blob/master/docs/about.md).
## Overview
* we tell you how to get Text-Fabric on your system;
* we tell you how to get the Uruk IV-III corpus on your system.
## Installing Text-Fabric
### Python
You need to have Python on your system. Most systems have it out of the box,
but alas, that is python2 and we need at least python **3.6**.
Install it from [python.org](https://www.python.org) or from
[Anaconda](https://www.anaconda.com/download).
### Jupyter notebook
You need [Jupyter](http://jupyter.org).
If it is not already installed:
```
pip3 install jupyter
```
### TF itself
```
pip3 install text-fabric
```
### Get the data
Text-Fabric will get the data for you and store it on your system.
If you have cloned the github repo with the data,
[Nino-cunei/uruk](https://github.com/Nino-cunei/uruk),
your data is already in place, and nothing will be downloaded.
Otherwise, on first run, Text-Fabric will load the data and store it in the folder
`text-fabric-data` in your home directory.
This only happens if the data is not already there.
Not only the transcription data will be downloaded, but also the lineart images and photos.
These images are contained in a zipfile of 550 MB,
so take care that you have a good internet connection when it comes to downloading the images.
## Start the engines
Navigate to this directory in a terminal and say
```
jupyter notebook
```
(just literally).
Your browser opens with a directory view, and you'll see `start.ipynb`.
Click on it. A new browser tab opens, and a Python engine has been allocated to this
notebook.
Now we are ready to compute.
The next cell is a code cell that can be executed if you have downloaded this
notebook and have issued the `jupyter notebook` command.
You execute a code cell by standing in it and pressing `Shift+Enter`.
### The code
```
%load_ext autoreload
%autoreload 2
import sys, os
from tf.app import use
```
View the next cell as an *incantation*.
You just have to say it to get things underway.
For the very latest version, use `hot`.
For the latest release, use `latest`.
If you have cloned the repos (TF app and data), use `clone`.
If you do not want/need to upgrade, leave out the checkout specifiers.
```
A = use("uruk:clone", checkout="clone", hoist=globals())
# A = use('uruk:hot', checkout="hot", hoist=globals())
# A = use('uruk:latest', checkout="latest", hoist=globals())
# A = use('uruk', hoist=globals())
```
### The output
The output shows some statistics about the images found in the Uruk data.
Then there are links to the documentation.
**Tip:** open them, and have a quick look.
Every notebook that you set up with `Cunei` will have such links.
**GitHub and NBViewer**
If you have made your own notebook, and used this incantation,
and pushed the notebook to GitHub, links to the online version
of *your* notebook on GitHub and NBViewer will be generated and displayed.
By the way, GitHub shows notebooks nicely.
Sometimes NBViewer does it better, although it fetches exactly the same notebook from GitHub.
NBViewer is handy to navigate all the notebooks of a particular organization.
Try the [Nino-cunei starting point](http://nbviewer.jupyter.org/github/Nino-cunei/).
These links you can share with colleagues.
## Test
We perform a quick test to see that everything works.
### Count the signs
We count how many signs there are in the corpus.
In a next notebook we'll explain code like this.
```
len(F.otype.s("sign"))
```
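If you are curious, you can also peek at the most frequent sign graphemes. This is a minimal sketch that assumes the corpus exposes a `grapheme` feature on sign nodes (the API is explained properly in later chapters):
```
from collections import Counter

# Count how often each grapheme occurs among all sign nodes
freqs = Counter(F.grapheme.v(s) for s in F.otype.s("sign"))
freqs.most_common(10)
```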
### Show photos and lineart
We show the photo and lineart of a tablet, to whet your appetite.
```
example = T.nodeFromSection(("P005381",))
A.photo(example)
```
Note that you can click on the photo to see a better version on CDLI.
Here comes the lineart:
```
A.lineart(example)
```
A pretty representation of the transcription with embedded lineart for quads and signs:
```
A.pretty(example, withNodes=True)
```
We can suppress the lineart:
```
A.pretty(example, showGraphics=False)
```
The transliteration:
```
A.getSource(example)
```
Now the lines and cases of this tablet in a table:
```
table = []
for sub in L.d(example):
if F.otype.v(sub) in {"line", "case"}:
table.append((sub,))
A.table(table, showGraphics=False)
```
We can include the lineart in plain displays:
```
A.table(table, showGraphics=True)
```
This is just the beginning.
In the next chapters we show you how to
* fine-tune tablet displays,
* step and jump around in the corpus,
* search for patterns,
* drill down to quads and signs,
* and study frequency distributions of signs in subcases.
# Next
[imagery](imagery.ipynb)
*Get the big picture ...*
All chapters:
[start](start.ipynb)
[imagery](imagery.ipynb)
[steps](steps.ipynb)
[search](search.ipynb)
[signs](signs.ipynb)
[quads](quads.ipynb)
[jumps](jumps.ipynb)
[cases](cases.ipynb)
---
CC-BY Dirk Roorda
```
# https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data
import pandas as pd
import numpy as np
pd.set_option('display.width', 5000)
col_dtypes = {
#'client_id': np.int64,
#'loan_type': str
}
#dateparse = lambda x: pd.datetime.strptime(x, '%Y-%m-%d')
train = pd.read_csv("../data/house-prices/train.csv",delimiter=",")
sub = pd.read_csv("../data/house-prices/test.csv",delimiter=",")#,parse_dates=['loan_start','loan_end'], date_parser=dateparse, dtype=loans_dtypes)
train.info(memory_usage='deep')
train_obj = train.select_dtypes(include=['object']).copy()
converted_obj = pd.DataFrame()
for col in train_obj.columns:
num_unique_values = len(train_obj[col].unique())
num_total_values = len(train_obj[col])
if num_unique_values / num_total_values < 0.5:
converted_obj.loc[:,col] = train_obj[col].astype('category')
else:
converted_obj.loc[:,col] = train_obj[col]
converted_obj.info(memory_usage='deep')
train[converted_obj.columns] = converted_obj
train['LogSalePrice'] = np.log(train["SalePrice"])
train['LogSalePrice'].head()
# use some automl
import h2o
h2o.init()
# Load a pandas data frame to H2O
hf = h2o.H2OFrame(train)
hf_sub = h2o.H2OFrame(sub)
x = hf.names
x.remove("SalePrice")
x.remove("LogSalePrice")
Y = "LogSalePrice"
train, test = hf.split_frame([0.7], seed=42)
from h2o.estimators.xgboost import H2OXGBoostEstimator
xgb = H2OXGBoostEstimator(nfolds=10, seed=1)
xgb.train(x=x, y=Y, training_frame=train,
validation_frame=test)
print(xgb)
# Grid Search
from h2o.grid.grid_search import H2OGridSearch
xgb_parameters = {'max_depth': [3, 4, 5, 6],
'sample_rate': [0.7, 0.8],
'col_sample_rate': [0.7, 0.8],
'ntrees': [200, 300, 400]}
xgb_grid_search = H2OGridSearch(model=H2OXGBoostEstimator,
grid_id='example_grid',
hyper_params=xgb_parameters)
xgb_grid_search.train(x=x,
y=Y,
training_frame=train,
validation_frame=test,
nfolds = 10,
learn_rate=0.01,
seed=42)
grid_results = xgb_grid_search.get_grid(sort_by='rmse',
decreasing=False)
print(grid_results)
xgb = grid_results.models[0]
preds = xgb.predict(hf_sub)
from h2o.automl import H2OAutoML
autoML = H2OAutoML(max_runtime_secs=30,
nfolds = 10,
stopping_metric = 'RMSE',
sort_metric = 'RMSE',
exclude_algos=['DRF','GLM'],
)
autoML.train(x=x,
y=Y,
training_frame=train,
validation_frame=test,
)
print(autoML.leaderboard)
import h2o
preds = autoML.leader.predict(hf_sub)
submission = hf_sub['Id']
pd_ser = h2o.as_list(preds, use_pandas=True)
#ser = pd.Series(preds)
df = np.exp(pd_ser)
type(df)
submission['SalePrice'] = h2o.H2OFrame(df)
h2o.h2o.download_csv(submission, "../data/house-prices/submission.csv")
```
## Multi-label prediction with Planet Amazon dataset
```
%matplotlib inline
from fastai.vision.all import *
from nbdev.showdoc import *
```
## Getting the data
The planet dataset isn't available on the [fastai dataset page](https://course.fast.ai/datasets) due to copyright restrictions. You can download it from Kaggle however. Let's see how to do this by using the [Kaggle API](https://github.com/Kaggle/kaggle-api) as it's going to be pretty useful to you if you want to join a competition or use other Kaggle datasets later on.
First, install the Kaggle API by uncommenting the following line and executing it, or by executing it in your terminal. Depending on your platform you may need to modify this slightly, for example by adding `source activate fastai` or similar first, or by prefixing `pip` with a path; have a look at how `conda install` is called for your platform in the appropriate *Returning to work* section of https://course.fast.ai/. Depending on your environment, you may also need to append "--user" to the command.
```
# ! {sys.executable} -m pip install kaggle --upgrade
```
Then you need to upload your credentials from Kaggle on your instance. Login to kaggle and click on your profile picture on the top left corner, then 'My account'. Scroll down until you find a button named 'Create New API Token' and click on it. This will trigger the download of a file named 'kaggle.json'.
Upload this file to the directory this notebook is running in, by clicking "Upload" on your main Jupyter page, then uncomment and execute the next two commands (or run them in a terminal). For Windows, uncomment the last two commands.
```
# ! mkdir -p ~/.kaggle/
# ! mv kaggle.json ~/.kaggle/
# For Windows, uncomment these two commands
# ! mkdir %userprofile%\.kaggle
# ! move kaggle.json %userprofile%\.kaggle
```
You're all set to download the data from [planet competition](https://www.kaggle.com/c/planet-understanding-the-amazon-from-space). You **first need to go to its main page and accept its rules**, and run the two cells below (uncomment the shell commands to download and unzip the data). If you get a `403 forbidden` error it means you haven't accepted the competition rules yet (you have to go to the competition page, click on *Rules* tab, and then scroll to the bottom to find the *accept* button).
```
path = Config().data/'planet'
path.mkdir(parents=True, exist_ok=True)
path
#! kaggle competitions download -c planet-understanding-the-amazon-from-space -f train-jpg.tar.7z -p {path}
#! kaggle competitions download -c planet-understanding-the-amazon-from-space -f train_v2.csv -p {path}
#! unzip -q -n {path}/train_v2.csv.zip -d {path}
```
To extract the content of this file, we'll need 7zip, so uncomment the following line if you need to install it (or run `sudo apt install p7zip-full` in your terminal).
```
# ! conda install --yes --prefix {sys.prefix} -c haasad eidl7zip
```
And now we can unpack the data (uncomment to run - this might take a few minutes to complete).
```
#! 7za -bd -y -so x {path}/train-jpg.tar.7z | tar xf - -C {path.as_posix()}
```
## Multiclassification
Contrary to the pets dataset studied in the last lesson, here each picture can have multiple labels. If we take a look at the csv file containing the labels (in 'train_v2.csv' here), we see that each 'image_name' is associated with several tags separated by spaces.
```
df = pd.read_csv(path/'train_v2.csv')
df.head()
```
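For intuition, here is a tiny sketch (plain Python, with a made-up vocabulary) of how such a space-separated tag string becomes a multi-hot target vector; this is essentially what `MultiCategoryBlock` produces for us under the hood:
```
# Hypothetical vocabulary and tag string, just to illustrate the encoding
vocab = ['agriculture', 'clear', 'primary', 'water']
tags = 'clear primary'
target = [1 if label in tags.split() else 0 for label in vocab]
target  # [0, 1, 1, 0]
```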
To put this in a `DataLoaders` using the [data block API](https://docs.fast.ai/data_block.html), we need to indicate:
- the types of our inputs/targets (here image and multi-label categories), through what is called blocks
- how to get our xs and ys from the dataframe, through a `ColReader`
- how to split our data between training and validation
Since we have satellite images, it makes sense to use all kinds of flips; we limit the amount of lighting/zoom and remove the warping.
```
tfms = aug_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0., size=128)
planet = DataBlock(blocks=(ImageBlock, MultiCategoryBlock),
get_x=ColReader(0, pref=str(path/"train-jpg")+"/", suff='.jpg'),
get_y=ColReader(1, label_delim=' '),
splitter=RandomSplitter(seed=42),
batch_tfms=tfms+[Normalize.from_stats(*imagenet_stats)])
```
With the `DataBlock` defined, we can now create the `DataLoaders` from the dataframe.
```
dls = planet.dataloaders(df, bs=64, path=path)
```
`show_batch` still works, and shows us the different labels separated by `;`.
```
dls.show_batch(max_n=9, figsize=(12,9))
```
To create a `Learner` we use the same function as in lesson 1. Our base architecture is resnet50 again, but the metrics are a little bit different: we use `accuracy_thresh` instead of `accuracy`. In lesson 1, we determined the prediction for a given class by picking the final activation that was the biggest, but here each activation can be 0. or 1. `accuracy_thresh` selects the ones that are above a certain threshold (0.5 by default) and compares them to the ground truth.
As for Fbeta, it's the metric that was used by Kaggle on this competition. See [here](https://en.wikipedia.org/wiki/F1_score) for more details.
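To make the thresholding concrete, here is a tiny hand-rolled sketch of what a thresholded multi-label accuracy computes, using toy tensors rather than real predictions (the library's `accuracy_multi` also applies a sigmoid to raw activations first):
```
import torch
# 2 images, 3 possible labels; probabilities above the threshold count as predicted,
# and accuracy is averaged over every (image, label) cell.
probs = torch.tensor([[0.90, 0.10, 0.30],
                      [0.05, 0.60, 0.15]])
truth = torch.tensor([[1, 0, 0],
                      [0, 1, 1]]).bool()
preds = probs > 0.2
(preds == truth).float().mean()  # 4 of 6 cells match -> 0.6667
```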
```
arch = resnet50
acc_02 = partial(accuracy_multi, thresh=0.2)
f_score = FBetaMulti(2, thresh=0.2, average='samples')
learn = cnn_learner(dls, arch, metrics=[acc_02, f_score])
```
We use the LR Finder to pick a good learning rate.
```
learn.lr_find()
```
Then we can fit the head of our network.
```
lr = 0.01
learn.fit_one_cycle(5, slice(lr))
learn.save('stage-1-rn50')
```
...And fine-tune the whole model:
```
learn.unfreeze()
learn.lr_find()
learn.fit_one_cycle(5, slice(1e-5, lr/5))
learn.save('stage-2-rn50')
tfms = aug_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0., size=256)
np.random.seed(42)
dls = planet.dataloaders(df, bs=64, path=path, batch_tfms=tfms+[Normalize.from_stats(*imagenet_stats)])
learn.dls = dls
learn.freeze()
learn.lr_find()
lr=1e-2/2
learn.fit_one_cycle(5, slice(lr))
learn.save('stage-1-256-rn50')
learn.unfreeze()
learn.fit_one_cycle(5, slice(1e-5, lr/5))
learn.recorder.plot_loss()
learn.save('stage-2-256-rn50')
```
You won't really know how you're going until you submit to Kaggle, since the leaderboard isn't using the same subset as we have for training. But as a guide, 50th place (out of 938 teams) on the private leaderboard was a score of `0.930`.
```
#learn.export()
```
## Submitting to Kaggle
```
#! kaggle competitions download -c planet-understanding-the-amazon-from-space -f test-jpg.tar.7z -p {path}
#! 7za -bd -y -so x {path}/test-jpg.tar.7z | tar xf - -C {path}
#! kaggle competitions download -c planet-understanding-the-amazon-from-space -f test-jpg-additional.tar.7z -p {path}
#! 7za -bd -y -so x {path}/test-jpg-additional.tar.7z | tar xf - -C {path}
test_items = get_image_files(path/'test-jpg') + get_image_files(path/'test-jpg-additional')
len(test_items)
dl = learn.dls.test_dl(test_items, rm_type_tfms=1, bs=64)
preds, _ = learn.get_preds(dl=dl)
preds.shape
thresh = 0.2
labelled_preds = [' '.join([learn.dls.vocab[i] for i,p in enumerate(pred) if p > thresh]) for pred in preds.numpy()]
labelled_preds[:5]
fnames = [f.name[:-4] for f in test_items]
df = pd.DataFrame({'image_name':fnames, 'tags':labelled_preds}, columns=['image_name', 'tags'])
df.to_csv(path/'submission.csv', index=False)
! kaggle competitions submit planet-understanding-the-amazon-from-space -f {path/'submission.csv'} -m "My submission"
```
Private Leaderboard score: 0.9296 (around 80th)
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import numpy as np
import pandas as pd
from pathlib import Path
import torch
import random
import os
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
from tqdm.notebook import tqdm
from sklearn.model_selection import KFold
from sklearn.metrics import mean_absolute_error
import matplotlib.pyplot as plt
import torch.nn.functional as F
from fastai import *
from fastai.basic_train import *
from fastai.basic_data import *
from lib import common
pd.options.mode.chained_assignment = None
```
### Path
```
path = Path('/kaggle/osic_pulmonary')
assert path.exists()
model_path = Path('/kaggle/osic_pulmonary/model')
if os.path.isdir(model_path) == False:
os.makedirs(model_path)
assert model_path.exists()
```
### Read Data
```
train_df, test_df, submission_df = common.read_data(path)
```
#### Feature generation
```
len(train_df)
submission_df = common.prepare_submission(submission_df, test_df)
submission_df[((submission_df['Patient'] == 'ID00419637202311204720264') & (submission_df['Weeks'] == 6))].head(25)
def adapt_percent_in_submission():
previous_match = None
for i, r in submission_df.iterrows():
in_training = train_df[(train_df['Patient'] == r['Patient']) & (train_df['Weeks'] == r['Weeks'])]
if(len(in_training) > 0):
previous_match = in_training['Percent'].item()
submission_df.iloc[i, submission_df.columns.get_loc('Percent')] = previous_match
elif previous_match is not None:
submission_df.iloc[i, submission_df.columns.get_loc('Percent')] = previous_match
adapt_percent_in_submission()
test_df
test_df[test_df['Patient'] == 'ID00419637202311204720264']
train_df[train_df['Patient'] == 'ID00419637202311204720264']
submission_df[submission_df['Patient'] == 'ID00419637202311204720264'].head(10)
```
Adding missing values
```
train_df['WHERE'] = 'train'
test_df['WHERE'] = 'val'
submission_df['WHERE'] = 'test'
data = train_df.append([test_df, submission_df])
data['min_week'] = data['Weeks']
data.loc[data.WHERE=='test','min_week'] = np.nan
data['min_week'] = data.groupby('Patient')['min_week'].transform('min')
base = data.loc[data.Weeks == data.min_week]
base = base[['Patient','FVC', 'Percent']].copy()
base.columns = ['Patient','min_FVC', 'min_Percent']
base['nb'] = 1
base['nb'] = base.groupby('Patient')['nb'].transform('cumsum')
base = base[base.nb==1]
base
data = data.merge(base, on='Patient', how='left')
data['base_week'] = data['Weeks'] - data['min_week']
data['base_week'] = data['base_week']
del base
data[data['Patient'] == 'ID00421637202311550012437']
COLS = ['Sex','SmokingStatus'] #,'Age', 'Sex_SmokingStatus'
FE = []
for col in COLS:
for mod in data[col].unique():
FE.append(mod)
data[mod] = (data[col] == mod).astype(int)
data[data['Patient'] == 'ID00421637202311550012437']
np.mean(np.abs(data['Age'] - data['Age'].mean())), data['Age'].mad()
def normalize(df:pd.DataFrame, cont_names, target_names):
"Compute the means and stds of `self.cont_names` columns to normalize them."
means, stds = {},{}
for n, t in zip(cont_names, target_names):
means[n], stds[n] = df[n].mean(), df[n].std()
# means[n], stds[n] = df[n].mean(), df[n].mad()
df[t] = (df[n]-means[n]) / (1e-7 + stds[n])
normalize(data, ['Age','min_FVC','base_week','Percent', 'min_Percent', 'min_week'], ['age','BASE','week','percent', 'min_percent', 'min_week'])
FE += ['age','week','BASE', 'percent']
data['base_week'].min()
train_df = data.loc[data.WHERE=='train']
test_df = data.loc[data.WHERE=='val']
submission_df = data.loc[data.WHERE=='test']
del data
train_df.sort_values(['Patient', 'Weeks']).head(15)
X = train_df[FE]
X.head(15)
y = train_df['FVC']
y.shape
```
#### Seed
```
def seed_everything(seed=2020):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(42)
torch.cuda.manual_seed(42)
seed_everything(42)
```
### Create Dataset
```
class ArrayDataset(Dataset):
def __init__(self, x, y):
self.x, self.y = torch.tensor(x.values, dtype=torch.float32), torch.tensor(y.values, dtype=torch.float32)
assert(len(self.x) == len(self.y))
def __len__(self):
return len(self.x)
def __getitem__(self, i):
return self.x[i], self.y[i]
def __repr__(self):
return f'x: {self.x.shape} y: {self.y.shape}'
def create_dl(X, y, batch_size=128, num_workers=10):
ds = ArrayDataset(X, y)
return DataLoader(ds, batch_size, shuffle=True, num_workers=num_workers)
sample_dl = create_dl(X, y)
sample_ds = ArrayDataset(X, y)
sample_ds
```
### Prepare neural network
```
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
def move_to_dev(x, y):
x = x.to(device)
y = y.to(device)
return x, y
C1, C2 = torch.tensor(70, dtype=torch.float32), torch.tensor(1000, dtype=torch.float32)
C1, C2 = move_to_dev(C1, C2)
q = torch.tensor([0.2, 0.50, 0.8]).float().to(device)
def score(y_true, y_pred):
y_true = y_true.unsqueeze(1)
sigma = y_pred[:, 2] - y_pred[:, 0]
fvc_pred = y_pred[:, 1]
#sigma_clip = sigma + C1
sigma_clip = torch.max(sigma, C1)
delta = torch.abs(y_true[:, 0] - fvc_pred)
delta = torch.min(delta, C2)
sq2 = torch.sqrt(torch.tensor(2.))
metric = (delta / sigma_clip)*sq2 + torch.log(sigma_clip* sq2)
return torch.mean(metric)
def qloss(y_true, y_pred):
# Pinball loss for multiple quantiles
e = y_true - y_pred
v = torch.max(q*e, (q-1)*e)
return torch.mean(v)
def mloss(_lambda):
    def loss(y_pred, y_true):
        # qloss broadcasts the target against the three quantile columns, so it gets a (N, 1) target;
        # score unsqueezes internally, so it receives the raw (N,) target to avoid a double unsqueeze.
        return _lambda * qloss(y_true.unsqueeze(1), y_pred) + (1 - _lambda)*score(y_true, y_pred)
    return loss
def laplace_log(y_pred, y_true):
return score(y_true, y_pred)
class OsicModel(torch.nn.Module):
def __init__(self, ni, nh1, nh2):
super(OsicModel, self).__init__()
self.l1 = nn.Linear(ni, nh1)
self.l1_bn = nn.BatchNorm1d(nh1, momentum=0.1)
self.l2 = nn.Linear(nh1, nh2)
self.relu = nn.ReLU()
self.p1 = nn.Linear(nh2, 3)
self.p2 = nn.Linear(nh2, 3)
def forward(self, x):
x = self.relu(self.l1(x))
x = self.l1_bn(x)
x = self.relu(self.l2(x))
p1 = self.p1(x)
p2 = self.relu(self.p2(x))
preds = p1 + torch.cumsum(p2, axis=1)
return preds
def create_model(nh1=100, nh2=100):
model = OsicModel(X.shape[1], nh1, nh2)
model = model.to(device)
return model
sample_model = create_model()
sample_model
criterion=mloss(0.8)
# Test model
x_sample, y_sample = next(iter(sample_dl))
y_sample, x_sample = move_to_dev(y_sample, x_sample)
output = sample_model(x_sample)
criterion(output, y_sample)
```
#### Training functions
```
import fastai.callbacks
LR=1e-3
learn = Learner(DataBunch.create(sample_ds, sample_ds), sample_model, loss_func=criterion, metrics=laplace_log, silent=True)
learn.fit(100, LR)
learn.recorder.plot_metrics()
learn.recorder.plot_losses()
learn.recorder.plot_lr()
def get_last_metric(recorder):
return recorder.metrics[-1][0].item()
get_last_metric(learn.recorder)
```
#### Training
```
NFOLD = 5
kf = KFold(n_splits=NFOLD)
EPOCHS=100
def convert_to_tensor(df):
return torch.tensor(df.values, dtype=torch.float32).to(device)
submission_patients = submission_df['Patient']
submission_df['dummy_FVC'] = 0.0
submission_dl = create_dl(submission_df[FE], pd.Series(np.zeros(submission_df[FE].shape[0])))
x_sample, y_sample = next(iter(submission_dl))
x_sample.shape, y_sample.shape
pe = np.zeros((submission_df.shape[0], 3))
pred = np.zeros((train_df.shape[0], 3))
pred.shape
test_values = convert_to_tensor(submission_df[FE])
test_values.shape
def predict(features, model):
return model(features).detach().cpu().numpy()
%%time
recorders = []
for cnt, (tr_idx, val_idx) in tqdm(enumerate(kf.split(X)), total=NFOLD):
X_train, y_train = X.loc[tr_idx], y[tr_idx]
X_valid, y_valid = X.loc[val_idx], y[val_idx]
print(f"FOLD {cnt}", X_train.shape, y_train.shape, X_valid.shape, y_valid.shape)
model = create_model()
train_ds = ArrayDataset(X_train, y_train)
valid_ds = ArrayDataset(X_valid, y_valid)
learn = Learner(DataBunch.create(train_ds, valid_ds), model, loss_func=criterion, metrics=laplace_log, silent=True)
learn.fit(EPOCHS, LR, callbacks=[fastai.callbacks.OneCycleScheduler(lr_max = LR * 4, learn=learn)])
recorders.append(learn.recorder)
pred[val_idx] = predict(convert_to_tensor(X_valid), model)
print("Mean validation score:", np.array([get_last_metric(recorder) for recorder in recorders]).mean())
torch.optim.AdamW
for recorder in recorders:
recorder.plot_losses()
for recorder in recorders:
recorder.plot_metrics()
def load_best_model(i):
    # Load a per-fold checkpoint saved as best_model_simple_{i}.pt in model_path
    # (the checkpoint files must already exist there, e.g. from an earlier training run)
    model_file = model_path/f'best_model_simple_{i}.pt'
model = create_model()
model.load_state_dict(torch.load(model_file))
model.to(device)
model.eval()
return model
# Using best models for prediction
pe = np.zeros((submission_df.shape[0], 3))
for i in range(NFOLD):
model = load_best_model(i)
pe += predict(test_values, model)
pe = pe / NFOLD
```
#### Prediction
```
sigma_opt = mean_absolute_error(y, pred[:, 1])
unc = pred[:,2] - pred[:, 0]
sigma_mean = np.mean(unc)
sigma_opt, sigma_mean
submission_df['FVC1'] = pe[:,1]
submission_df['Confidence1'] = pe[:, 2] - pe[:, 0]
submission_df.head(15)
subm = submission_df[['Patient_Week','FVC','Confidence','FVC1','Confidence1']].copy()
subm.loc[~subm.FVC1.isnull()].shape, subm.shape
subm.loc[~subm.FVC1.isnull(),'FVC'] = subm.loc[~subm.FVC1.isnull(),'FVC1']
if sigma_mean<70:
subm['Confidence'] = sigma_opt
else:
subm.loc[~subm.FVC1.isnull(),'Confidence'] = subm.loc[~subm.FVC1.isnull(),'Confidence1']
subm.describe().T
def replace_with_existing(df):
for i in range(len(df)):
patient_week_filter = subm['Patient_Week']==df.Patient[i]+'_'+str(df.Weeks[i])
subm.loc[patient_week_filter, 'FVC'] = df.FVC[i]
subm.loc[patient_week_filter, 'Confidence'] = 0.1
train_df = pd.read_csv(path/'train.csv', dtype = common.TRAIN_TYPES)
test_df = pd.read_csv(path/'test.csv', dtype = common.TRAIN_TYPES)
replace_with_existing(train_df)
replace_with_existing(test_df)
subm[subm['Patient_Week'].str.find('ID00419637202311204720264') > -1].head(30)
subm[["Patient_Week","FVC","Confidence"]].to_csv("submission.csv", index=False)
submission_final_df = pd.read_csv('submission.csv')
submission_final_df[submission_final_df['Patient_Week'].str.find('ID00419637202311204720264') == 0]['FVC'].plot()
submission_final_df[submission_final_df['Patient_Week'].str.find('ID00421637202311550012437') == 0]['FVC'].plot()
submission_final_df[submission_final_df['Patient_Week'].str.find('ID00423637202312137826377') == 0]['FVC'].plot()
train_df[train_df['Patient'].str.find('ID00419637202311204720264') == 0][['Weeks', 'FVC']].plot(x='Weeks', y='FVC')
!cat submission.csv
```
# 15. Facial Expression Recognition - Theano
We are now going to go through the facial expression recognition project that we have worked on in the past, but we will use **Theano** as our framework of choice this time! We will be creating a neural network that has 2000 units in the first hidden layer, and 1000 units in the second hidden layer. We can start with our imports.
```
import numpy as np
import theano
import theano.tensor as T
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
% matplotlib inline
```
And now we can define the utilities that we are going to need.
```
"""----------------------- Function to get data -----------------------------"""
def getData(balance_ones=True):
# images are 48x48 = 2304 size vectors
Y = []
X = []
first = True
for line in open('../../../data/fer/fer2013.csv'):
if first:
first = False
else:
row = line.split(',')
Y.append(int(row[0]))
X.append([int(p) for p in row[1].split()])
X, Y = np.array(X) / 255.0, np.array(Y)
if balance_ones:
# balance the 1 class
X0, Y0 = X[Y!=1, :], Y[Y!=1]
X1 = X[Y==1, :]
X1 = np.repeat(X1, 9, axis=0)
X = np.vstack([X0, X1])
Y = np.concatenate((Y0, [1]*len(X1)))
return X, Y
""" --------- Creates indicator (N x K), from an input N x 1 y matrix --------"""
def y2indicator(y):
N = len(y)
K = len(set(y))
ind = np.zeros((N, K))
for i in range(N):
ind[i, y[i]] = 1
return ind
""" ----------- Gives the error rate between targets and predictions ---------------- """
def error_rate(targets, predictions):
return np.mean(targets != predictions)
""" Rectifier Linear Unit - an activation function that can be used in a neural network """
def relu(x):
return x * (x > 0)
"""
Function to initialize a weight matrix and a bias. M1 is the input size, and M2 is the output size.
W is a matrix of size M1 x M2, initialized from a gaussian normal and scaled by 1 / sqrt(M1),
the square root of the input size.
The bias is initialized as zeros. Each is then cast to float32 so that they can be used in
Theano and TensorFlow.
"""
def init_weight_and_bias(M1, M2):
W = np.random.randn(M1, M2) / np.sqrt(M1)
b = np.zeros(M2)
return W.astype(np.float32), b.astype(np.float32)
def rmsprop(cost, params, lr, mu, decay, eps):
grads = T.grad(cost, params)
updates = []
for p, g in zip(params, grads):
# cache
ones = np.ones_like(p.get_value(), dtype=np.float32)
c = theano.shared(ones)
new_c = decay*c + (np.float32(1.0) - decay)*g*g
# momentum
zeros = np.zeros_like(p.get_value(), dtype=np.float32)
m = theano.shared(zeros)
new_m = mu*m - lr*g / T.sqrt(new_c + eps)
# param update
new_p = p + new_m
# append the updates
updates.append((c, new_c))
updates.append((m, new_m))
updates.append((p, new_p))
return updates
```
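For intuition, the `rmsprop` helper above keeps a cache of squared gradients and a momentum term per parameter. Here is a tiny NumPy sketch of a single update step on one scalar parameter, using the same formulas with made-up numbers:
```
import numpy as np

lr, mu, decay, eps = 1e-3, 0.9, 0.9, 1e-10
p, c, m, g = 0.5, 1.0, 0.0, 2.0            # parameter, cache, momentum, gradient
c = decay * c + (1 - decay) * g * g        # new cache      -> 1.3
m = mu * m - lr * g / np.sqrt(c + eps)     # new momentum   -> about -0.00175
p = p + m                                  # updated param  -> about 0.49825
print(c, m, p)
```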
Now, we want to put our hidden layer into its own class. We want to do this so we can add an arbitrary number of hidden layers more easily.
```
class HiddenLayer(object):
def __init__(self, M1, M2, an_id):
        self.id = an_id
        self.M1 = M1
        self.M2 = M2
        W, b = init_weight_and_bias(M1, M2) # Getting initial weights and biases
        """Recall, in theano a shared variable is an updatable variable"""
        self.W = theano.shared(W, 'W_%s' % self.id) # Unique name associated with id
        self.b = theano.shared(b, 'b_%s' % self.id)
self.params = [self.W, self.b] # Keep all params in 1 list to calc grad
def forward(self, X):
return relu(X.dot(self.W) + self.b)
```
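As a quick sanity check, two of these layers can be chained with toy sizes, which is exactly what the `ANN` class below will do in its forward pass (made-up dimensions, just to show the wiring):
```
# Stack two hidden layers: 4 inputs -> 3 units -> 2 units
h1 = HiddenLayer(4, 3, 0)
h2 = HiddenLayer(3, 2, 1)
thX = T.fmatrix('X')
Z = h2.forward(h1.forward(thX))   # symbolic output of the stack (before any softmax)
f = theano.function([thX], Z, allow_input_downcast=True)
f(np.random.randn(5, 4)).shape    # (5, 2)
```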
Now we can define our **ANN** class. It will take in the hidden layer sizes.
```
def rmsprop(cost, params, lr, mu, decay, eps):
grads = T.grad(cost, params)
updates = []
for p, g in zip(params, grads):
# cache
ones = np.ones_like(p.get_value(), dtype=np.float32)
c = theano.shared(ones)
new_c = decay*c + (np.float32(1.0) - decay)*g*g
# momentum
zeros = np.zeros_like(p.get_value(), dtype=np.float32)
m = theano.shared(zeros)
new_m = mu*m - lr*g / T.sqrt(new_c + eps)
# param update
new_p = p + new_m
# append the updates
updates.append((c, new_c))
updates.append((m, new_m))
updates.append((p, new_p))
return updates
class HiddenLayer(object):
def __init__(self, M1, M2, an_id):
self.id = an_id
self.M1 = M1
self.M2 = M2
W, b = init_weight_and_bias(M1, M2)
self.W = theano.shared(W, 'W_%s' % self.id)
self.b = theano.shared(b, 'b_%s' % self.id)
self.params = [self.W, self.b]
def forward(self, X):
return relu(X.dot(self.W) + self.b)
class ANN(object):
def __init__(self, hidden_layer_sizes):
self.hidden_layer_sizes = hidden_layer_sizes
def fit(self, X, Y, learning_rate=1e-3, mu=0.9, decay=0.9, reg=0, eps=1e-10, epochs=100, batch_sz=30, show_fig=False):
learning_rate = np.float32(learning_rate)
mu = np.float32(mu)
decay = np.float32(decay)
reg = np.float32(reg)
eps = np.float32(eps)
# make a validation set
X, Y = shuffle(X, Y)
X = X.astype(np.float32)
Y = Y.astype(np.int32)
Xvalid, Yvalid = X[-1000:], Y[-1000:]
X, Y = X[:-1000], Y[:-1000]
# initialize hidden layers
N, D = X.shape
K = len(set(Y))
self.hidden_layers = []
M1 = D
count = 0
for M2 in self.hidden_layer_sizes:
h = HiddenLayer(M1, M2, count)
self.hidden_layers.append(h)
M1 = M2
count += 1
W, b = init_weight_and_bias(M1, K)
self.W = theano.shared(W, 'W_logreg')
self.b = theano.shared(b, 'b_logreg')
# collect params for later use
self.params = [self.W, self.b]
for h in self.hidden_layers:
self.params += h.params
# set up theano functions and variables
thX = T.fmatrix('X')
thY = T.ivector('Y')
pY = self.th_forward(thX)
rcost = reg*T.sum([(p*p).sum() for p in self.params])
cost = -T.mean(T.log(pY[T.arange(thY.shape[0]), thY])) + rcost
prediction = self.th_predict(thX)
# actual prediction function
self.predict_op = theano.function(inputs=[thX], outputs=prediction)
cost_predict_op = theano.function(inputs=[thX, thY], outputs=[cost, prediction])
updates = rmsprop(cost, self.params, learning_rate, mu, decay, eps)
train_op = theano.function(
inputs=[thX, thY],
updates=updates
)
n_batches = N // batch_sz
costs = []
for i in range(epochs):
X, Y = shuffle(X, Y)
for j in range(n_batches):
Xbatch = X[j*batch_sz:(j*batch_sz+batch_sz)]
Ybatch = Y[j*batch_sz:(j*batch_sz+batch_sz)]
train_op(Xbatch, Ybatch)
if j % 20 == 0:
c, p = cost_predict_op(Xvalid, Yvalid)
costs.append(c)
e = error_rate(Yvalid, p)
print("i:", i, "j:", j, "nb:", n_batches, "cost:", c, "error rate:", e)
if show_fig:
plt.plot(costs)
plt.show()
def th_forward(self, X):
Z = X
for h in self.hidden_layers:
Z = h.forward(Z)
return T.nnet.softmax(Z.dot(self.W) + self.b)
def th_predict(self, X):
pY = self.th_forward(X)
return T.argmax(pY, axis=1)
def predict(self, X):
return self.predict_op(X)
```
And we finally have our main method. We are going to create a model that contains 2000 units in the first hidden layer, and 1000 units in the second hidden layer.
```
def main():
X, Y = getData()
model = ANN([2000, 1000])
model.fit(X, Y, show_fig=True)
if __name__ == '__main__':
main()
```
|
github_jupyter
|
# Examples
Below you will find various examples for you to experiment with HOG. For each image, you can modify `cell_size`, `num_cells_per_block`, and `num_bins` (the number of angular bins in your histograms) to see how those parameters affect the resulting HOG descriptor. These examples will help you build intuition for what each parameter does and how it can be *tuned* to pick out the amount of detail required; a short sketch just before the main code cell shows how these parameters determine the length of the resulting feature vector. Below is a list of the available images that you can load:
* cat.jpeg
* jeep1.jpeg
* jeep2.jpeg
* jeep3.jpeg
* man.jpeg
* pedestrian_bike.jpeg
* roundabout.jpeg
* scrabble.jpeg
* shuttle.jpeg
* triangle_tile.jpeg
* watch.jpeg
* woman.jpeg
**NOTE**: If you are running this notebook in the Udacity workspace, there is around a 2 second lag in the interactive plot. This means that if you click in the image to zoom in, it will take about 2 seconds for the plot to refresh.
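Before diving into the full example, here is a minimal sketch of how the three tunable parameters determine the length of the HOG feature vector. It uses the same block and cell formulas as the code cell below; the 256 x 128 image size is a hypothetical value chosen only for illustration.
```
# A minimal sketch (not from the original notebook): how the tunable HOG
# parameters determine the feature-vector length, for a hypothetical
# 256 x 128 grayscale image.
img_h, img_w = 128, 256            # assumed image size (rows, cols)

cell_size = (8, 8)                 # (width, height) in pixels
num_cells_per_block = (2, 2)       # cells per block in (x, y)
num_bins = 9                       # angular bins per histogram
h_stride, v_stride = 1, 1          # block stride in units of cells

x_cells = img_w // cell_size[0]
y_cells = img_h // cell_size[1]

tot_bx = (x_cells - num_cells_per_block[0]) // h_stride + 1
tot_by = (y_cells - num_cells_per_block[1]) // v_stride + 1
tot_els = tot_bx * tot_by * num_cells_per_block[0] * num_cells_per_block[1] * num_bins

print('cells (x, y):', (x_cells, y_cells))
print('blocks (x, y):', (tot_bx, tot_by))
print('HOG feature-vector length:', tot_els)
```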
```
%matplotlib notebook
import cv2
import copy
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches
# Set the default figure size
plt.rcParams['figure.figsize'] = [9.8, 9]
# -------------------------- Select the Image and Specify the parameters for our HOG descriptor --------------------------
# Load the image
image = cv2.imread('./images/jeep2.jpeg')
# Cell Size in pixels (width, height). Must be smaller than the size of the detection window
# and must be chosen so that the resulting Block Size is smaller than the detection window.
cell_size = (8, 8)
# Number of cells per block in each direction (x, y). Must be chosen so that the resulting
# Block Size is smaller than the detection window
num_cells_per_block = (2, 2)
# Number of gradient orientation bins
num_bins = 9
# -------------------------------------------------------------------------------------------------------------------------
# Convert the original image to RGB
original_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# Convert the original image to gray scale
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Block Size in pixels (width, height). Must be an integer multiple of Cell Size.
# The Block Size must be smaller than the detection window
block_size = (num_cells_per_block[0] * cell_size[0],
num_cells_per_block[1] * cell_size[1])
# Calculate the number of cells that fit in our image in the x and y directions
x_cells = gray_image.shape[1] // cell_size[0]
y_cells = gray_image.shape[0] // cell_size[1]
# Horizontal distance between blocks in units of Cell Size. Must be an integer and it must
# be set such that (x_cells - num_cells_per_block[0]) / h_stride = integer.
h_stride = 1
# Vertical distance between blocks in units of Cell Size. Must be an integer and it must
# be set such that (y_cells - num_cells_per_block[1]) / v_stride = integer.
v_stride = 1
# Block Stride in pixels (horizontal, vertical). Must be an integer multiple of Cell Size
block_stride = (cell_size[0] * h_stride, cell_size[1] * v_stride)
# Specify the size of the detection window (Region of Interest) in pixels (width, height).
# It must be an integer multiple of Cell Size and it must cover the entire image. Because
# the detection window must be an integer multiple of cell size, depending on the size of
# your cells, the resulting detection window might be slightly smaller than the image.
# This is perfectly ok.
win_size = (x_cells * cell_size[0] , y_cells * cell_size[1])
# Print the shape of the gray scale image for reference
print('\nThe gray scale image has shape: ', gray_image.shape)
print()
# Print the parameters of our HOG descriptor
print('HOG Descriptor Parameters:\n')
print('Window Size:', win_size)
print('Cell Size:', cell_size)
print('Block Size:', block_size)
print('Block Stride:', block_stride)
print('Number of Bins:', num_bins)
print()
# Set the parameters of the HOG descriptor using the variables defined above
hog = cv2.HOGDescriptor(win_size, block_size, block_stride, cell_size, num_bins)
# Compute the HOG Descriptor for the gray scale image
hog_descriptor = hog.compute(gray_image)
# Calculate the total number of blocks along the width of the detection window
tot_bx = np.uint32(((x_cells - num_cells_per_block[0]) / h_stride) + 1)
# Calculate the total number of blocks along the height of the detection window
tot_by = np.uint32(((y_cells - num_cells_per_block[1]) / v_stride) + 1)
# Calculate the total number of elements in the feature vector
tot_els = (tot_bx) * (tot_by) * num_cells_per_block[0] * num_cells_per_block[1] * num_bins
# Reshape the feature vector to [blocks_y, blocks_x, num_cells_per_block_x, num_cells_per_block_y, num_bins].
# The blocks_x and blocks_y will be transposed so that the first index (blocks_y) refers to the row number
# and the second index to the column number. This will be useful later when we plot the feature vector, so
# that the feature vector indexing matches the image indexing.
hog_descriptor_reshaped = hog_descriptor.reshape(tot_bx,
tot_by,
num_cells_per_block[0],
num_cells_per_block[1],
num_bins).transpose((1, 0, 2, 3, 4))
# Create an array that will hold the average gradients for each cell
ave_grad = np.zeros((y_cells, x_cells, num_bins))
# Create an array that will count the number of histograms per cell
hist_counter = np.zeros((y_cells, x_cells, 1))
# Add up all the histograms for each cell and count the number of histograms per cell
for i in range (num_cells_per_block[0]):
for j in range(num_cells_per_block[1]):
ave_grad[i:tot_by + i,
j:tot_bx + j] += hog_descriptor_reshaped[:, :, i, j, :]
hist_counter[i:tot_by + i,
j:tot_bx + j] += 1
# Calculate the average gradient for each cell
ave_grad /= hist_counter
# Calculate the total number of vectors we have in all the cells.
len_vecs = ave_grad.shape[0] * ave_grad.shape[1] * ave_grad.shape[2]
# Create an array of num_bins angles equally spaced between 0 and 180 degrees, expressed in radians.
deg = np.linspace(0, np.pi, num_bins, endpoint = False)
# Each cell will have a histogram with num_bins. For each cell, plot each bin as a vector (with its magnitude
# equal to the height of the bin in the histogram, and its angle corresponding to the bin in the histogram).
# To do this, create rank 1 arrays that will hold the (x,y)-coordinate of all the vectors in all the cells in the
# image. Also, create the rank 1 arrays that will hold all the (U,V)-components of all the vectors in all the
# cells in the image. Create the arrays that will hold all the vector positions and components.
U = np.zeros((len_vecs))
V = np.zeros((len_vecs))
X = np.zeros((len_vecs))
Y = np.zeros((len_vecs))
# Set the counter to zero
counter = 0
# Use the cosine and sine functions to calculate the vector components (U,V) from their magnitudes. Remember the
# cosine and sine functions take angles in radians. Calculate the vector positions and magnitudes from the
# average gradient array
for i in range(ave_grad.shape[0]):
for j in range(ave_grad.shape[1]):
for k in range(ave_grad.shape[2]):
U[counter] = ave_grad[i,j,k] * np.cos(deg[k])
V[counter] = ave_grad[i,j,k] * np.sin(deg[k])
X[counter] = (cell_size[0] / 2) + (cell_size[0] * i)
Y[counter] = (cell_size[1] / 2) + (cell_size[1] * j)
counter = counter + 1
# Create the bins in degrees to plot our histogram.
angle_axis = np.linspace(0, 180, num_bins, endpoint = False)
angle_axis += ((angle_axis[1] - angle_axis[0]) / 2)
# Create a figure with 4 subplots arranged in 2 x 2
fig, ((a,b),(c,d)) = plt.subplots(2,2)
# Set the title of each subplot
a.set(title = 'Gray Scale Image\n(Click to Zoom)')
b.set(title = 'HOG Descriptor\n(Click to Zoom)')
c.set(title = 'Zoom Window', xlim = (0, 18), ylim = (0, 18), autoscale_on = False)
d.set(title = 'Histogram of Gradients')
# Plot the gray scale image
a.imshow(gray_image, cmap = 'gray')
a.set_aspect(aspect = 1)
# Plot the feature vector (HOG Descriptor)
b.quiver(Y, X, U, V, color = 'white', headwidth = 0, headlength = 0, scale_units = 'inches', scale = 5)
b.invert_yaxis()
b.set_aspect(aspect = 1)
b.set_facecolor('black')
# Define function for interactive zoom
def onpress(event):
# Unless the left mouse button is pressed, do nothing
if event.button != 1:
return
# Only accept clicks for subplots a and b
if event.inaxes in [a, b]:
# Get mouse click coordinates
x, y = event.xdata, event.ydata
# Select the cell closest to the mouse click coordinates
cell_num_x = np.uint32(x / cell_size[0])
cell_num_y = np.uint32(y / cell_size[1])
# Set the edge coordinates of the rectangle patch
edgex = x - (x % cell_size[0])
edgey = y - (y % cell_size[1])
# Create a rectangle patch that matches the cell selected above
rect = patches.Rectangle((edgex, edgey),
cell_size[0], cell_size[1],
linewidth = 1,
edgecolor = 'magenta',
facecolor='none')
# A single patch can only be used in a single plot. Create copies
# of the patch to use in the other subplots
rect2 = copy.copy(rect)
rect3 = copy.copy(rect)
# Update all subplots
a.clear()
a.set(title = 'Gray Scale Image\n(Click to Zoom)')
a.imshow(gray_image, cmap = 'gray')
a.set_aspect(aspect = 1)
a.add_patch(rect)
b.clear()
b.set(title = 'HOG Descriptor\n(Click to Zoom)')
b.quiver(Y, X, U, V, color = 'white', headwidth = 0, headlength = 0, scale_units = 'inches', scale = 5)
b.invert_yaxis()
b.set_aspect(aspect = 1)
b.set_facecolor('black')
b.add_patch(rect2)
c.clear()
c.set(title = 'Zoom Window')
c.quiver(Y, X, U, V, color = 'white', headwidth = 0, headlength = 0, scale_units = 'inches', scale = 1)
c.set_xlim(edgex - cell_size[0], edgex + (2 * cell_size[0]))
c.set_ylim(edgey - cell_size[1], edgey + (2 * cell_size[1]))
c.invert_yaxis()
c.set_aspect(aspect = 1)
c.set_facecolor('black')
c.add_patch(rect3)
d.clear()
d.set(title = 'Histogram of Gradients')
d.grid()
d.set_xlim(0, 180)
d.set_xticks(angle_axis)
d.set_xlabel('Angle')
d.bar(angle_axis,
ave_grad[cell_num_y, cell_num_x, :],
180 // num_bins,
align = 'center',
alpha = 0.5,
linewidth = 1.2,
edgecolor = 'k')
fig.canvas.draw()
# Create a connection between the figure and the mouse click
fig.canvas.mpl_connect('button_press_event', onpress)
plt.show()
```
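As a quick sanity check (not part of the original notebook), you can confirm that the descriptor returned by `hog.compute` has exactly `tot_els` elements, which verifies the block and cell bookkeeping above. This assumes the variables from the previous cell are still in scope.
```
# A minimal sketch, assuming hog_descriptor and tot_els from the cell above are in scope.
# Depending on the OpenCV version, hog.compute returns shape (tot_els, 1) or (tot_els,).
print('descriptor elements:', hog_descriptor.size)
print('expected elements:  ', int(tot_els))
assert hog_descriptor.size == int(tot_els)
```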