<a href="https://colab.research.google.com/github/sasha-kap/Events-Analytics/blob/master/GDELT_EDA.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### Imports
```
import pandas as pd
import numpy as np
pd.set_option('display.max_colwidth',500)
import matplotlib.pyplot as plt
import matplotlib as mpl
%matplotlib inline
plt.style.use('seaborn-white') # use plt.style.available to see list of available styles
import pickle
from datetime import datetime
from google.cloud import bigquery
import sys
sys.version
print("date ran:", datetime.today())
print("matplotlib version:", mpl.__version__)
print("pandas version:", pd.__version__)
project_id = 'spark-project-254623'
bucket_name = 'spark-projects'
```
### Provide credentials to the runtime
```
# authenticates Colab to the Google Cloud account
from google.colab import auth
auth.authenticate_user()
print('Authenticated')
```
### BigQuery dry run
Dry run query to check query size across different date ranges:
```
client = bigquery.Client(project=project_id)
job_config = bigquery.QueryJobConfig()
job_config.dry_run = True
job_config.use_query_cache = False
sql = '''
SELECT *
FROM `gdelt-bq.gdeltv2.events_partitioned`
WHERE
_PARTITIONDATE BETWEEN '2018-04-04' AND '2018-04-04'
'''
query_job = client.query(
sql,
# Location must match that of the dataset(s) referenced in the query.
location="US",
job_config=job_config,
) # API request
# A dry run query completes immediately.
assert query_job.state == "DONE"
assert query_job.dry_run
print("This query will process {:,} bytes.".format(query_job.total_bytes_processed))
```
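Since the goal is to compare query sizes across different date ranges, the same dry-run setup can be reused in a loop. The sketch below reuses the `client` and `job_config` defined above; the candidate date ranges are purely illustrative.
```
# compare dry-run bytes processed for a few candidate date ranges (illustrative ranges)
candidate_ranges = [
    ('2019-12-01', '2019-12-07'),
    ('2019-12-01', '2019-12-14'),
    ('2019-12-01', '2019-12-31'),
]
for start, end in candidate_ranges:
    sql = f'''
    SELECT *
    FROM `gdelt-bq.gdeltv2.events_partitioned`
    WHERE
      _PARTITIONDATE BETWEEN '{start}' AND '{end}'
    '''
    # dry run: returns immediately and reports the bytes that would be scanned
    job = client.query(sql, location="US", job_config=job_config)
    print("{} to {}: {:,} bytes".format(start, end, job.total_bytes_processed))
```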
### Query Events Table
```
client = bigquery.Client(project=project_id)
sql = '''
SELECT *
FROM `gdelt-bq.gdeltv2.events_partitioned`
WHERE
_PARTITIONDATE BETWEEN '2019-12-01' AND '2019-12-07'
'''
df = client.query(sql, location="US").to_dataframe()
```
### Save DataFrame to Pickle and then copy to existing Cloud Storage bucket
```
df.to_pickle("./dec19_wk1_events.pkl")
# upload file to GCS bucket
!gsutil cp ./dec19_wk1_events.pkl gs://{bucket_name}/dec19_wk1_events.pkl
```
### Copy Pickle file back to Colab instance
```
# run authentication before running this cell
!gsutil cp gs://{bucket_name}/dec19_wk1_events.pkl ./dec19_wk1_events.pkl
df = pd.read_pickle("./dec19_wk1_events.pkl")
```
### Exploratory Analysis of Queried Events
#### Dataset Properties
```
print(df.shape)
df.columns
df.info()
#CHECK FOR PRESENCE OF MISSING VALUES
miss_vls = pd.DataFrame(df.isnull().sum(axis=0).sort_values(ascending=False), columns=['Count'])
miss_vls['Percent'] = miss_vls['Count'].apply(lambda x: '{:.2f}%'.format((float(x) / df.shape[0]) * 100))
miss_vls[miss_vls.Count > 0]
```
#### Actor and Event Codes
First, we see that Actor1Code is not always recorded (missing on about 10% of events). Let's check if there is any correlation between EventCode and whether Actor1Code is present or null.
Let's import the Event Descriptions lookup table.
```
event_lt = pd.read_csv("https://gdeltproject.org/data/lookups/CAMEO.eventcodes.txt", sep='\t', dtype={'CAMEOEVENTCODE':str})
event_lt.set_index('CAMEOEVENTCODE',inplace=True)
event_lt.head()
# tabulate number of unique EventCode's in cases where Actor1Code is null
df[df.Actor1Code.isnull()]['EventCode'].unique().shape
# plot frequencies of EventCode values for events when Actor1Code is null
plt.figure(figsize=(30,5))
df[df.Actor1Code.isnull()]['EventCode'].value_counts().plot(kind="bar")
plt.show()
# display top 10 most frequent event codes when Actor1Code is null
event_code_cts = df[df.Actor1Code.isnull()]['EventCode'].value_counts().nlargest(10)
event_lt.merge(event_code_cts, left_index=True, right_index=True, how='right').rename({'EventCode':'N_EVENTS'},axis=1)
# plot frequencies of EventCode values for events when Actor1Code is NOT null
plt.figure(figsize=(30,5))
df[df.Actor1Code.notnull()]['EventCode'].value_counts().plot(kind="bar")
plt.show()
# display top 10 most frequent event codes when Actor1Code is NOT null
event_code_cts = df[df.Actor1Code.notnull()]['EventCode'].value_counts().nlargest(10)
event_lt.merge(event_code_cts, left_index=True, right_index=True, how='right').rename({'EventCode':'N_EVENTS'},axis=1)
#get counts of '010' code in the EventCode column, grouped by whether Actor1Code is null or not null
print("count of 010 code when Actor1Code is null:", int(df[(df.Actor1Code.isnull()) & (df.EventCode == '010')]['EventCode'].value_counts().values))
print("count of 010 code when Actor1Code is not null:", int(df[(df.Actor1Code.notnull()) & (df.EventCode == '010')]['EventCode'].value_counts().values))
```
010 Code appears quite frequently when Actor1Code is NOT null, but rarely when it IS null. Per https://www.gdeltproject.org/data/lookups/CAMEO.eventcodes.txt, 010 is "Make statement, unspecified". So, there is this one noticeable difference, but many codes still appear in large numbers regardless of Actor1Code being present.
Let's look into Actor2Code. Maybe it can help us better understand the relationship between EventCode and the presence of one or both of the ActorCodes.
```
# Check for overlaps between presence of Actor1Code and Actor2Code
print("Actor1Code NULL, Actor2Code NOT NULL:", df[df.Actor1Code.isnull() & df.Actor2Code.notnull()].shape[0], f"{df[df.Actor1Code.isnull() & df.Actor2Code.notnull()].shape[0] / df.shape[0] :.2%}")
print("Actor1Code NULL, Actor2Code NULL:", df[df.Actor1Code.isnull() & df.Actor2Code.isnull()].shape[0], f"{df[df.Actor1Code.isnull() & df.Actor2Code.isnull()].shape[0] / df.shape[0] :.2%}")
print("Actor1Code NOT NULL, Actor2Code NOT NULL:", df[df.Actor1Code.notnull() & df.Actor2Code.notnull()].shape[0], f"{df[df.Actor1Code.notnull() & df.Actor2Code.notnull()].shape[0] / df.shape[0] :.2%}")
print("Actor1Code NOT NULL, Actor2Code NULL:", df[df.Actor1Code.notnull() & df.Actor2Code.isnull()].shape[0], f"{df[df.Actor1Code.notnull() & df.Actor2Code.isnull()].shape[0] / df.shape[0] :.2%}")
```
We see that when Actor1Code is null, Actor2Code is almost always not null. So at least we can get one actor in those cases.
When Actor1Code is not null, Actor2Code is present only about 60% of the time.
We still need to determine what it means when one of the ActorCode fields is present and the other is null.
Let's check if there is a noticeable difference in EventCode values when Actor2Code is null versus not null, with Actor1Code present.
```
# plot frequencies of EventCode values for events when Actor1Code is NOT null and Actor2Code IS null
plt.figure(figsize=(30,5))
df[df.Actor1Code.notnull() & df.Actor2Code.isnull()]['EventCode'].value_counts().plot(kind="bar")
plt.show()
# plot frequencies of EventCode values for events when Actor1Code is NOT null and Actor2Code is NOT null
plt.figure(figsize=(30,5))
df[df.Actor1Code.notnull() & df.Actor2Code.notnull()]['EventCode'].value_counts().plot(kind="bar")
plt.show()
# let's plot the counts side by side for each EventCode (just the top 10 for case when Actor2Code is present and when it is null)
# first, subset the dataframe to rows where Actor1Code is not null
temp_df = df[df.Actor1Code.notnull()][['EventCode','Actor2Code']]
# create boolean column indicating whether Actor2Code is null or not null
temp_df['Actor2CodeIsNull'] = temp_df['Actor2Code'].apply(lambda x: x is None)
# create column that includes counts of each EventCode value, grouped by Actor2CodeIsNull, in the corresponding rows
temp_df['Count'] = temp_df.groupby(['EventCode','Actor2CodeIsNull']).transform(len)
# create column ranking count values within each Actor2CodeIsNull category (largest counts receive smallest(top) rank values)
temp_df['Rank'] = temp_df.groupby('Actor2CodeIsNull')['Count'].rank(ascending=False, method='min')
# subset dataframe to rows where Rank value is at or below the 10th smallest Rank value in Actor2CodeIsNull group
cond_null = (temp_df.Actor2CodeIsNull == 1) & (temp_df.Rank <= np.sort(temp_df[temp_df.Actor2CodeIsNull == 1]['Rank'].unique())[9])
cond_notnull = (temp_df.Actor2CodeIsNull == 0) & (temp_df.Rank <= np.sort(temp_df[temp_df.Actor2CodeIsNull == 0]['Rank'].unique())[9])
# verify the results
vc = pd.DataFrame(temp_df[cond_null | cond_notnull].groupby('Actor2CodeIsNull')['EventCode'].value_counts())
vc.rename(columns={'EventCode':"N_EVENTS"}, inplace=True)
# merge with event code descriptions lookup table
event_lt.index.names = ['EventCode']
event_lt.join(vc,how='inner')
# plot frequencies of EventCode values for events when Actor1Code is NOT null, split by whether Actor2Code is null or NOT null
temp_df[cond_null | cond_notnull].groupby('EventCode')['Actor2CodeIsNull'] \
.value_counts().unstack().plot(kind="bar")
plt.show()
```
The distributions of EventCode values differ somewhat between these two situations, but the difference is neither large enough nor easily interpretable enough to explain why Actor2Code is present in some events and missing in others.
```
#check number of unique Actor1Code values
df[df.Actor1Code.notnull()].Actor1Code.value_counts().shape[0]
# plot frequencies of 100 most frequent Actor1Code values
plt.figure(figsize=(30,5))
df[df.Actor1Code.notnull()]['Actor1Code'].value_counts().nlargest(100).plot(kind="bar")
plt.show()
# Check the most frequent combinations of Actor1Code and Actor2Code
df.groupby([df.Actor1Code, df.Actor2Code]).size().nlargest(40)
```
Many domestic group codes above (like LEG (Legislature) or MED (Media)) do not include a country code, so it is not clear which country's group is being referenced.
```
# Check which Actor1CountryCode values exist when Actor1Code just has one of the primary or secondary role codes
df[df.Actor1Code.isin(['GOV','COP','EDU','MIL','JUD','LEG','BUS','CVL','MED'])]['Actor1CountryCode'].value_counts()
```
It does not look like Actor1CountryCode can assist in identifying the country of group in the Actor1Code column. Let's check if Actor1Name can assist.
```
df[df.Actor1Code.isin(['GOV','COP','EDU','MIL','JUD','LEG','BUS','CVL','MED'])]['Actor1Name'].value_counts()
# plot frequencies of 100 most frequent Actor1Name values when Actor1Code has one of the main domestic group values
plt.figure(figsize=(30,5))
df[df.Actor1Code.isin(['GOV','COP','EDU','MIL','JUD','LEG','BUS','CVL','MED'])]['Actor1Name'].value_counts().nlargest(100).plot(kind="bar")
plt.show()
```
Even though the documentation says that Actor1Name contains the "actual name of the Actor1", the Actor1Name values appear to be just more specific categories of Actor1Code, not actual names.
Let's investigate the Actor1CountryCode field a bit more.
```
# Check what values the field generally takes.
df.Actor1CountryCode.value_counts(dropna=False)
# Check what Actor1Code values exist when Actor1CountryCode is USA
df[df.Actor1CountryCode == "USA"]['Actor1Code'].value_counts(dropna=False)
# plot frequencies of the 25 most frequent Actor1Code values when Actor1CountryCode is USA
plt.figure(figsize=(30,5))
df[df.Actor1CountryCode == 'USA']['Actor1Code'].value_counts().nlargest(25).plot(kind="bar")
plt.show()
```
Actor1CountryCode (even when not missing) does not appear to provide much more value over just the Actor1Code column.
#### Geography Codes
```
# Check values of Actor1Geo_Type
geo_type_vls = df.Actor1Geo_Type.value_counts(dropna=False).to_frame(name='Count')
geo_type_vls['Percent'] = geo_type_vls['Count'].apply(lambda x: '{:.2f}%'.format((float(x) / geo_type_vls.Count.sum()) * 100))
geo_type_vls
```
It is not clear what the 0 value represents (the documentation only identifies values 1 through 5). To try to understand it, let's check the Actor1Geo_FullName column for those rows. We saw earlier that Actor1Geo_FullName is null roughly the same number of times as there are 0 values in Actor1Geo_Type.
```
df[df.Actor1Geo_Type == 0]['Actor1Geo_FullName'].value_counts()
df[df.Actor1Geo_FullName.isnull()]['Actor1Geo_Type'].value_counts()
```
So, in all cases when Geo_Type is 0, Geo_FullName is null. There is a small number of cases when Geo_FullName is null, but Geo_Type takes on the value 1 (Country).
```
df[df.Actor1Geo_FullName.isnull() & (df.Actor1Geo_Type == 1)]['Actor1Geo_CountryCode'].value_counts(dropna=False)
```
So, at least the country name is available in the Geo_CountryCode field when Geo_FullName is null, except that two of these codes (RB and YI) are not found in the FIPS 10-4 country code list (https://en.wikipedia.org/wiki/List_of_FIPS_country_codes, https://www.gdeltproject.org/data/lookups/FIPS.country.txt).
```
# Check what ActionGeo_CountryCode values exist for those RB and YI Actor1 values
df[df.Actor1Geo_FullName.isnull() & (df.Actor1Geo_Type == 1)].groupby(['Actor1Geo_CountryCode','ActionGeo_CountryCode']).size()
# Download country code FIPS crosswalk file to check against
url = 'https://www.gdeltproject.org/data/lookups/FIPS.country.txt'
country_fips_df = pd.read_csv(url, header=None, delimiter='\t', names=['code','country'])
country_fips_df.head()
# Check whether FIPS10-4 country codes found in the Events table can all be mapped to the crosswalk file
event_countries = df.Actor1Geo_CountryCode.values.tolist()
xwalk_countries = country_fips_df.code.values.tolist()
list(set(event_countries) - set(xwalk_countries))
```
So, YI, RB and OC are country codes found in the Events table but not in the code-to-country_name crosswalk. (The same results as above are observed for Actor2Geo_CountryCode and ActionGeo_CountryCode values. And these three values are also not found in the GENC country code list that superseded FIPS codes.)
```
# Check Actor1Geo_Type and Actor1Geo_CountryCode values when Actor1Code has one of the domestic role codes
df[df.Actor1Code.isin(['GOV','COP','EDU','MIL','JUD','LEG','BUS','CVL','MED'])]['Actor1Geo_Type'].value_counts()
df[df.Actor1Code.isin(['GOV','COP','EDU','MIL','JUD','LEG','BUS','CVL','MED'])]['Actor1Geo_CountryCode'].value_counts()
```
## Load Packages
```
import hashlib
import os
import json
import datetime as date
import pandas as pd
```
## Creating Helper Functions
```
def hash_file(filename):
BLOCKSIZE = 65536
hasher = hashlib.sha256()
with open(filename, 'rb') as afile:
buf = afile.read(BLOCKSIZE)
while len(buf) > 0:
hasher.update(buf)
buf = afile.read(BLOCKSIZE)
return(hasher.hexdigest())
## maybe do not need the timestamp and name included?
def hash_block(block):
    sha = hashlib.sha256()
    # hashlib requires bytes, so encode the concatenated block fields
    sha.update((str(block['name']) +
                str(block['timestamp']) +
                str(block['data']) +
                str(block['previous_hash'])).encode('utf-8'))
    return sha.hexdigest()
```
## Hash Input File
This can be used as an ID of the file. It is based on the file contents, so if anything in the file changes, so does the hash.
```
hash_file('iris.csv')
```
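To see that the hash really does track the file contents, here is a quick illustrative check (the `iris_copy.csv` file name is just a throwaway used for this demonstration):
```
import shutil

# identical contents -> identical hash
shutil.copy('iris.csv', 'iris_copy.csv')
print(hash_file('iris.csv') == hash_file('iris_copy.csv'))   # True

# append a single byte -> the hash changes
with open('iris_copy.csv', 'ab') as f:
    f.write(b'\n')
print(hash_file('iris.csv') == hash_file('iris_copy.csv'))   # False
```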
## Testing Script
This is a small development step used to prototype the Python script created below.
```
iris = pd.read_csv('iris.csv')
iris_group = iris.groupby('Species').mean()
iris_group.to_csv('iris_group.csv')
```
## Creating Script
The following cell actually creates the Python script that calculates the group means and saves them to a CSV file.
```
%%writefile iris_groupmeans.py
import pandas as pd
iris = pd.read_csv('iris.csv')
iris_group = iris.groupby('Species').mean()
iris_group.to_csv('iris_group.csv')
```
## Hashing Created Python Script
Hashing the script file in order to get the content hash to use in the block creation.
```
hash_file('iris_groupmeans.py')
```
## Hashing Output File
Hashing the created output file to get the content hash to use in the block creation.
```
hash_file('iris_group.csv')
```
## Creating Block
```
block = {'name': 'iris_group.csv',
'data': hash_file('iris_group.csv'),
'timestamp': str(date.datetime.now()),
'previous_hash': [hash_file('iris.csv'), hash_file('iris_groupmeans.py')],
'hash': ''}
block
block['hash'] = hash_block(block)
block
```
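One way to use such a block later is to re-verify it against the current files. The sketch below is ours (the `verify_block` name is not part of the original workflow) and relies only on the `hash_file` and `hash_block` helpers defined above:
```
def verify_block(block, filename):
    # re-hash the current file contents and compare with the recorded data hash
    data_ok = hash_file(filename) == block['data']
    # re-hash the block fields (excluding 'hash') and compare with the recorded block hash
    block_ok = hash_block(block) == block['hash']
    return data_ok and block_ok

verify_block(block, 'iris_group.csv')
```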
## Loading merkledag package
Testing out the functionality of the package.
```
!pip install /Users/kgosik/Documents/Projects/MerkleDAGWorkflow/python/pypi_package/merkledag
from merkledag import *
init()
iris_block = create_genesis_block('iris.csv')
iris_block.hash
iris_block.data
iris_block.previous_hashes
import hashlib
hashlib.sha256((str(iris_block.name) +
                #str(iris_block.timestamp) +
                str(iris_block.data) +
                str(iris_block.previous_hashes)).encode('utf-8')).hexdigest()
iris_block.__dict__
check = Block('iris.csv', date.datetime.now(), ['GenesisFile', 'AnotherFile'])
check.__dict__
check2 = Block('iris.csv', date.datetime.now(), ['AnotherFile', 'GenesisFile'])
check2.__dict__
check == check2
```
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
# Feast Azure Provider Tutorial: Register Features
In this notebook you will connect to your feature store and register features into a central repository hosted on Azure Blob Storage. Note that the best practice for registering features is a CI/CD process, e.g. GitHub Actions or Azure DevOps.
## Configure Feature Repo
The cell below connects to your feature store. __You need to update the feature_repo/feature_store.yaml file so that the registry path points to your blob location__
```
import os
from feast import FeatureStore
from azureml.core import Workspace
# access key vault to get secrets
ws = Workspace.from_config()
kv = ws.get_default_keyvault()
# update with your connection string
os.environ['SQL_CONN']=kv.get_secret("FEAST-SQL-CONN")
os.environ['REDIS_CONN']=kv.get_secret("FEAST-REDIS-CONN")
# connect to feature store
fs = FeatureStore("./feature_repo")
```
## Define the data source (offline store)
The data source refers to raw underlying data (a table in Azure SQL DB or Synapse SQL). Feast uses a time-series data model to represent data. This data model is used to interpret feature data in data sources in order to build training datasets or when materializing features into an online store.
```
from feast_azure_provider.mssqlserver_source import MsSqlServerSource
orders_table = "orders"
driver_hourly_table = "driver_hourly"
customer_profile_table = "customer_profile"
driver_source = MsSqlServerSource(
table_ref=driver_hourly_table,
event_timestamp_column="datetime",
created_timestamp_column="created",
)
customer_source = MsSqlServerSource(
table_ref=customer_profile_table,
event_timestamp_column="datetime",
created_timestamp_column="",
)
```
## Define Feature Views
A feature view is an object that represents a logical group of time-series feature data as it is found in a data source. Feature views consist of one or more entities, features, and a data source. Feature views allow Feast to model your existing feature data in a consistent way in both an offline (training) and online (serving) environment.
Feature views are used during:
- The generation of training datasets by querying the data source of feature views in order to find historical feature values. A single training dataset may consist of features from multiple feature views.
- Loading of feature values into an online store. Feature views determine the storage schema in the online store.
- Retrieval of features from the online store. Feature views provide the schema definition to Feast in order to look up features from the online store.
__NOTE: Feast does not generate feature values. It acts as the ingestion and serving system. The data sources described within feature views should reference feature values in their already computed form.__
```
from feast import Feature, FeatureView, ValueType
from datetime import timedelta
driver_fv = FeatureView(
name="driver_stats",
entities=["driver"],
features=[
Feature(name="conv_rate", dtype=ValueType.FLOAT),
Feature(name="acc_rate", dtype=ValueType.FLOAT),
Feature(name="avg_daily_trips", dtype=ValueType.INT32),
],
batch_source=driver_source,
ttl=timedelta(hours=2),
)
customer_fv = FeatureView(
name="customer_profile",
entities=["customer_id"],
features=[
Feature(name="current_balance", dtype=ValueType.FLOAT),
Feature(name="avg_passenger_count", dtype=ValueType.FLOAT),
Feature(name="lifetime_trip_count", dtype=ValueType.INT32),
],
batch_source=customer_source,
ttl=timedelta(days=2),
)
```
# Define entities
An entity is a collection of semantically related features. Users define entities to map to the domain of their use case. For example, a ride-hailing service could have customers and drivers as their entities, which group related features that correspond to these customers and drivers.
Entities are defined as part of feature views. Entities are used to identify the primary key on which feature values should be stored and retrieved. These keys are used during the lookup of feature values from the online store and the join process in point-in-time joins. It is possible to define composite entities (more than one entity object) in a feature view.
Entities should be reused across feature views.
## Entity key
A related concept is an entity key. These are one or more entity values that uniquely describe a feature view record. In the case of an entity (like a driver) that only has a single entity field, the entity is an entity key. However, it is also possible for an entity key to consist of multiple entity values. For example, a feature view with the composite entity of (customer, country) might have an entity key of (1001, 5).
Entity keys act as primary keys. They are used during the lookup of features from the online store, and they are also used to match feature rows across feature views during point-in-time joins.
```
from feast import Entity
driver = Entity(name="driver", join_key="driver_id", value_type=ValueType.INT64)
customer = Entity(name="customer_id", value_type=ValueType.INT64)
```
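To illustrate the composite entity key described above, a feature view keyed on more than one entity might look like the sketch below. Everything in it is hypothetical (the `country` entity, the `customer_country_stats` table, and its feature are not part of this tutorial's data); it only shows how multiple entities would appear in a feature view definition.
```
# hypothetical sketch: a feature view keyed on (customer_id, country)
country = Entity(name="country", join_key="country_id", value_type=ValueType.INT64)

customer_country_source = MsSqlServerSource(
    table_ref="customer_country_stats",      # hypothetical table
    event_timestamp_column="datetime",
    created_timestamp_column="",
)

customer_country_fv = FeatureView(
    name="customer_country_stats",
    entities=["customer_id", "country"],     # composite entity -> entity key like (1001, 5)
    features=[
        Feature(name="trips_in_country", dtype=ValueType.INT32),   # hypothetical feature
    ],
    batch_source=customer_country_source,
    ttl=timedelta(days=2),
)
```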
## Feast `apply()`
Running Feast `apply` will:
1. Scan Python files in your feature repository and find all Feast object definitions, such as feature views, entities, and data sources.
1. Validate your feature definitions.
1. Sync the metadata about Feast objects to the registry. If a registry does not exist, it will be instantiated. The standard registry is a simple protobuf binary file stored on Azure Blob Storage.
1. Create all necessary feature store infrastructure. The exact infrastructure that is deployed or configured depends on the provider configuration you have set in feature_store.yaml.
```
fs.apply([driver, driver_fv, customer, customer_fv])
```
# Bigquery - Query
## Intended Use
A Kubeflow Pipelines component that submits a query to the Google Cloud BigQuery service and dumps the results to a Google Cloud Storage blob.
## Input:
Name | Description
:--- | :----------
query | The query used by Bigquery service to fetch the results.
project_id | The project to execute the query job.
dataset_id | The ID of the persistent dataset to keep the results of the query. If the dataset does not exist, the operation will create a new one.
table_id | The ID of the table to keep the results of the query. If absent, the operation will generate a random id for the table.
output_gcs_path | The GCS blob path to dump the query results to.
dataset_location | The location to create the dataset. Defaults to `US`.
job_config | The full config spec for the query job. See [QueryJobConfig](https://googleapis.github.io/google-cloud-python/latest/bigquery/generated/google.cloud.bigquery.job.QueryJobConfig.html#google.cloud.bigquery.job.QueryJobConfig) for details.
## Output:
Name | Description
:--- | :----------
output_gcs_path | The GCS blob path to dump the query results to.
## Sample
Note: the sample code below works both in an IPython notebook and as plain Python code.
### Set sample parameters
```
# Required Parameters
PROJECT_ID = '<Please put your project ID here>'
GCS_WORKING_DIR = 'gs://<Please put your GCS path here>' # No ending slash
# Optional Parameters
EXPERIMENT_NAME = 'Bigquery -Query'
COMPONENT_SPEC_URI = 'https://raw.githubusercontent.com/kubeflow/pipelines/d2f5cc92a46012b9927209e2aaccab70961582dc/components/gcp/bigquery/query/component.yaml'
```
### Install KFP SDK
```
# Install the SDK (Uncomment the code if the SDK is not installed before)
# KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.11/kfp.tar.gz'
# !pip3 install $KFP_PACKAGE --upgrade
```
### Load component definitions
```
import kfp.components as comp
bigquery_query_op = comp.load_component_from_url(COMPONENT_SPEC_URI)
display(bigquery_query_op)
```
### Run the component as a single pipeline
```
import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Bigquery query pipeline',
description='Bigquery query pipeline'
)
def pipeline(
query,
project_id,
dataset_id='',
table_id='',
output_gcs_path='',
dataset_location='US',
job_config=''
):
bigquery_query_op(query, project_id, dataset_id, table_id, output_gcs_path, dataset_location,
job_config).apply(gcp.use_gcp_secret('user-gcp-sa'))
```
### Compile the pipeline
```
pipeline_func = pipeline
pipeline_filename = pipeline_func.__name__ + '.pipeline.tar.gz'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
```
### Submit the pipeline for execution
```
#Specify pipeline argument values
arguments = {
'query': 'SELECT * FROM `bigquery-public-data.stackoverflow.posts_questions` LIMIT 10',
'project_id': PROJECT_ID,
'output_gcs_path': '{}/bigquery/query/questions.csv'.format(GCS_WORKING_DIR)
}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
```
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = ''
import tensorflow as tf
import malaya_speech
import malaya_speech.train
from malaya_speech.train.model import fastpitch
import numpy as np
_pad = 'pad'
_start = 'start'
_eos = 'eos'
_punctuation = "!'(),.:;? "
_special = '-'
_letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'
_rejected = '\'():;"'
MALAYA_SPEECH_SYMBOLS = (
[_pad, _start, _eos] + list(_special) + list(_punctuation) + list(_letters)
)
input_ids = tf.placeholder(tf.int32, [None, None])
lens = tf.placeholder(tf.int32, [None, None])
mel_outputs = tf.placeholder(tf.float32, [None, None, 80])
mel_lengths = tf.placeholder(tf.int32, [None])
pitches = tf.placeholder(tf.float32, [None, None])
pitches_lengths = tf.placeholder(tf.int32, [None])
config = malaya_speech.config.fastspeech2_config
config = fastpitch.Config(
vocab_size = len(MALAYA_SPEECH_SYMBOLS), **config
)
model = fastpitch.Model(config)
r_training = model(input_ids, lens, pitches, training = False)
speed_ratios = tf.placeholder(tf.float32, [None], name = 'speed_ratios')
pitch_ratios = tf.placeholder(tf.float32, [None], name = 'pitch_ratios')
pitch_addition = tf.placeholder(tf.float32, [None], name = 'pitch_addition')
def transform(pitch_outputs, attention_mask):
weights = tf.cast(attention_mask, tf.int32) * tf.expand_dims(tf.range(tf.shape(pitch_outputs)[1]), 0)
weights = tf.cast(weights, tf.float32) / tf.cast(tf.shape(pitch_outputs)[1], tf.float32)
weights += 2.0
return pitch_outputs * weights
r = model.inference(input_ids, speed_ratios, pitch_ratios, pitch_addition)
r
decoder_output = tf.identity(r[0], name = 'decoder_output')
post_mel_outputs = tf.identity(r[1], name = 'post_mel_outputs')
pitch_outputs = tf.identity(r[3], name = 'pitch_outputs')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
path = 'fastpitch-female-singlish'
ckpt_path = tf.train.latest_checkpoint(path)
ckpt_path
saver = tf.train.Saver()
saver.restore(sess, ckpt_path)
import re
from unidecode import unidecode
import malaya
normalizer = malaya.normalize.normalizer(date = False, time = False)
pad_to = 8
def tts_encode(string: str, add_eos: bool = True):
r = [MALAYA_SPEECH_SYMBOLS.index(c) for c in string if c in MALAYA_SPEECH_SYMBOLS]
if add_eos:
r = r + [MALAYA_SPEECH_SYMBOLS.index('eos')]
return r
def put_spacing_num(string):
string = re.sub('[A-Za-z]+', lambda ele: ' ' + ele[0] + ' ', string)
return re.sub(r'[ ]+', ' ', string).strip()
def convert_to_ascii(string):
return unidecode(string)
_whitespace_re = re.compile(r'\s+')  # define the missing regex so collapse_whitespace is runnable
def collapse_whitespace(string):
    return re.sub(_whitespace_re, ' ', string)
def cleaning(string, normalize = True, add_eos = False):
sequence = []
string = convert_to_ascii(string)
if string[-1] in '-,':
string = string[:-1]
if string[-1] not in '.,?!':
string = string + '.'
string = string.replace('&', ' dan ')
string = string.replace(':', ',').replace(';', ',')
if normalize:
t = normalizer._tokenizer(string)
for i in range(len(t)):
if t[i] == '-':
t[i] = ','
string = ' '.join(t)
string = normalizer.normalize(string,
check_english = False,
normalize_entity = False,
normalize_text = False,
normalize_url = True,
normalize_email = True,
normalize_year = True)
string = string['normalize']
else:
string = string
string = put_spacing_num(string)
string = ''.join([c for c in string if c in MALAYA_SPEECH_SYMBOLS and c not in _rejected])
string = re.sub(r'[ ]+', ' ', string).strip()
string = string.lower()
ids = tts_encode(string, add_eos = add_eos)
text_input = np.array(ids)
num_pad = pad_to - ((len(text_input) + 2) % pad_to)
text_input = np.pad(
text_input, ((1, 1)), 'constant', constant_values = ((1, 2))
)
text_input = np.pad(
text_input, ((0, num_pad)), 'constant', constant_values = 0
)
return string, text_input
import matplotlib.pyplot as plt
# https://umno-online.my/2020/12/28/isu-kartel-daging-haram-lagi-pihak-gesa-kerajaan-ambil-tindakan-tegas-drastik/
string1 = 'PETALING JAYA: Former prime minister Najib Razak has criticised the Inland Revenue Board’s (LHDN) move to serve him a bankruptcy notice, which his legal team had earlier called a political ploy.'
t, ids = cleaning(string1)
t, ids
%%time
o = sess.run([decoder_output, post_mel_outputs, pitch_outputs], feed_dict = {input_ids: [ids],
speed_ratios: [1.0],
pitch_ratios: [1.0],
pitch_addition: [-0.5]})
mel_outputs_ = np.reshape(o[1], [-1, 80])
fig = plt.figure(figsize=(10, 8))
ax1 = fig.add_subplot(311)
ax1.set_title(f'Predicted Mel-before-Spectrogram')
im = ax1.imshow(np.rot90(mel_outputs_), aspect='auto', interpolation='none')
fig.colorbar(mappable=im, shrink=0.65, orientation='horizontal', ax=ax1)
plt.show()
mel_outputs_ = np.reshape(o[0], [-1, 80])
fig = plt.figure(figsize=(10, 8))
ax1 = fig.add_subplot(311)
ax1.set_title(f'Predicted Mel-before-Spectrogram')
im = ax1.imshow(np.rot90(mel_outputs_), aspect='auto', interpolation='none')
fig.colorbar(mappable=im, shrink=0.65, orientation='horizontal', ax=ax1)
plt.show()
import pickle
with open('a.pkl', 'wb') as fopen:
pickle.dump([np.reshape(o[0], [-1, 80]), np.reshape(o[1], [-1, 80])], fopen)
saver = tf.train.Saver()
saver.save(sess, 'fastpitch-female-singlish-output/model.ckpt')
strings = ','.join(
[
n.name
for n in tf.get_default_graph().as_graph_def().node
if ('Variable' in n.op
or 'gather' in n.op.lower()
or 'Placeholder' in n.name
or 'ratios' in n.name
or 'pitch_addition' in n.name
or 'pitch_outputs' in n.name
or 'post_mel_outputs' in n.name
or 'decoder_output' in n.name
or 'alignment_histories' in n.name)
and 'adam' not in n.name
and 'global_step' not in n.name
and 'Assign' not in n.name
and 'ReadVariableOp' not in n.name
and 'Gather' not in n.name
and 'IsVariableInitialized' not in n.name
]
)
strings.split(',')
def freeze_graph(model_dir, output_node_names):
if not tf.gfile.Exists(model_dir):
raise AssertionError(
"Export directory doesn't exists. Please specify an export "
'directory: %s' % model_dir
)
checkpoint = tf.train.get_checkpoint_state(model_dir)
input_checkpoint = checkpoint.model_checkpoint_path
absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])
output_graph = absolute_model_dir + '/frozen_model.pb'
clear_devices = True
with tf.Session(graph = tf.Graph()) as sess:
saver = tf.train.import_meta_graph(
input_checkpoint + '.meta', clear_devices = clear_devices
)
saver.restore(sess, input_checkpoint)
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess,
tf.get_default_graph().as_graph_def(),
output_node_names.split(','),
)
with tf.gfile.GFile(output_graph, 'wb') as f:
f.write(output_graph_def.SerializeToString())
print('%d ops in the final graph.' % len(output_graph_def.node))
freeze_graph('fastpitch-female-singlish-output', strings)
def load_graph(frozen_graph_filename):
with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def)
return graph
g = load_graph('fastpitch-female-singlish-output/frozen_model.pb')
test_sess = tf.InteractiveSession(graph = g)
output_nodes = ['decoder_output', 'post_mel_outputs', 'pitch_outputs']
outputs = {n: g.get_tensor_by_name(f'import/{n}:0') for n in output_nodes}
from tensorflow.tools.graph_transforms import TransformGraph
transforms = ['add_default_attributes',
'remove_nodes(op=Identity, op=CheckNumerics)',
'fold_batch_norms',
'fold_old_batch_norms',
'quantize_weights(fallback_min=-1024, fallback_max=1024)',
'strip_unused_nodes',
'sort_by_execution_order']
pb = 'fastpitch-female-singlish-output/frozen_model.pb'
input_graph_def = tf.GraphDef()
with tf.gfile.FastGFile(pb, 'rb') as f:
input_graph_def.ParseFromString(f.read())
transformed_graph_def = TransformGraph(input_graph_def,
['Placeholder', 'speed_ratios', 'pitch_ratios', 'pitch_addition'],
output_nodes, transforms)
with tf.gfile.GFile(f'{pb}.quantized', 'wb') as f:
f.write(transformed_graph_def.SerializeToString())
b2_application_key_id = os.environ['b2_application_key_id']
b2_application_key = os.environ['b2_application_key']
from b2sdk.v1 import *
info = InMemoryAccountInfo()
b2_api = B2Api(info)
application_key_id = b2_application_key_id
application_key = b2_application_key
b2_api.authorize_account("production", application_key_id, application_key)
file_info = {'how': 'good-file'}
b2_bucket = b2_api.get_bucket_by_name('malaya-speech-model')
file = 'fastpitch-female-singlish-output/frozen_model.pb'
outPutname = 'v1/tts/fastpitch-female-singlish.pb'
b2_bucket.upload_local_file(
local_file=file,
file_name=outPutname,
file_infos=file_info,
)
file = 'fastpitch-female-singlish-output/frozen_model.pb.quantized'
outPutname = 'v1/tts/fastpitch-female-singlish.pb.quantized'
b2_bucket.upload_local_file(
local_file=file,
file_name=outPutname,
file_infos=file_info,
)
```
|
github_jupyter
|
import os
os.environ['CUDA_VISIBLE_DEVICES'] = ''
import tensorflow as tf
import malaya_speech
import malaya_speech.train
from malaya_speech.train.model import fastpitch
import numpy as np
_pad = 'pad'
_start = 'start'
_eos = 'eos'
_punctuation = "!'(),.:;? "
_special = '-'
_letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'
_rejected = '\'():;"'
MALAYA_SPEECH_SYMBOLS = (
[_pad, _start, _eos] + list(_special) + list(_punctuation) + list(_letters)
)
input_ids = tf.placeholder(tf.int32, [None, None])
lens = tf.placeholder(tf.int32, [None, None])
mel_outputs = tf.placeholder(tf.float32, [None, None, 80])
mel_lengths = tf.placeholder(tf.int32, [None])
pitches = tf.placeholder(tf.float32, [None, None])
pitches_lengths = tf.placeholder(tf.int32, [None])
config = malaya_speech.config.fastspeech2_config
config = fastpitch.Config(
vocab_size = len(MALAYA_SPEECH_SYMBOLS), **config
)
model = fastpitch.Model(config)
r_training = model(input_ids, lens, pitches, training = False)
speed_ratios = tf.placeholder(tf.float32, [None], name = 'speed_ratios')
pitch_ratios = tf.placeholder(tf.float32, [None], name = 'pitch_ratios')
pitch_addition = tf.placeholder(tf.float32, [None], name = 'pitch_addition')
def transform(pitch_outputs, attention_mask):
weights = tf.cast(attention_mask, tf.int32) * tf.expand_dims(tf.range(tf.shape(pitch_outputs)[1]), 0)
weights = tf.cast(weights, tf.float32) / tf.cast(tf.shape(pitch_outputs)[1], tf.float32)
weights += 2.0
return pitch_outputs * weights
r = model.inference(input_ids, speed_ratios, pitch_ratios, pitch_addition)
r
decoder_output = tf.identity(r[0], name = 'decoder_output')
post_mel_outputs = tf.identity(r[1], name = 'post_mel_outputs')
pitch_outputs = tf.identity(r[3], name = 'pitch_outputs')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
path = 'fastpitch-female-singlish'
ckpt_path = tf.train.latest_checkpoint(path)
ckpt_path
saver = tf.train.Saver()
saver.restore(sess, ckpt_path)
import re
from unidecode import unidecode
import malaya
normalizer = malaya.normalize.normalizer(date = False, time = False)
pad_to = 8
def tts_encode(string: str, add_eos: bool = True):
r = [MALAYA_SPEECH_SYMBOLS.index(c) for c in string if c in MALAYA_SPEECH_SYMBOLS]
if add_eos:
r = r + [MALAYA_SPEECH_SYMBOLS.index('eos')]
return r
def put_spacing_num(string):
string = re.sub('[A-Za-z]+', lambda ele: ' ' + ele[0] + ' ', string)
return re.sub(r'[ ]+', ' ', string).strip()
def convert_to_ascii(string):
return unidecode(string)
def collapse_whitespace(string):
return re.sub(_whitespace_re, ' ', string)
def cleaning(string, normalize = True, add_eos = False):
sequence = []
string = convert_to_ascii(string)
if string[-1] in '-,':
string = string[:-1]
if string[-1] not in '.,?!':
string = string + '.'
string = string.replace('&', ' dan ')
string = string.replace(':', ',').replace(';', ',')
if normalize:
t = normalizer._tokenizer(string)
for i in range(len(t)):
if t[i] == '-':
t[i] = ','
string = ' '.join(t)
string = normalizer.normalize(string,
check_english = False,
normalize_entity = False,
normalize_text = False,
normalize_url = True,
normalize_email = True,
normalize_year = True)
string = string['normalize']
else:
string = string
string = put_spacing_num(string)
string = ''.join([c for c in string if c in MALAYA_SPEECH_SYMBOLS and c not in _rejected])
string = re.sub(r'[ ]+', ' ', string).strip()
string = string.lower()
ids = tts_encode(string, add_eos = add_eos)
text_input = np.array(ids)
num_pad = pad_to - ((len(text_input) + 2) % pad_to)
text_input = np.pad(
text_input, ((1, 1)), 'constant', constant_values = ((1, 2))
)
text_input = np.pad(
text_input, ((0, num_pad)), 'constant', constant_values = 0
)
return string, text_input
import matplotlib.pyplot as plt
# https://umno-online.my/2020/12/28/isu-kartel-daging-haram-lagi-pihak-gesa-kerajaan-ambil-tindakan-tegas-drastik/
string1 = 'PETALING JAYA: Former prime minister Najib Razak has criticised the Inland Revenue Board’s (LHDN) move to serve him a bankruptcy notice, which his legal team had earlier called a political ploy.'
t, ids = cleaning(string1)
t, ids
%%time
o = sess.run([decoder_output, post_mel_outputs, pitch_outputs], feed_dict = {input_ids: [ids],
speed_ratios: [1.0],
pitch_ratios: [1.0],
pitch_addition: [-0.5]})
mel_outputs_ = np.reshape(o[1], [-1, 80])
fig = plt.figure(figsize=(10, 8))
ax1 = fig.add_subplot(311)
ax1.set_title(f'Predicted Mel-before-Spectrogram')
im = ax1.imshow(np.rot90(mel_outputs_), aspect='auto', interpolation='none')
fig.colorbar(mappable=im, shrink=0.65, orientation='horizontal', ax=ax1)
plt.show()
mel_outputs_ = np.reshape(o[0], [-1, 80])
fig = plt.figure(figsize=(10, 8))
ax1 = fig.add_subplot(311)
ax1.set_title(f'Predicted Mel-before-Spectrogram')
im = ax1.imshow(np.rot90(mel_outputs_), aspect='auto', interpolation='none')
fig.colorbar(mappable=im, shrink=0.65, orientation='horizontal', ax=ax1)
plt.show()
import pickle
with open('a.pkl', 'wb') as fopen:
pickle.dump([np.reshape(o[0], [-1, 80]), np.reshape(o[1], [-1, 80])], fopen)
saver = tf.train.Saver()
saver.save(sess, 'fastpitch-female-singlish-output/model.ckpt')
strings = ','.join(
[
n.name
for n in tf.get_default_graph().as_graph_def().node
if ('Variable' in n.op
or 'gather' in n.op.lower()
or 'Placeholder' in n.name
or 'ratios' in n.name
or 'pitch_addition' in n.name
or 'pitch_outputs' in n.name
or 'post_mel_outputs' in n.name
or 'decoder_output' in n.name
or 'alignment_histories' in n.name)
and 'adam' not in n.name
and 'global_step' not in n.name
and 'Assign' not in n.name
and 'ReadVariableOp' not in n.name
and 'Gather' not in n.name
and 'IsVariableInitialized' not in n.name
]
)
strings.split(',')
def freeze_graph(model_dir, output_node_names):
if not tf.gfile.Exists(model_dir):
raise AssertionError(
"Export directory doesn't exists. Please specify an export "
'directory: %s' % model_dir
)
checkpoint = tf.train.get_checkpoint_state(model_dir)
input_checkpoint = checkpoint.model_checkpoint_path
absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])
output_graph = absolute_model_dir + '/frozen_model.pb'
clear_devices = True
with tf.Session(graph = tf.Graph()) as sess:
saver = tf.train.import_meta_graph(
input_checkpoint + '.meta', clear_devices = clear_devices
)
saver.restore(sess, input_checkpoint)
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess,
tf.get_default_graph().as_graph_def(),
output_node_names.split(','),
)
with tf.gfile.GFile(output_graph, 'wb') as f:
f.write(output_graph_def.SerializeToString())
print('%d ops in the final graph.' % len(output_graph_def.node))
freeze_graph('fastpitch-female-singlish-output', strings)
def load_graph(frozen_graph_filename):
with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def)
return graph
g = load_graph('fastpitch-female-singlish-output/frozen_model.pb')
test_sess = tf.InteractiveSession(graph = g)
output_nodes = ['decoder_output', 'post_mel_outputs', 'pitch_outputs']
outputs = {n: g.get_tensor_by_name(f'import/{n}:0') for n in output_nodes}
from tensorflow.tools.graph_transforms import TransformGraph
transforms = ['add_default_attributes',
'remove_nodes(op=Identity, op=CheckNumerics)',
'fold_batch_norms',
'fold_old_batch_norms',
'quantize_weights(fallback_min=-1024, fallback_max=1024)',
'strip_unused_nodes',
'sort_by_execution_order']
pb = 'fastpitch-female-singlish-output/frozen_model.pb'
input_graph_def = tf.GraphDef()
with tf.gfile.FastGFile(pb, 'rb') as f:
input_graph_def.ParseFromString(f.read())
transformed_graph_def = TransformGraph(input_graph_def,
['Placeholder', 'speed_ratios', 'pitch_ratios', 'pitch_addition'],
output_nodes, transforms)
with tf.gfile.GFile(f'{pb}.quantized', 'wb') as f:
f.write(transformed_graph_def.SerializeToString())
b2_application_key_id = os.environ['b2_application_key_id']
b2_application_key = os.environ['b2_application_key']
from b2sdk.v1 import *
info = InMemoryAccountInfo()
b2_api = B2Api(info)
application_key_id = b2_application_key_id
application_key = b2_application_key
b2_api.authorize_account("production", application_key_id, application_key)
file_info = {'how': 'good-file'}
b2_bucket = b2_api.get_bucket_by_name('malaya-speech-model')
file = 'fastpitch-female-singlish-output/frozen_model.pb'
outPutname = 'v1/tts/fastpitch-female-singlish.pb'
b2_bucket.upload_local_file(
local_file=file,
file_name=outPutname,
file_infos=file_info,
)
file = 'fastpitch-female-singlish-output/frozen_model.pb.quantized'
outPutname = 'v1/tts/fastpitch-female-singlish.pb.quantized'
b2_bucket.upload_local_file(
local_file=file,
file_name=outPutname,
file_infos=file_info,
)
# Week 1: Introduction to Computer Vision
## Notebook 3: Semantic Segmentation with a U-Net Convolutional Neural Network using PyTorch
Welcome to the fourth notebook of this week's Applied AI Study Group! We will study the semantic segmentation problem with the MSRC-v2 image dataset provided by Microsoft. The aim of this task is to segment the objects in the given images.
### 1. Semantic Segmentation
Semantic segmentation aims to label each pixel of a given image (aka classify pixel-wise). We treat different objects of the same class as if they were the same object. In contrast, instance segmentation treats each object of the same class as a separate object and therefore labels them differently, e.g. object 1, object 2, etc. In this notebook, we will tackle the problem of semantic segmentation. Segmentation enables pixel-wise operations on images: for example, portrait mode requires differentiating between the foreground and the background of an image, and we blur out the pixels that are classified as background.
So, how do we build our model for this case? Convolution filters have proven capable of processing structured data such as images. However, they reduce the spatial size of their input, depending on their kernel size and stride. We need a model whose output has the same size as its input, since we want to retrieve an image of the same resolution as the one we feed into the model. Luckily for us, the U-Net architecture was designed for exactly this kind of task. We will study U-Nets in the following section.
### 2. U-Net Convolutional Neural Network

We can divide the U-Net architecture into two parts while studying it: the left half of the network is responsible for information encoding and the right half of the network is responsible for information decoding.
* The encoder captures the context of the image with a series of convolution and pooling operations; hence, it is responsible for feature extraction from the input image.
* The middle layer between the encoder and the decoder is called the bottleneck representation of the input image. It is the high-level feature representation of the input data, e.g. objects and events. Using the information retrieved from each layer, we reconstruct an output image with the same size as the input image. For that, we utilize the decoder of the U-Net.
* The decoder upsamples the bottleneck representation to recover spatial locations (i.e. pixel-wise information) and assigns a class label to each pixel, so the output image has the same size as the input image. During upsampling, we also add the features extracted by the encoder via skip connections: we cannot afford to lose much information during encoding-decoding, and low-level features such as edges help us classify pixels more accurately. A minimal sketch of such a skip connection is shown below.
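To make the skip-connection idea concrete, here is a minimal, hypothetical PyTorch sketch (not the model we build later in this notebook): one encoder feature map is concatenated with the upsampled decoder feature map before the next convolution. The tensor sizes are made up for illustration.
```
import torch
import torch.nn as nn

# Toy tensors standing in for one encoder stage and the decoder input below it.
encoder_feat = torch.randn(1, 64, 64, 64)   # features saved on the way down
decoder_feat = torch.randn(1, 128, 32, 32)  # features coming back up

up = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)  # upsample 32x32 -> 64x64
fuse = nn.Conv2d(64 + 64, 64, kernel_size=3, padding=1)    # convolve after concatenation

x = up(decoder_feat)                     # (1, 64, 64, 64)
x = torch.cat([x, encoder_feat], dim=1)  # the skip connection: (1, 128, 64, 64)
x = fuse(x)                              # (1, 64, 64, 64)
print(x.shape)
```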
### 3. Imports and Checks
You should have installed NumPy and Matplotlib using `pip`, and PyTorch using [Week 0 - Notebook 2](https://github.com/inzva/Applied-AI-Study-Group/blob/add-frameworks-week/Applied%20AI%20Study%20Group%20%236%20-%20January%202022/Week%200/2-mnist_classification_convnet_pytorch.ipynb).
The Python file `segmentation_dataset.py` and the MSRC-v2 image dataset can be found at the following link: [Segmentation](https://drive.google.com/drive/u/1/folders/18bKQKwvjuXbjNDMI-ktRD6tet9tcxNkL)
```
import numpy as np
import matplotlib.pyplot as plt
import os
import torch
from datasets.segmentation_dataset import SegmentationData, label_img_to_rgb
```
If the following two code cells run successfully, then you are good to go!
```
print(torch.__version__)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
```
We will load the data using the code provided in segmentation_dataset.py
```
data_root = os.path.join('../../../applied ai datasets/datasets','segmentation')
train_data = SegmentationData(image_paths_file=f'{data_root}/segmentation_data/train.txt')
val_data = SegmentationData(image_paths_file=f'{data_root}/segmentation_data/val.txt')
test_data = SegmentationData(image_paths_file=f'{data_root}/segmentation_data/test.txt')
```
The first print calls are for double-checking the data loading.
Then, we visualize a couple of example images to observe what our input images look like and what our output images should look like.
```
print("Train size: %i" % len(train_data))
print("Validation size: %i" % len(val_data))
print("Img size: ", train_data[0][0].size())
print("Segmentation size: ", train_data[0][1].size())
num_example_imgs = 4
plt.figure(figsize=(10, 5 * num_example_imgs))
for i, (img, target) in enumerate(train_data[:num_example_imgs]):
# img
plt.subplot(num_example_imgs, 2, i * 2 + 1)
plt.imshow(img.numpy().transpose(1,2,0))
plt.axis('off')
if i == 0:
plt.title("Input image")
# target
plt.subplot(num_example_imgs, 2, i * 2 + 2)
plt.imshow(label_img_to_rgb(target.numpy()))
plt.axis('off')
if i == 0:
plt.title("Target image")
plt.show()
```
We will build our model in the following cell. Since semantic segmentation is a challenging task, we will use a pretrained model from torchvision as the encoder of our U-Net architecture. Then, we will build our decoder using Upsample, ConvTranspose2d, and LeakyReLU activation layers on top of the MobileNet encoder.
One important note on the pretrained model: we use [MobileNet](https://arxiv.org/pdf/1704.04861.pdf) for its fast inference time; we don't need a more complex, slower model for the purposes of this notebook. You can check the models provided by [torchvision](https://pytorch.org/vision/0.8/models.html) to try out different backbones. MobileNet is trained on the [ImageNet dataset](https://www.image-net.org/index.php) for the classification task. Hence, you can see that we exclude its last layer: we don't need the classifier head, but we do need the rest of the network for feature extraction.
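As a quick, optional sanity check, you can inspect the output shape of the MobileNetV2 feature extractor to see what the decoder has to upsample from. This is only a sketch; the 240x240 input size is an assumption taken from the shape comments in the model below.
```
import torch
import torchvision.models as models

backbone = models.mobilenet_v2(pretrained=True).features  # encoder part only, no classifier
backbone.eval()
with torch.no_grad():
    dummy = torch.zeros(1, 3, 240, 240)  # one fake 240x240 RGB image
    feats = backbone(dummy)
print(feats.shape)  # expected: torch.Size([1, 1280, 8, 8])
```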
```
import torch
import torch.nn as nn
import torchvision.models as models
class SegmentationNN(nn.Module):
def __init__(self, num_classes=23, hparams=None):
super().__init__()
self.hparams = hparams
mobile_network = models.mobilenet_v2(pretrained=True)
layers = list(mobile_network.children())[:-1] # 1x1280x8x8
layers.append(nn.Conv2d(1280, 120, 1, 1)) # 1x120x8x8
layers.append(nn.LeakyReLU(0.1))
layers.append(nn.Upsample(scale_factor=4)) # 1x120x32x32
layers.append(nn.ConvTranspose2d(120, 80, 3, 2)) # 1x80x65x65
layers.append(nn.LeakyReLU(0.1))
layers.append(nn.ConvTranspose2d(80, 60, 9, dilation=2)) # 1x60x81x81
layers.append(nn.LeakyReLU(0.1))
layers.append(nn.ConvTranspose2d(60, 40, 9, dilation=2)) # 1x40x97x97
layers.append(nn.LeakyReLU(0.1))
layers.append(nn.ConvTranspose2d(40, 40, 11, dilation=2)) # 1x40x117x117
layers.append(nn.LeakyReLU(0.1))
layers.append(nn.Upsample(scale_factor=2)) # 1x40x234x234
layers.append(nn.ConvTranspose2d(40, 23, 7)) # 1x23x240x240
layers.append(nn.LeakyReLU(0.1))
self.network = nn.Sequential(*layers)
def forward(self, x):
x = self.network(x)
return x
```
We specify our training hyperparameters and set up the rest of the training configuration.
```
hparams = {
"lr" : 0.001,
"batch_size" : 4,
"num_epochs" : 4
}
model = SegmentationNN(hparams=hparams)
model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=hparams["lr"])
criterion = torch.nn.CrossEntropyLoss(ignore_index=-1, reduction='mean')
train_loader = torch.utils.data.DataLoader(train_data, batch_size=hparams["batch_size"], shuffle=True)
print(train_loader)
print(next(iter(train_loader)))
```
We do a test run of the training loop on a few samples to check that nothing is wrong.
```
for (inputs, targets) in train_data[0:4]:
outputs = model(inputs.unsqueeze(0).to(device))
losses = criterion(outputs, targets.unsqueeze(0).to(device))
print(losses)
```
Now, we train our model using the classical PyTorch training loop.
```
print('training starts!')
for epoch in range(hparams["num_epochs"]):
epoch_loss = 0.0
for i, data in enumerate(train_loader):
images, labels = data[0].to(device), data[1].to(device)
optimizer.zero_grad()
predictions = model(images)
loss = criterion(predictions, labels)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
print("Epoch: %d Loss: %.3f" % (epoch + 1, epoch_loss / 276))
# TODO: add validation into training loop for per epoch
# TODO: add test code and visualize some results.
```
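As a starting point for the first TODO above, here is a minimal sketch of a per-epoch validation step. It is an assumption of how the loop could be extended (not part of the original notebook): it builds a `val_loader` from `val_data` and reuses the criterion defined earlier.
```
# Minimal validation sketch: run the model on the validation set without gradients.
val_loader = torch.utils.data.DataLoader(val_data, batch_size=hparams["batch_size"], shuffle=False)

model.eval()
val_loss = 0.0
with torch.no_grad():
    for images, labels in val_loader:
        images, labels = images.to(device), labels.to(device)
        val_loss += criterion(model(images), labels).item()
print("Validation loss: %.3f" % (val_loss / len(val_loader)))
model.train()  # switch back before the next training epoch
```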
```
import psycopg2
from sqlalchemy import create_engine
import ast
import pandas as pd
import glob
import os
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
%matplotlib inline
```
### Questions to answer
- What kind of projects are popular on Kickstarter?
- How much are people asking for?
- What kind of projects tend to be more funded?
### Connect to database
```
dbname="kick"
tblname="info"
engine = create_engine(
'postgresql://localhost:5432/{dbname}'.format(dbname=dbname))
# Connect to database
conn = psycopg2.connect(dbname=dbname)
cur = conn.cursor()
```
Remind myself of the columns in the table:
```
cur.execute("SELECT column_name,data_type FROM information_schema.columns WHERE table_name = '{table}';".format(table=tblname))
rows = cur.fetchall()
pd.DataFrame(rows, columns=["column_name", "data_type"])
```
Number of records in table:
```
cur.execute("SELECT COUNT(*) from {table}".format(table=tblname))
cur.fetchone()
```
---
### Question 1: Project topics
- How many different types of projects are on Kickstarter?
- What is most popular?
- What is most rare?
```
cur.execute("SELECT topic, COUNT(*) from {table} GROUP BY topic ORDER BY count DESC;".format(table=tblname))
rows = cur.fetchall()
df = pd.DataFrame(rows, columns=["topic", "count"])
# Plot findings
plt.rcParams["figure.figsize"] = [17,5]
df.plot(kind="bar", x="topic", y="count", legend=False)
plt.ylabel("Kickstarter projects")
plt.xlabel("Topic")
plt.title("Kickstarter projects by topic")
plt.tick_params(axis='x', labelsize=7)
"There are {num_topics} different types of Kickstarter projects".format(num_topics=df.shape[0])
# Most popular project topic is
df[df["count"] == df["count"].max()]
# Most rare project topic is
df[df["count"] == df["count"].min()]
```
What are the rare projects?
```
cur.execute("SELECT id, blurb, goal*static_usd_rate as goal_usd FROM {table} WHERE topic = '{topic}'".format(table=tblname, topic="Taxidermy"))
rows = cur.fetchall()
for row in rows:
row_id, blurb, goal = row
print(">>> $%d | id: %s" % (goal, row_id),
blurb, sep="\n")
```
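A side note on query construction: the queries in this notebook are built with `str.format`, which is fine for a personal analysis but breaks if a value ever contains a quote. psycopg2 can bind values itself with `%s` placeholders; here is a sketch of the same query written that way (the table name `info` stays hard-coded, since identifiers cannot be bound):
```
# Same taxidermy query, but letting psycopg2 bind the topic value.
sql = "SELECT id, blurb, goal*static_usd_rate AS goal_usd FROM info WHERE topic = %s"
cur.execute(sql, ("Taxidermy",))
rows = cur.fetchall()
```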
### Question 2: Project funding goals
- How much are people asking for in general? by topics?
```
sql = "SELECT id, topic, goal*static_usd_rate as goal_usd FROM {table}".format(table=tblname)
cur.execute(sql)
rows = cur.fetchall()
df = pd.DataFrame(rows, columns=["id", "topic", "goal_usd"])
# Asking average
np.log10(df.goal_usd).plot.kde()
plt.xlabel("log(funding goal in USD)")
"Most projects are asking for: $%d - $%d" % (10**2.5, 10**5)
sns.barplot(x="topic", y="goal_usd",
data=df.groupby("topic").mean().reset_index().sort_values(by="goal_usd", ascending=False))
_ = plt.xticks(rotation='vertical')
plt.ylabel("Average goal (USD)")
plt.xlabel("Kickstarter project topic")
plt.title("Funding goals on Kickstarter by topic")
plt.tick_params(axis='x', labelsize=7)
```
"Movie Theaters" and "Space exploration" have the average higest funding goals
### Question 3: Funding success
What tends to get funded?
```
sql = "SELECT id, topic, goal, pledged, pledged/goal as progress FROM info ORDER BY progress DESC;"
cur.execute(sql)
rows = cur.fetchall()
df = pd.DataFrame(rows, columns=["id", "topic", "goal", "pledged", "progress"])
df["well_funded"] = df.progress >= 1
plt.rcParams["figure.figsize"] = [17,5]
sns.boxplot(x="topic", y="progress", data=df[df.well_funded].sort_values(by="topic"))
_ = plt.xticks(rotation='vertical')
plt.yscale('log')
plt.ylabel("Percent of funding goal")
plt.xlabel("Topic")
plt.title("Projects that were successfully funded by Topic")
plt.tick_params(axis='x', labelsize=7)
sns.barplot(x="topic", y="progress",
data=df[df.well_funded].groupby("topic").count().reset_index().sort_values(by="progress", ascending=False))
_ = plt.xticks(rotation='vertical')
plt.ylabel("Project that were successfully funded")
plt.xlabel("Topic")
plt.title("Projects that were successfully funded by Topic")
plt.tick_params(axis='x', labelsize=7)
plt.rcParams["figure.figsize"] = [17,5]
sns.boxplot(x="topic", y="progress",
data=df[np.invert(df.well_funded)].sort_values(by="topic"))
_ = plt.xticks(rotation='vertical')
plt.ylabel("Percent of funding goal met")
plt.xlabel("Topic")
plt.title("Pojects that have yet to meet their funding goals")
plt.tick_params(axis='x', labelsize=7)
sns.barplot(x="topic", y="progress",
data=df[np.invert(df.well_funded)].groupby("topic").count().reset_index().sort_values(by="progress", ascending=False))
_ = plt.xticks(rotation='vertical')
plt.ylabel("Project that were not yet successfully funded")
plt.xlabel("Topic")
plt.title("Pojects that have yet to meet their funding goals")
plt.tick_params(axis='x', labelsize=7)
```
### Close connection
```
# close communication with the PostgreSQL database server
cur.close()
# commit the changes
conn.commit()
# close connection
conn.close()
```
```
import numpy as np
def reweight_distribution(original_distribution, temperature=0.5):
distribution = np.log(original_distribution) / temperature
distribution = np.exp(distribution)
return distribution / np.sum(distribution)
text = """
PREFACE
SUPPOSING that Truth is a woman--what then? Is there not ground
for suspecting that all philosophers, in so far as they have been
dogmatists, have failed to understand women--that the terrible
seriousness and clumsy importunity with which they have usually paid
their addresses to Truth, have been unskilled and unseemly methods for
winning a woman? Certainly she has never allowed herself to be won; and
at present every kind of dogma stands with sad and discouraged mien--IF,
indeed, it stands at all! For there are scoffers who maintain that it
has fallen, that all dogma lies on the ground--nay more, that it is at
its last gasp. But to speak seriously, there are good grounds for hoping
that all dogmatizing in philosophy, whatever solemn, whatever conclusive
and decided airs it has assumed, may have been only a noble puerilism
and tyronism; and probably the time is at hand when it will be once
and again understood WHAT has actually sufficed for the basis of such
imposing and absolute philosophical edifices as the dogmatists have
hitherto reared: perhaps some popular superstition of immemorial time
(such as the soul-superstition, which, in the form of subject- and
ego-superstition, has not yet ceased doing mischief): perhaps some
play upon words, a deception on the part of grammar, or an
audacious generalization of very restricted, very personal, very
human--all-too-human facts. The philosophy of the dogmatists, it is to
be hoped, was only a promise for thousands of years afterwards, as was
astrology in still earlier times, in the service of which probably more
labour, gold, acuteness, and patience have been spent than on any
actual science hitherto: we owe to it, and to its "super-terrestrial"
pretensions in Asia and Egypt, the grand style of architecture. It seems
that in order to inscribe themselves upon the heart of humanity with
everlasting claims, all great things have first to wander about the
earth as enormous and awe-inspiring caricatures: dogmatic philosophy has
been a caricature of this kind--for instance, the Vedanta doctrine in
Asia, and Platonism in Europe. Let us not be ungrateful to it, although
it must certainly be confessed that the worst, the most tiresome,
and the most dangerous of errors hitherto has been a dogmatist
error--namely, Plato's invention of Pure Spirit and the Good in Itself.
But now when it has been surmounted, when Europe, rid of this nightmare,
can again draw breath freely and at least enjoy a healthier--sleep,
we, WHOSE DUTY IS WAKEFULNESS ITSELF, are the heirs of all the strength
which the struggle against this error has fostered. It amounted to
the very inversion of truth, and the denial of the PERSPECTIVE--the
fundamental condition--of life, to speak of Spirit and the Good as Plato
spoke of them; indeed one might ask, as a physician: "How did such a
malady attack that finest product of antiquity, Plato? Had the wicked
Socrates really corrupted him? Was Socrates after all a corrupter of
youths, and deserved his hemlock?" But the struggle against Plato,
or--to speak plainer, and for the "people"--the struggle against
the ecclesiastical oppression of millenniums of Christianity (FOR
CHRISTIANITY IS PLATONISM FOR THE "PEOPLE"), produced in Europe
a magnificent tension of soul, such as had not existed anywhere
previously; with such a tensely strained bow one can now aim at the
furthest goals. As a matter of fact, the European feels this tension as
a state of distress, and twice attempts have been made in grand style to
unbend the bow: once by means of Jesuitism, and the second time by means
of democratic enlightenment--which, with the aid of liberty of the press
and newspaper-reading, might, in fact, bring it about that the spirit
would not so easily find itself in "distress"! (The Germans invented
gunpowder--all credit to them! but they again made things square--they
invented printing.) But we, who are neither Jesuits, nor democrats,
nor even sufficiently Germans, we GOOD EUROPEANS, and free, VERY free
spirits--we have it still, all the distress of spirit and all the
tension of its bow! And perhaps also the arrow, the duty, and, who
knows? THE GOAL TO AIM AT....
""".lower()
print(len(text))
text
```
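A small illustration (not one of the book's listings) of what `reweight_distribution` does: low temperatures sharpen a distribution toward its most likely entry, while high temperatures flatten it toward uniform.
```
# Reweight a toy 4-way distribution at several temperatures.
probs = np.array([0.5, 0.3, 0.15, 0.05])
for t in [0.2, 0.5, 1.0, 1.5]:
    print(t, np.round(reweight_distribution(probs, temperature=t), 3))
```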
## Listing 8.2 Downloading and parsing the initial text file
```
import keras
import numpy as np
path = keras.utils.get_file('nietzsche.txt',
origin='https://s3.amazonaws.com/text-datasets/nietzsche.txt')
text = open(path).read().lower()
print('Corpus length:', len(text))
```
## Listing 8.3 Vectorizing sequences of characters
```
maxlen = 60
step = 3
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
sentences.append(text[i: i + maxlen])
next_chars.append(text[i + maxlen])
print('Number of sequences:', len(sentences))
chars = sorted(list(set(text)))
print('Unique characters:', len(chars))
char_indices = dict((char, chars.index(char)) for char in chars)
print('Vectorization...')
x = np.zeros((len(sentences), maxlen, len(chars)), dtype=bool)
y = np.zeros((len(sentences), len(chars)), dtype=bool)
y[2]
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
x[i, t, char_indices[char]] = 1
y[i, char_indices[next_chars[i]]] = 1
y[0]
```
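A quick sanity check (not part of the original listing): decode the first one-hot-encoded sequence back into characters and compare it with the raw text.
```
# x[0] holds the first maxlen characters as one-hot vectors; argmax recovers each index.
decoded = ''.join(chars[np.argmax(step)] for step in x[0])
print(decoded)
print(text[:maxlen])           # should match the decoded string
print(chars[np.argmax(y[0])])  # the character the model is trained to predict next
```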
## Listing 8.4 Single-layer LSTM model for next-character prediction
```
import keras
from keras import layers
model = keras.models.Sequential()
model.add(layers.LSTM(128, input_shape=(maxlen, len(chars))))
model.add(layers.Dense(len(chars), activation='softmax'))
```
## Listing 8.5 Model compilation configuration
```
optimizer = keras.optimizers.RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
```
## Listing 8.6 Function to sample the next character given the model’s predictions
```
def sample(preds, temperature=1.0):
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
```
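A quick illustration (again, not from the book's listings) of how temperature changes the behaviour of `sample`: with a toy distribution, low temperature almost always returns the most likely index, while higher temperatures give more varied draws.
```
# Draw 20 indices from a toy distribution at three temperatures.
toy_preds = np.array([0.6, 0.25, 0.1, 0.05])
for t in [0.2, 1.0, 1.5]:
    print(t, [sample(toy_preds, temperature=t) for _ in range(20)])
```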
## Listing 8.7 Text-generation loop
```
import random
import sys
model.fit(x, y, batch_size=128, epochs=10)
start_index = random.randint(0, len(text) - maxlen - 1)
generated_text = text[start_index: start_index + maxlen]
print('--- Generating with seed: "' + generated_text + '"')
temperature = 0.5
for i in range(400):
sampled = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(generated_text):
sampled[0, t, char_indices[char]] = 1.
preds = model.predict(sampled, verbose=0)[0]
next_index = sample(preds, temperature)
next_char = chars[next_index]
generated_text += next_char
generated_text = generated_text[1:]
sys.stdout.write(next_char)
import random
import sys
for epoch in range(1, 60):
print('epoch', epoch)
model.fit(x, y, batch_size=128, epochs=1)
start_index = random.randint(0, len(text) - maxlen - 1)
generated_text = text[start_index: start_index + maxlen]
print('--- Generating with seed: "' + generated_text + '"')
for temperature in [0.2, 0.5, 1.0, 1.2]:
print('------ temperature:', temperature)
sys.stdout.write(generated_text)
for i in range(400):
sampled = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(generated_text):
sampled[0, t, char_indices[char]] = 1.
preds = model.predict(sampled, verbose=0)[0]
next_index = sample(preds, temperature)
next_char = chars[next_index]
generated_text += next_char
generated_text = generated_text[1:]
sys.stdout.write(next_char)
```
# MODULE 0 - INTRODUCTION
# --- EXPLORE THE DATA -------------------------------------
```
# Load text file into local variable called 'data'
data = read.delim(file = 'purchases.txt', header = FALSE, sep = '\t', dec = '.')
data
# Display what has been loaded
head(data)
summary(data)
# Add headers and interpret the last column as a date, extract year of purchase
colnames(data) = c('customer_id', 'purchase_amount', 'date_of_purchase')
data$date_of_purchase = as.Date(data$date_of_purchase, "%Y-%m-%d")
data$year_of_purchase = as.numeric(format(data$date_of_purchase, "%Y"))
# Display the data set after transformation
head(data)
summary(data)
# Explore the data using simple SQL statements
library(sqldf)
# Number of purchases per year
x = sqldf("SELECT year_of_purchase, COUNT(year_of_purchase) AS 'counter' FROM data GROUP BY 1 ORDER BY 1")
barplot(x$counter, names.arg = x$year_of_purchase)
x
# Average purchase amount per year
x = sqldf("SELECT year_of_purchase, AVG(purchase_amount) AS 'avg_amount' FROM data GROUP BY 1 ORDER BY 1")
barplot(x$avg_amount, names.arg = x$year_of_purchase)
x
# Total purchase amounts per year
x = sqldf("SELECT year_of_purchase, SUM(purchase_amount) AS 'sum_amount' FROM data GROUP BY 1 ORDER BY 1")
barplot(x$sum_amount, names.arg = x$year_of_purchase)
# All in one
x = sqldf("SELECT year_of_purchase,
COUNT(year_of_purchase) AS 'counter',
AVG(purchase_amount) AS 'avg_amount',
SUM(purchase_amount) AS 'sum_amount'
FROM data GROUP BY 1 ORDER BY 1")
print(x)
```
# MODULE 1 - STATISTICAL SEGMENTATION
```
# Load text file into local variable called 'data'
data = read.delim(file = 'purchases.txt', header = FALSE, sep = '\t', dec = '.')
# Add headers and interpret the last column as a date, extract year of purchase
colnames(data) = c('customer_id', 'purchase_amount', 'date_of_purchase')
data$date_of_purchase = as.Date(data$date_of_purchase, "%Y-%m-%d")
data$days_since = as.numeric(difftime(time1 = "2016-01-01",
time2 = data$date_of_purchase,
units = "days"))
# Display the data after transformation
head(data)
summary(data)
# Compute key marketing indicators using SQL language
library(sqldf)
# Compute recency, frequency, and average purchase amount
customers = sqldf("SELECT customer_id,
MIN(days_since) AS 'recency',
COUNT(*) AS 'frequency',
AVG(purchase_amount) AS 'amount'
FROM data GROUP BY 1")
customers
# Explore the data
head(customers)
summary(customers)
hist(customers$recency)
hist(customers$frequency)
hist(customers$amount)
hist(customers$amount, breaks = 100)
```
# --- PREPARING AND TRANSFORMING DATA ----------------------
```
# Copy customer data into new data frame
new_data = customers
# Remove customer id as a variable, store it as row names
head(new_data)
row.names(new_data) = new_data$customer_id
new_data$customer_id = NULL
head(new_data)
# Take the log-transform of the amount, and plot
new_data$amount = log(new_data$amount)
hist(new_data$amount)
new_data
# Standardize variables
new_data = scale(new_data)
head(new_data)
new_data
```
# --- RUNNING A HIERARCHICAL SEGMENTATION ------------------
```
# Compute distance metrics on standardized data
# This will likely generate an error on most machines
# d = dist(new_data)
# Take a 10% sample
sample = seq(1, 18417, by = 10)
head(sample)
customers_sample = customers[sample, ]
new_data_sample = new_data[sample, ]
# Compute distance metrics on standardized data
d = dist(new_data_sample)
# Perform hierarchical clustering on distance metrics
c = hclust(d, method="ward.D2")
# Plot the dendrogram
plot(c)
# Cut at 9 segments
members = cutree(c, k = 9)
# Show 30 first customers, frequency table
members[1:30]
table(members)
# Show profile of each segment
aggregate(customers_sample[, 2:4], by = list(members), mean)
```
# MODULE 2 - MANAGERIAL SEGMENTATION
# --- COMPUTING RECENCY, FREQUENCY, MONETARY VALUE ---------
```
# Load text file into local variable called 'data'
data = read.delim(file = 'purchases.txt', header = FALSE, sep = '\t', dec = '.')
# Add headers and interpret the last column as a date, extract year of purchase
colnames(data) = c('customer_id', 'purchase_amount', 'date_of_purchase')
data$date_of_purchase = as.Date(data$date_of_purchase, "%Y-%m-%d")
data$year_of_purchase = as.numeric(format(data$date_of_purchase, "%Y"))
data$days_since = as.numeric(difftime(time1 = "2016-01-01",
time2 = data$date_of_purchase,
units = "days"))
# Display the data after transformation
head(data)
summary(data)
# Compute key marketing indicators using SQL language
library(sqldf)
# Compute recency, frequency, and average purchase amount
customers_2015 = sqldf("SELECT customer_id,
MIN(days_since) AS 'recency',
MAX(days_since) AS 'first_purchase',
COUNT(*) AS 'frequency',
AVG(purchase_amount) AS 'amount'
FROM data GROUP BY 1")
# Explore the data
head(customers_2015)
summary(customers_2015)
hist(customers_2015$recency)
hist(customers_2015$frequency)
hist(customers_2015$amount)
hist(customers_2015$amount, breaks = 100)
```
# --- CODING A MANAGERIAL SEGMENTATION ---------------------
```
# Simple 2-segment solution based on recency alone
customers_2015$segment = ifelse(test = customers_2015$recency > 365*3, yes = "inactive", no = "NA")
table(customers_2015$segment)
aggregate(x = customers_2015[, 2:5], by = list(customers_2015$segment), mean)
# A more complex 3-segment solution based on recency alone
customers_2015$segment = ifelse(test = customers_2015$recency > 365*3,
yes = "inactive",
no = ifelse(test = customers_2015$recency > 365*2,
yes = "cold",
no = "NA"))
table(customers_2015$segment)
aggregate(x = customers_2015[, 2:5], by = list(customers_2015$segment), mean)
# Simple 2-segment solution using the which statement
customers_2015$segment = "NA"
customers_2015$segment[which(customers_2015$recency > 365*3)] = "inactive"
table(customers_2015$segment)
aggregate(x = customers_2015[, 2:5], by = list(customers_2015$segment), mean)
# More complex 4-segment solution using which
customers_2015$segment = "NA"
customers_2015$segment[which(customers_2015$recency > 365*3)] = "inactive"
customers_2015$segment[which(customers_2015$recency <= 365*3 & customers_2015$recency > 365*2)] = "cold"
customers_2015$segment[which(customers_2015$recency <= 365*2 & customers_2015$recency > 365*1)] = "warm"
customers_2015$segment[which(customers_2015$recency <= 365)] = "active"
table(customers_2015$segment)
aggregate(x = customers_2015[, 2:5], by = list(customers_2015$segment), mean)
# Complete segment solution using which, and exploiting previous test as input
customers_2015$segment = "NA"
customers_2015$segment[which(customers_2015$recency > 365*3)] = "inactive"
customers_2015$segment[which(customers_2015$recency <= 365*3 & customers_2015$recency > 365*2)] = "cold"
customers_2015$segment[which(customers_2015$recency <= 365*2 & customers_2015$recency > 365*1)] = "warm"
customers_2015$segment[which(customers_2015$recency <= 365)] = "active"
customers_2015$segment[which(customers_2015$segment == "warm" & customers_2015$first_purchase <= 365*2)] = "new warm"
customers_2015$segment[which(customers_2015$segment == "warm" & customers_2015$amount < 100)] = "warm low value"
customers_2015$segment[which(customers_2015$segment == "warm" & customers_2015$amount >= 100)] = "warm high value"
customers_2015$segment[which(customers_2015$segment == "active" & customers_2015$first_purchase <= 365)] = "new active"
customers_2015$segment[which(customers_2015$segment == "active" & customers_2015$amount < 100)] = "active low value"
customers_2015$segment[which(customers_2015$segment == "active" & customers_2015$amount >= 100)] = "active high value"
table(customers_2015$segment)
aggregate(x = customers_2015[, 2:5], by = list(customers_2015$segment), mean)
# Re-order factor in a way that makes sense
customers_2015$segment = factor(x = customers_2015$segment, levels = c("inactive", "cold",
"warm high value", "warm low value", "new warm",
"active high value", "active low value", "new active"))
table(customers_2015$segment)
aggregate(x = customers_2015[, 2:5], by = list(customers_2015$segment), mean)
```
# --- SEGMENTING A DATABASE RETROSPECTIVELY ----------------
```
# Compute key marketing indicators using SQL language
library(sqldf)
# Compute recency, frequency, and average purchase amount
customers_2014 = sqldf("SELECT customer_id,
MIN(days_since) - 365 AS 'recency',
MAX(days_since) - 365 AS 'first_purchase',
COUNT(*) AS 'frequency',
AVG(purchase_amount) AS 'amount'
FROM data
WHERE days_since > 365
GROUP BY 1")
customers_2015
# Complete segment solution using which, and exploiting previous test as input
customers_2014$segment = "NA"
customers_2014$segment[which(customers_2014$recency > 365*3)] = "inactive"
customers_2014$segment[which(customers_2014$recency <= 365*3 & customers_2014$recency > 365*2)] = "cold"
customers_2014$segment[which(customers_2014$recency <= 365*2 & customers_2014$recency > 365*1)] = "warm"
customers_2014$segment[which(customers_2014$recency <= 365)] = "active"
customers_2014$segment[which(customers_2014$segment == "warm" & customers_2014$first_purchase <= 365*2)] = "new warm"
customers_2014$segment[which(customers_2014$segment == "warm" & customers_2014$amount < 100)] = "warm low value"
customers_2014$segment[which(customers_2014$segment == "warm" & customers_2014$amount >= 100)] = "warm high value"
customers_2014$segment[which(customers_2014$segment == "active" & customers_2014$first_purchase <= 365)] = "new active"
customers_2014$segment[which(customers_2014$segment == "active" & customers_2014$amount < 100)] = "active low value"
customers_2014$segment[which(customers_2014$segment == "active" & customers_2014$amount >= 100)] = "active high value"
# Re-order factor in a way that makes sense
customers_2014$segment = factor(x = customers_2014$segment, levels = c("inactive", "cold",
"warm high value", "warm low value", "new warm",
"active high value", "active low value", "new active"))
# Show segmentation results
table(customers_2014$segment)
pie(table(customers_2014$segment), col = rainbow(24))
aggregate(x = customers_2014[, 2:5], by = list(customers_2014$segment), mean)
```
# --- COMPUTING REVENUE GENERATION PER SEGMENT -------------
```
# Compute how much revenue is generated by segments
# Notice that people with no revenue in 2015 do NOT appear
revenue_2015 = sqldf("SELECT customer_id, SUM(purchase_amount) AS 'revenue_2015'
FROM data
WHERE year_of_purchase = 2015
GROUP BY 1")
summary(revenue_2015)
# Merge 2015 customers and 2015 revenue (the wrong way)
actual = merge(customers_2015, revenue_2015)
actual
# Merge 2015 customers and 2015 revenue (correct)
actual = merge(customers_2015, revenue_2015, all.x = TRUE)
actual$revenue_2015[is.na(actual$revenue_2015)] = 0
# Show average revenue per customer and per segment
aggregate(x = actual$revenue_2015, by = list(customers_2015$segment), mean)
# Merge 2014 customers and 2015 revenue (correct)
forward = merge(customers_2014, revenue_2015, all.x = TRUE)
forward$revenue_2015[is.na(forward$revenue_2015)] = 0
forward
revenue_2015
# Show average revenue per customer and per segment
r = aggregate(x = forward$revenue_2015, by = list(customers_2014$segment), mean)
print(r)
# Re-order and display results
r = r[order(r$x, decreasing = TRUE), ]
print(r)
barplot(r$x, names.arg = r$Group.1)
```
# MODULE 3 - SCORING
# --- COMPUTING PREDICTORS AND TARGET VARIABLES ------------
```
# Load text file into local variable called 'data'
data = read.delim(file = 'purchases.txt', header = FALSE, sep = '\t', dec = '.')
# Add headers and interpret the last column as a date, extract year of purchase
colnames(data) = c('customer_id', 'purchase_amount', 'date_of_purchase')
data$date_of_purchase = as.Date(data$date_of_purchase, "%Y-%m-%d")
data$year_of_purchase = as.numeric(format(data$date_of_purchase, "%Y"))
data$days_since = as.numeric(difftime(time1 = "2016-01-01",
time2 = data$date_of_purchase,
units = "days"))
# Compute key marketing indicators using SQL language
library(sqldf)
# Compute RFM variables as of a year ago
customers_2014 = sqldf("SELECT customer_id,
MIN(days_since) - 365 AS 'recency',
MAX(days_since) - 365 AS 'first_purchase',
COUNT(*) AS 'frequency',
AVG(purchase_amount) AS 'avg_amount',
MAX(purchase_amount) AS 'max_amount'
FROM data
WHERE days_since > 365
GROUP BY 1")
customers_2014
# Compute revenues generated by customers in 2015
revenue_2015 = sqldf("SELECT customer_id, SUM(purchase_amount) AS 'revenue_2015'
FROM data
WHERE year_of_purchase = 2015
GROUP BY 1")
revenue_2015
# Merge 2015 customers and 2015 revenue
in_sample = merge(customers_2014, revenue_2015, all.x = TRUE)
in_sample$revenue_2015[is.na(in_sample$revenue_2015)] = 0
in_sample$active_2015 = as.numeric(in_sample$revenue_2015 > 0)
# Display calibration (in-sample) data
head(in_sample)
summary(in_sample)
```
# --- CALIBRATE THE MODELS ---------------------------------
```
# Calibrate probability model
library(nnet)
prob.model = multinom(formula = active_2015 ~ recency + first_purchase + frequency + avg_amount + max_amount,
data = in_sample)
coef = summary(prob.model)$coefficients
std = summary(prob.model)$standard.errors
print(coef)
print(std)
print(coef / std)
summary(prob.model)
# For the monetary model, select only those who made a purchase
z = which(in_sample$active_2015 == 1)
head(in_sample[z, ])
summary(in_sample[z, ])
# Calibrate the monetary model (version 1)
amount.model = lm(formula = revenue_2015 ~ avg_amount + max_amount, data = in_sample[z, ])
summary(amount.model)
# Plot the results of the monetary model
plot(x = in_sample[z, ]$revenue_2015, y = amount.model$fitted.values)
# Re-calibrate the monetary model, using a log-transform (version 2)
amount.model = lm(formula = log(revenue_2015) ~ log(avg_amount) + log(max_amount), data = in_sample[z, ])
summary(amount.model)
# Plot the results of this new monetary model
plot(x = log(in_sample[z, ]$revenue_2015), y = amount.model$fitted.values)
```
# --- APPLY THE MODELS TO TODAY'S DATA ---------------------
```
# Compute RFM variables as of today
customers_2015 = sqldf("SELECT customer_id,
MIN(days_since) AS 'recency',
MAX(days_since) AS 'first_purchase',
COUNT(*) AS 'frequency',
AVG(purchase_amount) AS 'avg_amount',
MAX(purchase_amount) AS 'max_amount'
FROM data GROUP BY 1")
customers_2015
# Predict the target variables based on today's data
customers_2015$prob_predicted = predict(object = prob.model, newdata = customers_2015, type = "probs")
customers_2015$revenue_predicted = exp(predict(object = amount.model, newdata = customers_2015))
customers_2015$score_predicted = customers_2015$prob_predicted * customers_2015$revenue_predicted
summary(customers_2015$prob_predicted)
summary(customers_2015$revenue_predicted)
summary(customers_2015$score_predicted)
hist(customers_2015$score_predicted)
# How many customers have an expected revenue of more than $50
z = which(customers_2015$score_predicted > 50)
print(length(z))
```
# MODULE 4 - CUSTOMER LIFETIME VALUE
# --- SEGMENT CUSTOMERS IN 2014 AND 2015 -------------------
```
# Load text file into local variable called 'data'
data = read.delim(file = 'purchases.txt', header = FALSE, sep = '\t', dec = '.')
# Add headers and interpret the last column as a date, extract year of purchase
colnames(data) = c('customer_id', 'purchase_amount', 'date_of_purchase')
data$date_of_purchase = as.Date(data$date_of_purchase, "%Y-%m-%d")
data$year_of_purchase = as.numeric(format(data$date_of_purchase, "%Y"))
data$days_since = as.numeric(difftime(time1 = "2016-01-01",
time2 = data$date_of_purchase,
units = "days"))
# Invoke library to compute key marketing indicators using SQL language
library(sqldf)
# Segment customers in 2015
customers_2015 = sqldf("SELECT customer_id,
MIN(days_since) AS 'recency',
MAX(days_since) AS 'first_purchase',
COUNT(*) AS 'frequency',
AVG(purchase_amount) AS 'amount'
FROM data GROUP BY 1")
customers_2015$segment = "NA"
customers_2015$segment[which(customers_2015$recency > 365*3)] = "inactive"
customers_2015$segment[which(customers_2015$recency <= 365*3 & customers_2015$recency > 365*2)] = "cold"
customers_2015$segment[which(customers_2015$recency <= 365*2 & customers_2015$recency > 365*1)] = "warm"
customers_2015$segment[which(customers_2015$recency <= 365)] = "active"
customers_2015$segment[which(customers_2015$segment == "warm" & customers_2015$first_purchase <= 365*2)] = "new warm"
customers_2015$segment[which(customers_2015$segment == "warm" & customers_2015$amount < 100)] = "warm low value"
customers_2015$segment[which(customers_2015$segment == "warm" & customers_2015$amount >= 100)] = "warm high value"
customers_2015$segment[which(customers_2015$segment == "active" & customers_2015$first_purchase <= 365)] = "new active"
customers_2015$segment[which(customers_2015$segment == "active" & customers_2015$amount < 100)] = "active low value"
customers_2015$segment[which(customers_2015$segment == "active" & customers_2015$amount >= 100)] = "active high value"
customers_2015$segment = factor(x = customers_2015$segment, levels = c("inactive", "cold",
"warm high value", "warm low value", "new warm",
"active high value", "active low value", "new active"))
# Segment customers in 2014
customers_2014 = sqldf("SELECT customer_id,
MIN(days_since) - 365 AS 'recency',
MAX(days_since) - 365 AS 'first_purchase',
COUNT(*) AS 'frequency',
AVG(purchase_amount) AS 'amount'
FROM data
WHERE days_since > 365
GROUP BY 1")
customers_2014$segment = "NA"
customers_2014$segment[which(customers_2014$recency > 365*3)] = "inactive"
customers_2014$segment[which(customers_2014$recency <= 365*3 & customers_2014$recency > 365*2)] = "cold"
customers_2014$segment[which(customers_2014$recency <= 365*2 & customers_2014$recency > 365*1)] = "warm"
customers_2014$segment[which(customers_2014$recency <= 365)] = "active"
customers_2014$segment[which(customers_2014$segment == "warm" & customers_2014$first_purchase <= 365*2)] = "new warm"
customers_2014$segment[which(customers_2014$segment == "warm" & customers_2014$amount < 100)] = "warm low value"
customers_2014$segment[which(customers_2014$segment == "warm" & customers_2014$amount >= 100)] = "warm high value"
customers_2014$segment[which(customers_2014$segment == "active" & customers_2014$first_purchase <= 365)] = "new active"
customers_2014$segment[which(customers_2014$segment == "active" & customers_2014$amount < 100)] = "active low value"
customers_2014$segment[which(customers_2014$segment == "active" & customers_2014$amount >= 100)] = "active high value"
customers_2014$segment = factor(x = customers_2014$segment, levels = c("inactive", "cold",
"warm high value", "warm low value", "new warm",
"active high value", "active low value", "new active"))
```
# --- COMPUTE TRANSITION MATRIX ----------------------------
```
# Compute transition matrix
new_data = merge(x = customers_2014, y = customers_2015, by = "customer_id", all.x = TRUE)
head(new_data)
transition = table(new_data$segment.x, new_data$segment.y)
print(transition)
# Divide each row by its sum
transition = transition / rowSums(transition)
print(transition)
```
# --- USE TRANSITION MATRIX TO MAKE PREDICTIONS ------------
```
# Initialize a matrix with the number of customers in each segment today and after 10 periods
segments = matrix(nrow = 8, ncol = 11)
segments[, 1] = table(customers_2015$segment)
colnames(segments) = 2015:2025
row.names(segments) = levels(customers_2015$segment)
print(segments)
# Compute the forecast for each and every period
for (i in 2:11) {
segments[, i] = segments[, i-1] %*% transition
}
# Plot how the inactive and cold segments evolve over time
barplot(segments[1, ])
barplot(segments[2, ])
# Display how segments will evolve over time
print(round(segments))
```
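In equation form, the loop above propagates the row vector of segment counts through the (row-stochastic) transition matrix, one period at a time:
$$
n_{t} = n_{t-1}\, T, \qquad t = 2016, \dots, 2025,
$$
where $n_{t}$ is the $1 \times 8$ row vector of segment sizes in year $t$ (initialised with the 2015 counts) and $T$ is the $8 \times 8$ transition matrix estimated above from the 2014-to-2015 transitions.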
# --- COMPUTE THE (DISCOUNTED) CLV OF A DATABASE -----------
```
# Yearly revenue per segment
# This comes directly from module 2, lines 160-161
yearly_revenue = c(0, 0, 0, 0, 0, 323.57, 52.31, 79.17)
# Compute revenue per segment
revenue_per_segment = yearly_revenue * segments
print(revenue_per_segment)
# Compute yearly revenue
yearly_revenue = colSums(revenue_per_segment)
print(round(yearly_revenue))
barplot(yearly_revenue)
# Compute cumulated revenue
cumulated_revenue = cumsum(yearly_revenue)
print(round(cumulated_revenue))
barplot(cumulated_revenue)
# Create a discount factor
discount_rate = 0.10
discount = 1 / ((1 + discount_rate) ^ ((1:11) - 1))
print(discount)
# Compute discounted yearly revenue
disc_yearly_revenue = yearly_revenue * discount
print(round(disc_yearly_revenue))
barplot(disc_yearly_revenue)
lines(yearly_revenue)
# Compute discounted cumulated revenue
disc_cumulated_revenue = cumsum(disc_yearly_revenue)
print(round(disc_cumulated_revenue))
barplot(disc_cumulated_revenue)
# What is the database worth?
print(disc_cumulated_revenue[11] - yearly_revenue[1])
```
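In equation form, the value printed on the last line is the discounted (net present) value of the revenue the database is expected to generate over the following ten periods, with today's revenue excluded:
$$
\text{database value} = \sum_{t=1}^{11} \frac{R_t}{(1+r)^{t-1}} - R_1 = \sum_{t=2}^{11} \frac{R_t}{(1+r)^{t-1}},
$$
where $R_t$ is the expected yearly revenue of period $t$ (with $t=1$ corresponding to 2015) and $r = 0.10$ is the discount rate used above.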
# Casa da Pedra:
## Exploratory Data Analysis
Analysis of the visitor-log data using Pandas and Matplotlib.
The data were provided by the Casa da Pedra team, located in the municipality of Santana do Cariri, in the south of Ceará (Brazil).
### 1. Notebook setup
```
# Import the required packages
from __future__ import division
import numpy as np
import pandas as pd
# Configure the Pandas output options
pd.set_option('display.notebook_repr_html', True) # Table-style display <'True'>
pd.set_option('display.max_columns', 10) # Maximum number of columns
pd.set_option('display.max_rows', 20) # Maximum number of rows
# Library for plotting
import matplotlib.pyplot as plt
%matplotlib inline
%pwd # Current directory
import seaborn as sns
sns.set(color_codes=True)
```
### 2. Importing the data
```
# Load the data
df = pd.read_csv('/Users/gabrielcorreadematos/CasaDaPedra/CasaDaPedra.csv',\
                 parse_dates=True, infer_datetime_format=True)
# Preview
df
```
Missing numeric values are represented by NaN and missing time values by NaT.
These rows can be removed with the dropna() method; note that df.dropna() returns a new DataFrame and does not modify df in place.
```
df.dropna()
```
### 3. Exploratory Analysis with Pandas and Seaborn
In this step, the variables are examined and classified according to their type (categorical or numerical),
their central tendency (mean, median) and their dispersion (standard deviation).
The analysis uses Python packages that provide statistical and visualisation tools,
such as Pandas and Seaborn.
```
# Sample dimensions
print 'Numero de Linhas: ', len(df)
print 'Numero de Colunas: ', len(df.columns)
print 'Dimensoes: ', df.shape
```
Determine the variable types with the '.dtypes' attribute
```
df.dtypes
# Statistical summary of the numerical variables
df.describe()
```
Only one numerical variable (n_pessoas) was found in the $DataFrame$
```
# Dates
# Convert the date strings into Timestamps
df['entrada'] = pd.to_datetime(df['entrada'], dayfirst=True);
df['saida'] = pd.to_datetime(df['saida'], dayfirst=True);
# Compute the lengths of stay
df['periodos'] = df.saida - df.entrada
# Extract the number of days from each period and use it as a continuous variable
df['dias'] = df['periodos'].dt.days
df[['entrada', 'saida', 'periodos', 'dias']].head()
```
In this step two new variables ($features$) were created:
- Periods (a $Timedelta$)
- Days (numeric)
```
# Statistical summary of the numerical variables
df.describe()
```
### Categorical variables
Three variables in the file are categorical (strings):
- Instituição (institution)
- Nacionalidade (nationality)
- Setor (sector)
For each row in the file we have: the 'Instituição' record, which may be a teaching institution, a research institution or a company; the 'Nacionalidade', which is the country of the institution; and the 'Setor', the field of knowledge or area of study.
```
# Institution
df['instituicao'].value_counts()
# Nationality
df['nacionalidade'].value_counts()
# Sector
df['setor'].value_counts()
```
### Grouping the data
Grouping is a fundamental technique for analysing the numerical relationships between the categorical variables; in other words, it is a way to study the properties of each class of a categorical variable.
Grouping can be done with Pandas using the 'groupby' or 'pivot_table' functions, which produce new 'Series' and 'DataFrames'.
The Series created by the grouping can be reordered by the total number of people, so that the plots show the categories in ascending or descending order.
```
# Total number of people per institution
# Sorted by number of people
instituicao_list = df.pivot_table(index='instituicao', aggfunc=sum)\
.sort_values(by='n_pessoas', ascending=False)
instituicao_list
# Total number of people per field of knowledge
setor_list = df.groupby('setor')[['n_pessoas','dias']].sum()\
.sort_values(by='n_pessoas', ascending=False)
setor_list
# Total number of people per nationality
nacionalidade_list = df.groupby('nacionalidade')[['n_pessoas','dias']].sum()\
.sort_values(by='n_pessoas', ascending=False)
nacionalidade_list
```
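For reference, the same institution totals can also be obtained with `groupby`; a minimal sketch, equivalent to the `pivot_table` call above when restricted to the two numeric columns, and assuming the `df` defined earlier:
```
# Equivalent aggregation using groupby instead of pivot_table
instituicao_alt = df.groupby('instituicao')[['n_pessoas','dias']].sum()\
                    .sort_values(by='n_pessoas', ascending=False)
instituicao_alt
```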
We continue with the analysis of the relationships between the numerical variables and the categories.
```
# Create a DataFrame from the grouping of categories
# using pivot_table
df_pivot = df.pivot_table(index='instituicao', columns='setor', \
aggfunc={'n_pessoas':sum,'dias':sum})
df_pivot
```
### Data visualisation with Seaborn
- Histograms
```
# Histogram of n_pessoas with Seaborn
with sns.axes_style('white'):
sns.set_context("poster");
h = sns.distplot(df['n_pessoas'], \
kde=False, \
axlabel="No. de pessoas");
# Histogram of dias with Seaborn
with sns.axes_style('white'):
sns.set_context("poster");
h = sns.distplot(df['dias'].dropna(), \
kde=False, \
axlabel="Dias");
```
- Bar plots
```
# Bar plots of the total number of people per
# institution
with sns.axes_style('white'):
sns.set_context("poster");
b = sns.factorplot(x='n_pessoas', y='instituicao', \
kind='bar', \
data=df, \
order=instituicao_list.index, \
aspect=1.5, size=5.5);
plt.xlabel("No. de pessoas");
plt.ylabel("Instituicao");
with sns.axes_style('white'):
sns.set_context("poster");
g = sns.barplot(x='n_pessoas', y='setor', \
data=df, \
order=setor_list.index, \
estimator=np.sum);
plt.xlabel("No. de pessoas");
plt.ylabel("Setor");
```
### Correlation analysis
To examine the correlation between the numerical variables, we can use plots such as:
- Joint plots
```
# Joint plot of n_pessoas vs. dias (KDE formatting)
with sns.axes_style('white'):
g = sns.jointplot('n_pessoas', 'dias', data=df, kind='kde')
g.set_axis_labels("No. de pesooas", "Dias");
with sns.axes_style('white'):
g = sns.jointplot('n_pessoas', 'dias', df, kind='reg')
g.set_axis_labels("No. de pesooas", "Dias");
```
- Pair plot
```
with sns.axes_style('white'):
sns.pairplot(df.dropna(), hue='instituicao', size=2.5);
```
- Faceted histograms
```
grid = sns.FacetGrid(df.dropna(), row='instituicao', col='setor', margin_titles=True)
grid.map(plt.hist, 'n_pessoas', bins=np.linspace(0,10,20));
```
# Running Tune experiments with BlendSearch and CFO
In this tutorial we introduce BlendSearch and CFO, while running a simple Ray Tune
experiment. Tune’s Search Algorithms integrate with FLAML and, as a result, allow
you to seamlessly scale up a BlendSearch and CFO optimization
process - without sacrificing performance.
Fast Library for Automated Machine Learning & Tuning (FLAML) does not rely on the
gradient of the objective function, but instead, learns from samples of the
search space. It is suitable for optimizing functions that are non-differentiable,
with many local minima, or even unknown but only testable. Therefore, it
belongs to the domain of "derivative-free optimization"
and "black-box optimization".
FLAML has two primary algorithms: (1) Frugal Optimization for Cost-related
Hyperparameters (CFO) begins with a low-cost initial point and gradually moves to
a high-cost region as needed. It is a local search method that leverages a randomized
direct search with an adaptive step-size and random restarts.
As a local search method, it has an appealing provable convergence rate and bounded
cost but may get trapped in suboptimal local minima. (2) Economical Hyperparameter
Optimization With Blended Search Strategy (BlendSearch) combines CFO's local search
with global search, making it less susceptible to local minima traps.
It leverages the frugality of CFO and the space exploration ability of global search
methods such as Bayesian optimization.
In this example we minimize a simple objective to briefly demonstrate the usage of
FLAML with Ray Tune via `BlendSearch` and `CFO`. It's useful to keep in mind that
despite the emphasis on machine learning experiments, Ray Tune optimizes any implicit
or explicit objective. Here we assume `flaml==0.4.1` and `optuna==2.9.1` libraries
are installed. To learn more, please refer to
the [FLAML website](https://github.com/microsoft/FLAML/tree/main/flaml/tune).
Below are all the imports we need for this example.
```
import time
import ray
from ray import tune
from ray.tune.suggest import ConcurrencyLimiter
from ray.tune.suggest.flaml import BlendSearch, CFO
```
Let's start by defining a simple evaluation function.
We artificially sleep for a bit (`0.1` seconds) to simulate a long-running ML experiment.
This setup assumes that we're running multiple `step`s of an experiment while trying to
tune three hyperparameters, namely `width`, `height`, and `activation`.
```
def evaluate(step, width, height, activation):
time.sleep(0.1)
activation_boost = 10 if activation=="relu" else 1
return (0.1 + width * step / 100) ** (-1) + height * 0.1 + activation_boost
```
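For intuition about what the optimizers should find, here is a quick hand evaluation of the objective defined above (a small illustrative check, not part of the tuning run):
```
# At step=1, width=10, height=50, activation="relu":
# (0.1 + 10 * 1 / 100) ** (-1) + 50 * 0.1 + 10 = 5.0 + 5.0 + 10 = 20.0
print(evaluate(1, 10.0, 50.0, "relu"))  # prints 20.0
# Larger width, lower height, and activation="tanh" all reduce the loss.
```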
Next, our `objective` function takes a Tune `config`, evaluates the `score` of your
experiment in a training loop, and uses `tune.report` to report the `score` back to Tune.
```
def objective(config):
for step in range(config["steps"]):
score = evaluate(step, config["width"], config["height"], config["activation"])
tune.report(iterations=step, mean_loss=score)
ray.init(configure_logging=False)
```
## Running Tune experiments with BlendSearch
This example demonstrates the usage of Economical Hyperparameter Optimization
With Blended Search Strategy (BlendSearch) with Ray Tune.
Now we define the search algorithm built from `BlendSearch`, constrained to a
maximum of `4` concurrent trials with a `ConcurrencyLimiter`.
```
algo = BlendSearch()
algo = ConcurrencyLimiter(algo, max_concurrent=4)
```
The number of samples for this Tune run is set to `1000`.
(you can decrease this if it takes too long on your machine).
```
num_samples = 1000
# If 1000 samples take too long, you can reduce this number.
# We override this number here for our smoke tests.
num_samples = 10
```
Next we define a search space. The critical assumption is that the optimal
hyperparameters live within this space. Yet, if the space is very large, then those
hyperparameters may be difficult to find in a short amount of time.
```
search_config = {
"steps": 100,
"width": tune.uniform(0, 20),
"height": tune.uniform(-100, 100),
"activation": tune.choice(["relu, tanh"])
}
```
Finally, we run the experiment to `"min"`imize the "mean_loss" of the `objective` by
searching `search_config` via `algo`, `num_samples` times. The previous sentence
fully characterizes the search problem we aim to solve. With this in mind, observe
how efficient it is to execute `tune.run()`.
```
analysis = tune.run(
objective,
search_alg=algo,
metric="mean_loss",
mode="min",
name="blendsearch_exp",
num_samples=num_samples,
config=search_config,
)
```
Here are the hyperparameters found to minimize the mean loss of the defined objective.
```
print("Best hyperparameters found were: ", analysis.best_config)
```
## Incorporating a time budget to the experiment
Define the time budget in seconds:
```
time_budget_s = 30
```
Similarly we define a search space, but this time we feed it as an argument to
`BlendSearch` rather than `tune.run`'s `config` argument.
We next define the time budget via `set_search_properties`.
And once again include the `ConcurrencyLimiter`.
```
algo = BlendSearch(
metric="mean_loss",
mode="min",
space={
"width": tune.uniform(0, 20),
"height": tune.uniform(-100, 100),
"activation": tune.choice(["relu", "tanh"]),
},
)
algo.set_search_properties(config={"time_budget_s": time_budget_s})
algo = ConcurrencyLimiter(algo, max_concurrent=4)
```
Now we run the experiment, this time with `time_budget_s` included as an argument.
Note: We allow for virtually infinite `num_samples` by passing `-1`, so that the
experiment is stopped according to the time budget rather than a sample limit.
```
analysis = tune.run(
objective,
search_alg=algo,
time_budget_s=time_budget_s,
metric="mean_loss",
mode="min",
name="blendsearch_exp",
num_samples=-1,
config={"steps": 100},
)
print("Best hyperparameters found were: ", analysis.best_config)
```
## Running Tune experiments with CFO
This example demonstrates the usage of Frugal Optimization for Cost-related
Hyperparameters (CFO) with Ray Tune.
We now define the search algorithm as built from `CFO`, constrained to a maximum of `4`
concurrent trials with a `ConcurrencyLimiter`.
```
algo = CFO()
algo = ConcurrencyLimiter(algo, max_concurrent=4)
```
The number of samples is the number of hyperparameter combinations that will be
tried out. This Tune run is set to `1000` samples.
(you can decrease this if it takes too long on your machine).
```
num_samples = 1000
# If 1000 samples take too long, you can reduce this number.
# We override this number here for our smoke tests.
num_samples = 10
```
Next we define a search space. The critical assumption is that the optimal
hyperparameters live within this space. Yet, if the space is very large, then
those hyperparameters may be difficult to find in a short amount of time.
```
search_config = {
"steps": 100,
"width": tune.uniform(0, 20),
"height": tune.uniform(-100, 100),
"activation": tune.choice(["relu, tanh"])
}
```
Finally, we run the experiment to `"min"`imize the "mean_loss" of the `objective`
by searching `search_config` via `algo`, `num_samples` times. The previous sentence
fully characterizes the search problem we aim to solve. With this in mind,
notice how efficient it is to execute `tune.run()`.
```
analysis = tune.run(
objective,
search_alg=algo,
metric="mean_loss",
mode="min",
name="cfo_exp",
num_samples=num_samples,
config=search_config,
)
```
Here are the hyperparameters found to minimize the mean loss of the defined objective.
```
print("Best hyperparameters found were: ", analysis.best_config)
ray.shutdown()
```
# Exercise 1.2
Using the file acidentes-apriori.csv, an extract of the open dataset of accidents on all Brazilian federal highways published by the Federal Highway Police (Polícia Rodoviária Federal), available in full at: https://portal.prf.gov.br/dados-abertos-acidentes
Try to identify which rules are most relevant for analysing accidents. Put yourself in the shoes of a data scientist at the highway police: which rules are relevant? Are there low-relevance itemsets that could be discarded from the analysis?
Apply some filters to the itemsets to make the generated rules easier to analyse.
```
!pip install mlxtend
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori
from mlxtend.frequent_patterns import association_rules
data = pd.read_csv('bases/acidentes-apriori.csv', sep=';' , engine='python', error_bad_lines=False, encoding='mac_roman')
data.head(10)
# print('dia semana =', data.dia_semana.unique())
# print('UF =', data.uf.unique())
# print('causa acidente = ', data.causa_acidente.unique())
# print('tipo acidente = ', data.tipo_acidente.unique())
# print('classificacao acidente = ', data.classificacao_acidente.unique())
# print('fase dia = ', data.fase_dia.unique())
# print('condicao metereologica = ', data.condicao_metereologica.unique())
# print('tracado via = ', data.tracado_via.unique())
# Fix garbled words caused by the file encoding
data['causa_acidente'] = data['causa_acidente'].str.replace('êe', 'e')
data['tipo_acidente'] = data['tipo_acidente'].str.replace('êe', 'e')
data['classificacao_acidente'] = data['classificacao_acidente'].str.replace('êe', 'e')
data['causa_acidente'] = data['causa_acidente'].str.replace('ê', 'e')
data['tipo_acidente'] = data['tipo_acidente'].str.replace('ê', 'e')
data['classificacao_acidente'] = data['classificacao_acidente'].str.replace('ê', 'e')
data['causa_acidente'] = data['causa_acidente'].str.replace('çc', 'c')
data['tipo_acidente'] = data['tipo_acidente'].str.replace('çc', 'c')
data['classificacao_acidente'] = data['classificacao_acidente'].str.replace('çc', 'c')
data.head(10)
print('Dia semana:')
print(data.dia_semana.value_counts(), '\n')
print('UF:')
print( data.uf.value_counts(), '\n')
print('Causa acidente:')
print( data.causa_acidente.value_counts(), '\n')
print('Tipo acidente:')
print( data.tipo_acidente.value_counts(), '\n')
print('Classificacao acidente:')
print( data.classificacao_acidente.value_counts(), '\n')
print('Fase dia:')
print( data.fase_dia.value_counts(), '\n')
print('Condicao metereologica:')
print( data.condicao_metereologica.value_counts(), '\n')
print('Tracado via:')
print( data.tracado_via.value_counts())
# Looking at the counts above, the day-of-week and state (UF) columns seem to contribute little to explaining the accidents
data.drop(columns=['dia_semana','uf'], inplace=True)
qtdlinhas = data.shape[0]
qtdcols = data.shape[1]
transacoes = []
for i in range(0, qtdlinhas):
linhaTransacao = []
for j in range(0, qtdcols):
linhaTransacao.append(str(data.values[i,j]))
transacoes.append(linhaTransacao)
te = TransactionEncoder()
#Fit reads the transactions and determines how many columns (unique items) will be generated during processing
te.fit(transacoes)
#The TransactionEncoder object converts the transactions into a binary matrix in which each row represents one transaction
matriz_transacoes = te.transform(transacoes)
#Create an auxiliary DataFrame with the binary transaction matrix (step te.transform(transacoes)) and the columns obtained (step te.fit(transacoes))
dfAuxiliar = pd.DataFrame(matriz_transacoes, columns=te.columns_)
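# (Illustrative aside: on a toy list such as [['A', 'B'], ['B', 'C']],
#  TransactionEncoder().fit(toy).transform(toy) yields a 2 x 3 boolean matrix with
#  one column per distinct item; dfAuxiliar above is the same encoding at full scale.)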
# Remove columns that do not contribute: they would not be strong explanations for an accident, or are not avoidable.
dfAuxiliar.drop(columns=['Ignorado', 'Nao Informado', 'Pleno dia', 'Ceu Claro', 'Sol', 'Reta', 'Mal Subito', 'Incendio', 'Fenomenos da Natureza'], inplace=True)
#Obtain the most frequent itemsets with a minimum support of 0.01. The use_colnames parameter means that the column names of the dfAuxiliar DataFrame
#are used to build the association rules
itemsets_freq = apriori(dfAuxiliar, min_support=0.01, use_colnames=True)
#Obtain the association rules from the most frequent itemsets
regras = association_rules(itemsets_freq, metric="confidence", min_threshold=0.6)
#Sort the rules by confidence
regras_ordenadas = regras.sort_values(['confidence', 'support'] , ascending=False)
print('Foram encontradas', regras_ordenadas.shape[0], 'regras')
regras_ordenadas = regras_ordenadas[['antecedents', 'consequents', 'support', 'confidence']]
# Helper function for printing the rules
def print_row(row):
support_format = "{:.3f}".format(row['support'])
confidence_format = "{:.3f}".format(row['confidence'])
print('[S:', support_format, ', C:', confidence_format, ']',
tuple(row['antecedents']), '->', tuple(row['consequents']))
# Print the 20 most relevant rules
for i in range(0, 20):
print_row(regras_ordenadas.iloc[i])
# Injured victims
# Filter the rules whose consequent contains 'Com Vitimas Feridas' and whose antecedent has more than one item
subset_vitimas_feridas = {'Com Vitimas Feridas'}
regras_vitimas_feridas = regras_ordenadas[ (regras_ordenadas['antecedents'].apply(lambda x: len(x) > 1)) &
(regras_ordenadas['consequents'].apply(lambda x: subset_vitimas_feridas.issubset(x))) ]
# Print at most the 20 most relevant rules
for i in range(0, min(20, regras_vitimas_feridas.shape[0])):
print_row(regras_vitimas_feridas.iloc[i])
```
## Accidents with injured victims
Among the rules analysed, the main accident causes are:
- falls of a vehicle occupant (Queda de ocupante do veiculo), with the highest supports
- transversal collisions at road intersections
- driver inattention
- drivers disobeying traffic rules
- accidents on curves with a slippery road surface
<a href="https://colab.research.google.com/github/kumiori/mec647/blob/main/mec647_Elast_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
%%capture
import sys
try:
import google.colab # noqa: F401
except ImportError:
import ufl # noqa: F401
import dolfinx # noqa: F401
else:
try:
import ufl
import dolfinx
except ImportError:
!wget "https://fem-on-colab.github.io/releases/fenicsx-install.sh" -O "/tmp/fenicsx-install.sh" && bash "/tmp/fenicsx-install.sh";
import ufl # noqa: F401
import dolfinx # noqa: F401
%%capture
!sudo apt install libgl1-mesa-glx xvfb;
!{sys.executable} -m pip install pythreejs;
!{sys.executable} -m pip install ipygany;
!{sys.executable} -m pip install --upgrade pyyaml
try:
import google.colab
except ImportError:
pass
else:
pass
# google.colab.output.enable_custom_widget_manager();
try:
import pyvista
except ImportError:
!pip3 install --upgrade pyvista itkwidgets;
import pyvista # noqa: F401
from pyvista.utilities import xvfb
try:
import gmsh
except ImportError:
!{sys.executable} -m pip install gmsh
import gmsh
```
# The problem of elasticity
Let $\Omega \subset (0, L)^D$, with $D=1, 2, 3$ and $L$ finite, the (or one) characteristic length of the specimen. For any $u\in V_t$, the affine space of displacement fields in $H^1(\Omega, \mathbb{R}^n)$, $n=1, 2$ or $3$, satisfying the boundary conditions $bcs(t)$, consider the energy $E(u)$ defined as
$$
E(u)=\frac{1}{2}\int_\Omega A e(u): e(u)\, dx - \int_\Omega f\cdot u\, dx$$
Above, $A$ is the fourth-order elasticity tensor; in the isotropic and homogeneous case it is determined by two coefficients, say $A_0$, the stiffness (dimensional), and $\nu$, the Poisson ratio (non-dimensional).
We solve:
$$\min \left\{ E(u): u \in V_t\right\}.$$
From a mechanical standpoint, linear elasticity is the limit regime of small deformations of the general, fully nonlinear, problem of elasticity.
From a mathematical standpoint, the minimisation problem above is a standard variational problem which is i) convex, ii) posed on a closed affine subspace of a complete (Hilbert) space of functions, and iii) coercive. Its solution exists, is unique, and depends continuously upon the data. Can you show this?
Boundary conditions are such that equilibrium ...
The interest of the above is that $E(u)$ ...
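For reference, the optimality condition enforced below is the weak form of linear elasticity, obtained by setting the first variation of $E$ to zero: find $u \in V_t$ such that
$$
\int_\Omega A\, e(u) : e(v)\, dx = \int_\Omega f \cdot v \, dx \qquad \text{for all admissible variations } v,
$$
where admissible variations vanish on the Dirichlet part of the boundary. In the code, $f$ plays the role of the body load `g`, and this residual is the derivative `D_energy_u` handed to the nonlinear solver.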
```
# library include
import numpy as np
import yaml
import json
import sys
import os
from pathlib import Path
from mpi4py import MPI
import petsc4py
from petsc4py import PETSc
import dolfinx
import dolfinx.plot
from dolfinx import log
import ufl
from dolfinx.io import XDMFFile
import logging
logging.basicConfig(level=logging.INFO)
import dolfinx
import dolfinx.plot
import dolfinx.io
from dolfinx.fem import (
Constant,
Function,
FunctionSpace,
assemble_scalar,
dirichletbc,
form,
locate_dofs_geometrical,
set_bc,
)
import matplotlib.pyplot as plt
!rm -rf mec647
try:
!git clone https://github.com/kumiori/mec647.git
except Exception:
print('Something went wrong')
!rm -rf mec647
!git clone https://github.com/kumiori/mec647.git
sys.path.append('mec647/')
# meshes
import meshes
from meshes import primitives
# visualisation
from utils import viz
import matplotlib.pyplot as plt
from utils.viz import plot_mesh, plot_vector, plot_scalar
# Parameters
parameters = {
'loading': {
'min': 0,
'max': 1
},
'geometry': {
'geom_type': 'bar',
'Lx': 1.,
'Ly': 0.1
},
'model': {
'mu': 1.,
'lmbda': 0.
},
'solvers': {
'snes': {
'snes_type': 'newtontr',
'snes_stol': 1e-8,
'snes_atol': 1e-8,
'snes_rtol': 1e-8,
'snes_max_it': 100,
'snes_monitor': "",
'ksp_type': 'preonly',
'pc_type': 'lu',
'pc_factor_mat_solver_type': 'mumps'
}
}
}
# parameters.get('loading')
# Mesh
Lx = parameters["geometry"]["Lx"]
Ly = parameters["geometry"]["Ly"]
geom_type = parameters["geometry"]["geom_type"]
gmsh_model, tdim = primitives.mesh_bar_gmshapi(geom_type,
Lx,
Ly,
0.03,
tdim=2)
mesh, mts = meshes.gmsh_model_to_mesh(gmsh_model,
cell_data=False,
facet_data=True,
gdim=2)
# Plot the mesh
plt.figure()
ax = plot_mesh(mesh)
fig = ax.get_figure()
fig.savefig(f"mesh.png")
# Functional setting
element_u = ufl.VectorElement("Lagrange", mesh.ufl_cell(),
degree=1, dim=2)
V_u = dolfinx.fem.FunctionSpace(mesh, element_u)
u = dolfinx.fem.Function(V_u, name="Displacement")
g = dolfinx.fem.Function(V_u, name="Body pressure")
u_ = dolfinx.fem.Function(V_u, name="Boundary Displacement")
# ux_ = dolfinx.fem.Function(V_u.sub(0).collapse(), name="Boundary Displacement")
# Integral measures
dx = ufl.Measure("dx", domain=mesh)
ds = ufl.Measure("ds", domain=mesh)
# Data
zero = Function(V_u)
# works in parallel!
with zero.vector.localForm() as loc:
loc.set(0.0)
one = Function(V_u)
# works in parallel!
with one.vector.localForm() as loc:
loc.set(1.0)
g = Function(V_u)
# works in parallel!
with g.vector.localForm() as loc:
loc.set(0.0)
# energy
mu = parameters["model"]["mu"]
lmbda = parameters["model"]["lmbda"]
def _e(u):
return ufl.sym(ufl.grad(u))
# isotropic strain energy density: mu*e(u):e(u) + (lambda/2)*tr(e(u))^2
en_density = 1/2 * (2*mu*ufl.inner(_e(u), _e(u)) + lmbda*ufl.tr(_e(u))**2)
energy = en_density * dx - ufl.dot(g, u) * dx
# boundary conditions
def left(x):
return np.isclose(x[0], 0.)
def right(x):
return np.isclose(x[0], Lx)
left_facets = dolfinx.mesh.locate_entities_boundary(mesh, 1, left)
left_dofs = dolfinx.fem.locate_dofs_topological(V_u, mesh.topology.dim - 1,
left_facets)
# right side
right_facets = dolfinx.mesh.locate_entities_boundary(mesh, 1, right)
right_dofs = dolfinx.fem.locate_dofs_topological(V_u, mesh.topology.dim - 1,
right_facets)
bcs = [dirichletbc(zero, left_dofs), dirichletbc(one, right_dofs)]
bcs
left_dofs
# solving
from solvers import SNESSolver
D_energy_u = ufl.derivative(energy, u, ufl.TestFunction(V_u))
problem = SNESSolver(
D_energy_u,
u,
bcs,
bounds=None,
petsc_options=parameters.get("solvers").get("snes"),
prefix="elast",
)
problem.solve()
```
## Visualisation & Post Processing
```
# pyvista and xvfb are needed for the off-screen plots below
import pyvista
from pyvista.utilities import xvfb

def plot_vector(u, plotter, subplot=None):
if subplot:
plotter.subplot(subplot[0], subplot[1])
V = u.function_space
mesh = V.mesh
topology, cell_types, _ = dolfinx.plot.create_vtk_mesh(mesh, mesh.topology.dim)
num_dofs_local = u.function_space.dofmap.index_map.size_local
geometry = u.function_space.tabulate_dof_coordinates()[:num_dofs_local]
values = np.zeros((V.dofmap.index_map.size_local, 3), dtype=np.float64)
values[:, : mesh.geometry.dim] = u.vector.array.real.reshape(
V.dofmap.index_map.size_local, V.dofmap.index_map_bs
)
grid = pyvista.UnstructuredGrid(topology, cell_types, geometry)
grid["vectors"] = values
grid.set_active_vectors("vectors")
# geom = pyvista.Arrow()
# glyphs = grid.glyph(orient="vectors", factor=1, geom=geom)
glyphs = grid.glyph(orient="vectors", factor=1.0)
plotter.add_mesh(glyphs)
plotter.add_mesh(
grid, show_edges=True, color="black", style="wireframe", opacity=0.3
)
plotter.view_xy()
return plotter
def plot_scalar(alpha, plotter, subplot=None, lineproperties={}):
if subplot:
plotter.subplot(subplot[0], subplot[1])
V = alpha.function_space
mesh = V.mesh
topology, cell_types, _ = dolfinx.plot.create_vtk_mesh(mesh, mesh.topology.dim)
grid = pyvista.UnstructuredGrid(topology, cell_types, mesh.geometry.x)
plotter.subplot(0, 0)
grid.point_data["alpha"] = alpha.compute_point_values().real
grid.set_active_scalars("alpha")
plotter.add_mesh(grid, **lineproperties)
plotter.view_xy()
return plotter
# plt.figure()
# ax = plot_mesh(mesh)
# fig = ax.get_figure()
# fig.savefig(f"mesh.png")
# postprocessing
xvfb.start_xvfb(wait=0.05)
pyvista.OFF_SCREEN = True
plotter = pyvista.Plotter(
title="Displacement",
window_size=[1600, 600],
shape=(1, 2),
)
# _plt = plot_scalar(u_.sub(0), plotter, subplot=(0, 0))
_plt = plot_vector(u, plotter, subplot=(0, 1))
_plt.screenshot(f"displacement_MPI.png")
```
### Univariate linear regression using gradient descent
- Hypothesis: y=t0+t1*x
```
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
```
#### Dataset
```
data_train = np.zeros((2,20))
data_train[0] = [4, 5, 5, 7, 8, 8, 9, 11, 11, 12, 13, 14, 16, 18, 19, 19, 21, 22, 25, 27] #x (input)
data_train[1] = [21, 24, 27, 30, 29, 31, 32, 33, 36, 37, 41, 37, 40, 39, 41, 42, 44, 45, 45, 48] #y (what we want to predict)
plt.plot(data_train[0], data_train[1], 'bx')
plt.ylabel('Y_train')
plt.xlabel('X_train')
plt.title('Training dataset')
plt.show()
```
#### Implement prediction function
- Based on hypothesis h(x) = t0 + t1*x
```
def make_prediction(X, t0, t1):
y = (t1 * X) + t0
return y
```
#### Implement cost function
- Using standard mean squared error (see the formula below)
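With $m$ training examples and hypothesis $h(x) = t_0 + t_1 x$, the cost implemented in `compute_cost` is

$$
J(t_0, t_1) \;=\; \frac{1}{m}\sum_{i=1}^{m}\big(y_i - h(x_i)\big)^2 .
$$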
```
def compute_cost(y, y_predicted):
squared_differences = [data**2 for data in (y-y_predicted)]
cost = sum(squared_differences) / float(len(y))
return cost
```
#### Implement gradient descent function
- For each epoch:
- Compute the predicted y values using the current t0 and t1 values
- Compute the cost function on the entire dataset
    - Compute the gradients (see the update equations below)
- Update the current t0 and t1 values with gradient descent
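The gradients and updates implemented in `gradient_descent` (with learning rate $\alpha$) are

$$
\frac{\partial J}{\partial t_0} = -\frac{2}{m}\sum_{i=1}^{m}\big(y_i - h(x_i)\big),
\qquad
\frac{\partial J}{\partial t_1} = -\frac{2}{m}\sum_{i=1}^{m} x_i\,\big(y_i - h(x_i)\big),
$$

$$
t_0 \leftarrow t_0 - \alpha\,\frac{\partial J}{\partial t_0},
\qquad
t_1 \leftarrow t_1 - \alpha\,\frac{\partial J}{\partial t_1}.
$$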
```
def gradient_descent(X, y, t0_current=0, t1_current=0, epochs=1000, learning_rate=0.0001):
cost_array = np.zeros((4,epochs))
for i in range(epochs):
y_current = make_prediction(X, t0_current, t1_current)
cost = compute_cost(y, y_current)
t1_grad = -2/float(len(y)) * sum(X * (y - y_current))
t0_grad = -2/float(len(y)) * sum(y - y_current)
t1_current = t1_current - (learning_rate * t1_grad)
t0_current = t0_current - (learning_rate * t0_grad)
cost_array[:,i] = [i, cost, t0_current, t1_current]
return t1_current, t0_current, cost, cost_array
```
#### Run the algorithm
```
[t1_current, t0_current, cost, cost_array] = gradient_descent(data_train[0], data_train[1], t0_current=0, t1_current=0, epochs=20000, learning_rate=0.001)
print "The is h(x) = t0 + t1*x with t0 = {0} and t1 = {1}.".format(t0_current, t1_current)
print "This solution has a cost of {0}.".format(cost)
```
#### Plot the hypothesis
```
plt.plot(data_train[0], data_train[1], 'bx')
plt.ylabel('Y_train')
plt.xlabel('X_train')
plt.title('Training dataset')
h = np.linspace(0, 30, 100)
plt.plot(h, t0_current+t1_current*h)
plt.show()
```
#### Plot the cost vs the number of epochs
- Useful to make sure that your algorithm is learning and the cost is being minimized
- We can observe that the algorithm starts to converge after 2500 epochs
```
plt.plot(cost_array[0], cost_array[1])
plt.ylabel('Cost')
plt.xlabel('epochs')
plt.title('Cost vs epochs')
plt.show()
```
#### Plot the evolution of the t0 param. vs the number of epochs
- We initialized the t0 param. to 0 here.
```
plt.plot(cost_array[0], cost_array[2])
plt.ylabel('t0')
plt.xlabel('epochs')
plt.title('t0 vs epochs')
plt.show()
```
#### Plot the evolution of the t1 param. vs the number of epochs
- We initialized the t1 param. to 0 here.
```
plt.plot(cost_array[0], cost_array[3])
plt.ylabel('t1')
plt.xlabel('epochs')
plt.title('t1 vs epochs')
plt.show()
```
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt # print graph of costs
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
from subprocess import check_output
print(check_output(["ls", "../input"]).decode("utf8"))
# Any results you write to the current directory are saved as output.
data=pd.read_csv('../input/fashion-mnist_train.csv')
X_train=data.iloc[:,1:].values.reshape(-1,28,28,1)/255
Y_train=data.iloc[:,0].values.reshape(-1,1)
print(X_train.shape,Y_train.shape)
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
Y_Train=np.zeros((60000,10))
for elem in range(60000):
Y_Train[elem,Y_train[elem,0]]=1
Y_train=Y_Train
Xtr,Xte,Ytr,Yte=train_test_split(X_train,Y_train,test_size=0.2)
print(Xtr.shape,Ytr.shape,Xte.shape,Yte.shape)
import tensorflow as tf
X=tf.placeholder(tf.float32,[None,28,28,1])
Y=tf.placeholder(tf.float32,[None,10])
c1=tf.contrib.layers.conv2d(X, 10, kernel_size=(3,3))
p1=tf.contrib.layers.max_pool2d(c1,(3,3))
bn1=tf.contrib.layers.batch_norm(p1,center=True, scale=True,is_training=True,scope='bn1')
a1=tf.nn.relu(bn1)
c2=tf.contrib.layers.conv2d(a1, 30, kernel_size=(3,3))
p2=tf.contrib.layers.max_pool2d(c2,(2,2))
bn2=tf.contrib.layers.batch_norm(p2,center=True,scale=True,is_training=True,scope='bn2')
a2=tf.nn.relu(bn2)
c3=tf.contrib.layers.conv2d(a2, 50, kernel_size=(3,3))
p3=tf.contrib.layers.max_pool2d(c3,(2,2))
bn3=tf.contrib.layers.batch_norm(p3,center=True,scale=True,is_training=True,scope='bn3')
a3=tf.nn.relu(bn3)
F=tf.contrib.layers.flatten(a3)
# The final dense layer outputs raw logits (no activation), as expected by softmax_cross_entropy_with_logits
Yhat=tf.contrib.layers.fully_connected(F,10,activation_fn=None)
loss=tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=Y,logits=Yhat))
learning_rate = 0.001
optimizer=tf.train.AdamOptimizer(0.001).minimize(loss)
def random_mini_batches(X_train, Y_train, minibatch_size):
    # Draw random mini-batches from the arrays passed in (rather than the
    # globals and hard-coded sizes used previously)
    data_size = Y_train.shape[0]
    minibatches = []
    num = int(data_size / minibatch_size)
    if data_size % minibatch_size > 0:
        num = num + 1
    for i in range(num):
        inds = np.random.randint(0, data_size, size=minibatch_size)
        X_batch, Y_batch = X_train[inds, ...], Y_train[inds, ...]
        minibatches.append((X_batch, Y_batch))
    return minibatches
init = tf.global_variables_initializer()
num_epoch = 10
minibatch_size = 128
costs = []
with tf.Session() as sess:
sess.run(init)
for epoch in range(num_epoch):
minibatch_cost = 0
num_minibatches = int(Xtr.shape[0]/minibatch_size)
minibatches=random_mini_batches(Xtr,Ytr,minibatch_size)
for minibatch in minibatches:
(X_batch,Y_batch)=minibatch
_, temp_cost=sess.run([optimizer,loss],feed_dict={X:X_batch,Y:Y_batch})
minibatch_cost += temp_cost/num_minibatches
if(epoch%1 == 0):
print("Cost after epoch %i: %f" % (epoch, minibatch_cost))
if(epoch%1 == 0):
costs.append(minibatch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# inds=np.random.randint(0,12000,size=128)
# x,y=Xte[inds],Yte[inds]
# print(sess.run(loss,feed_dict={X:x,Y:y}))
# Calculate the correct predictions
predict_op = tf.argmax(Yhat, 1)
correct_prediction = tf.equal(predict_op, tf.argmax(Y, 1))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(accuracy)
train_accuracy = accuracy.eval({X: Xtr, Y: Ytr})
test_accuracy = accuracy.eval({X: Xte, Y: Yte})
print("Train Accuracy:", train_accuracy)
print("Test Accuracy:", test_accuracy)
```
# Boston Housing Model
# Data Exploration:
```
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import os
plt.rcParams['figure.dpi'] = 150
df = pd.read_csv("HousingData_train.csv")
df.head(10)
# Checking for null or nan values
df.isnull().sum()
# Visualizing Missing or nan Values
plt.figure(figsize=(15,8))
sns.heatmap(df.isnull(), yticklabels=False)
# Impute CHAS categorical feature using median of the column
print(df["CHAS"].unique())
df["CHAS"].fillna((df['CHAS'].median()), inplace=True)
df["CHAS"] = df["CHAS"].apply(np.int64)
df.head()
# Get the dependent variable column
y = df["MEDV"]
y.head()
# Get the independent variable column
X = df.drop(["MEDV"], axis=1).select_dtypes(exclude=['object'])
X.head()
# Plotting correlation between features (RAD and TAX are highly correlated)
plt.figure(figsize=(20,20))
corr = X.corr()
sns.heatmap(corr,
xticklabels=corr.columns.values,
yticklabels=corr.columns.values,
annot=True, annot_kws={"size":9},
cmap=sns.color_palette("coolwarm"))
# Using KNN Imputer to fill the rest of the missing values
from sklearn.impute import KNNImputer
og_X_column = list(X.columns.values)
impute_knn = KNNImputer(n_neighbors=2)
X = impute_knn.fit_transform(X)
X = pd.DataFrame(X,columns=og_X_column)
X.isnull().sum()
# Checking the dataframe after cleaning
X.describe()
# Grid Search Hyperparameter
params = {
"learning_rate" : [ 0.05, 0.10, 0.15, 0.20, 0.25 ],
"max_depth" : [ 3 , 5, 8, 10, 15 ],
"gamma" : [0.0, 0.1, 0.2, 0.3, 0.4],
"alpha" : [0, 2, 6 ,8 , 10],
"colsample_bytree": [ 0.3, 0.5, 0.7],
"min_child_weight" : [ 1, 3, 5, 7 ]
}
import xgboost as xgb
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import mean_squared_error as MSE
from sklearn.metrics import r2_score
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV
from sklearn.preprocessing import MinMaxScaler
xReg_model = xgb.XGBRegressor()
x_train , x_test, y_train, y_test = train_test_split(X,y, test_size = 0.2, shuffle=True)
# Perform grid search on the parameters shown above, using the output to run the next stages.
# NOTE: Please proceed to the next step if grid search does not need to be run, as this will take quite some time.
grid_xReg = GridSearchCV(estimator=xReg_model, param_grid = params, cv = 5, n_jobs=-1, verbose=10)
grid_xReg.fit(X, y)
print("Best Estiamtor:\n",grid_xReg.best_estimator_)
print("Best Score:\n",grid_xReg.best_score_)
print("Parameter List:\n",grid_xReg.best_params_)
```
# Run Linear Regression
```
# Builds a Linear Regression model and prints out the metrics to compare to other models.
from sklearn.linear_model import LinearRegression
l_regr = LinearRegression()
l_regr.fit(x_train, y_train)
cv_score_lr = cross_val_score(xReg_model, x_train, y_train, cv=10)
print("Cross Validation Scores: ", cv_score_lr)
# Predict the model
pred = l_regr.predict(x_test)
# Metrics Computation.
print("\nMSE is: {:.4f}".format(MSE(y_test, pred)))
rmse = np.sqrt(MSE(y_test, pred))
print("RMSE is: {:.4f}".format(rmse))
r2 = r2_score(y_test, pred)
print('R^2 score is {:.4f}'.format(r2))
```
# Run XGBoost Regression with optimized hyperparameters
```
# Using the XGBoost's Regressor to test out gradient boosting technique with optimized hyperparameters.
xReg_model = xgb.XGBRegressor(alpha=0, base_score=0.5, booster='gbtree', colsample_bylevel=1,
colsample_bynode=1, colsample_bytree=0.5, gamma=0.0, gpu_id=-1,
importance_type='gain', interaction_constraints='',
learning_rate=0.1, max_delta_step=0, max_depth=3,
min_child_weight=3, monotone_constraints='()',
n_estimators=100, n_jobs=12, num_parallel_tree=1, random_state=0,
reg_alpha=0, reg_lambda=1, scale_pos_weight=1, subsample=1,
tree_method='exact', validate_parameters=1, verbosity=None)
# Fitting the model
xReg_model.fit(x_train, y_train)
score = xReg_model.score(x_train, y_train)
cv_score = cross_val_score(xReg_model, x_train, y_train, cv=10)
print("Cross Validation Scores: ", cv_score)
# Predict the model
pred = xReg_model.predict(x_test)
# Metrics Computation.
print("\nMSE is: {:.4f}".format(MSE(y_test, pred)))
rmse = np.sqrt(MSE(y_test, pred))
print("RMSE is: {:.4f}".format(rmse))
r2 = r2_score(y_test, pred)
print('R^2 score is {:.4f}'.format(r2))
```
# Run prediction on Test Set
```
# Read Test Data
df_test = pd.read_csv("HousingData_test.csv")
df_test.head()
df_test.isnull().sum()
# Impute CHAS categorical feature using median of the column for test data
print(df_test["CHAS"].unique())
df_test["CHAS"].fillna((df_test['CHAS'].median()), inplace=True)
df_test["CHAS"] = df_test["CHAS"].apply(np.int64)
df_test.isnull().sum()
# Check test dataframe
df_test.head()
# Impute missing values
X_Test_Df = impute_knn.fit_transform(df_test)
X_Test_Df = pd.DataFrame(X_Test_Df,columns=og_X_column)
X_Test_Df.head()
# Predict using the XGBoost Model and append it to the test dataframe
# Predicted value is saved under new column name Predicted_MEDV
predict_test = xReg_model.predict(X_Test_Df)
X_Test_Df["Predicted_MEDV"] = predict_test
X_Test_Df.head()
```
# Export Saved pipeline files
### NOTE: This can also be used to test a single row of inputs by loading the saved models and replacing the x_tester list below with the required test inputs
```
# Pickle pipeline file and dump
from joblib import dump, load
dump(impute_knn, 'knn_imputer.joblib')
dump(xReg_model, 'xg_regressor.joblib')
# Load Model saved
impute_knn_loaded = load('knn_imputer.joblib')
xReg_model_loaded = load('xg_regressor.joblib')
# Run example prediction
x_tester = [9.39063, 0.0, 18.1, 0 , 0.740, 5.627, 93.9, 1.8172, 24.0, 666.0, 20.2, 396.90, 22.880]
x_tester_np = np.asarray(x_tester).reshape(1, -1)
x_tester_np
# Use transform (not fit_transform) so the loaded imputer reuses the state it was fitted with
x_tester_df = impute_knn_loaded.transform(x_tester_np)
x_tester_df = pd.DataFrame(x_tester_df,columns=og_X_column)
x_tester_df.head()
xReg_model_loaded.predict(x_tester_df)
```
## Preprocessing
```
# Import our dependencies
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import pandas as pd
import tensorflow as tf
# Import and read the charity_data.csv.
import pandas as pd
application_df = pd.read_csv("Resources/charity_data.csv")
application_df.head()
# Drop the non-beneficial ID columns, 'EIN' and 'NAME'.
application_df.drop(['EIN', 'NAME'], axis=1, inplace=True)
application_df
# Determine the number of unique values in each column.
application_df.nunique()
# Look at APPLICATION_TYPE value counts for binning
typeCount = application_df['APPLICATION_TYPE'].value_counts()
typeCount
# Choose a cutoff value and create a list of application types to be replaced
# use the variable name `application_types_to_replace`
application_types_to_replace = list(typeCount[typeCount<200].index)
# Replace in dataframe
for app in application_types_to_replace:
application_df['APPLICATION_TYPE'] = application_df['APPLICATION_TYPE'].replace(app,"Other")
# Check to make sure binning was successful
application_df['APPLICATION_TYPE'].value_counts()
# Look at CLASSIFICATION value counts for binning
classCount = application_df['CLASSIFICATION'].value_counts()
classCount
# You may find it helpful to look at CLASSIFICATION value counts >1
classCount1 = classCount[classCount > 1]
classCount1
# Choose a cutoff value and create a list of classifications to be replaced
# use the variable name `classifications_to_replace`
classifications_to_replace = list(classCount[classCount < 150].index)
# Replace in dataframe
for cls in classifications_to_replace:
application_df['CLASSIFICATION'] = application_df['CLASSIFICATION'].replace(cls,"Other")
# Check to make sure binning was successful
application_df['CLASSIFICATION'].value_counts()
application_df.nunique()
# Convert categorical data to numeric with `pd.get_dummies`
application_df = pd.get_dummies(application_df, dtype = float)
application_df
# Split our preprocessed data into our features and target arrays
y = application_df['IS_SUCCESSFUL'].values
X = application_df.drop(['IS_SUCCESSFUL'], axis = 1)
# Split the preprocessed data into a training and testing dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 43)
# Create a StandardScaler instances
scaler = StandardScaler()
# Fit the StandardScaler
X_scaler = scaler.fit(X_train)
# Scale the data
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
```
## Compile, Train and Evaluate the Model
```
# Define the model - deep neural net, i.e., the number of input features and hidden nodes for each layer.
features = len(X_train_scaled[0])
layer1 = 50
layer2 = 100
nn = tf.keras.models.Sequential()
# First hidden layer
nn.add(tf.keras.layers.Dense(units=layer1, input_dim=features, activation='relu'))
# Second hidden layer
nn.add(tf.keras.layers.Dense(units=layer2, activation='relu'))
# Output layer
nn.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))
# Check the structure of the model
nn.summary()
# Compile the model
nn.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
# Train the model
fit_model = nn.fit(X_train_scaled,y_train,epochs=100)
# Evaluate the model using the test data
model_loss, model_accuracy = nn.evaluate(X_test_scaled,y_test,verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
training_history = pd.DataFrame(fit_model.history)
training_history.index += 1
training_history.plot(y="loss")
training_history.plot(y='accuracy')
# Export the training history to an HDF5 file
training_history.to_hdf("charityTrainingHistory.h5", "/data/d1")
```
# Working With Datasets
Data is central to machine learning. This tutorial introduces the `Dataset` class that DeepChem uses to store and manage data. It provides simple but powerful tools for efficiently working with large amounts of data. It also is designed to easily interact with other popular Python frameworks such as NumPy, Pandas, TensorFlow, and PyTorch.
## Colab
This tutorial and the rest in this sequence can be done in Google Colab. If you'd like to open this notebook in Colab, you can use the following link.
[](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/Working_With_Datasets.ipynb)
## Setup
To run DeepChem within Colab, you'll need to run the following installation commands. This will take about 5 minutes to run to completion and install your environment. You can of course run this tutorial locally if you prefer. In that case, don't run these cells since they will download and install Anaconda on your local machine.
```
!curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import conda_installer
conda_installer.install()
!/root/miniconda/bin/conda info -e
!pip install --pre deepchem
```
We can now import the `deepchem` package to play with.
```
import deepchem as dc
dc.__version__
```
# Anatomy of a Dataset
In the last tutorial we loaded the Delaney dataset of molecular solubilities. Let's load it again.
```
tasks, datasets, transformers = dc.molnet.load_delaney(featurizer='GraphConv')
train_dataset, valid_dataset, test_dataset = datasets
```
We now have three Dataset objects: the training, validation, and test sets. What information does each of them contain? We can start to get an idea by printing out the string representation of one of them.
```
print(test_dataset)
```
There's a lot of information there, so let's start at the beginning. It begins with the label "DiskDataset". Dataset is an abstract class. It has a few subclasses that correspond to different ways of storing data.
- `DiskDataset` is a dataset that has been saved to disk. The data is stored in a way that can be efficiently accessed, even if the total amount of data is far larger than your computer's memory.
- `NumpyDataset` is an in-memory dataset that holds all the data in NumPy arrays. It is a useful tool when manipulating small to medium sized datasets that can fit entirely in memory.
- `ImageDataset` is a more specialized class that stores some or all of the data in image files on disk. It is useful when working with models that have images as their inputs or outputs.
Now let's consider the contents of the Dataset. Every Dataset stores a list of *samples*. Very roughly speaking, a sample is a single data point. In this case, each sample is a molecule. In other datasets a sample might correspond to an experimental assay, a cell line, an image, or many other things. For every sample the dataset stores the following information.
- The *features*, referred to as `X`. This is the input that should be fed into a model to represent the sample.
- The *labels*, referred to as `y`. This is the desired output from the model. During training, it tries to make the model's output for each sample as close as possible to `y`.
- The *weights*, referred to as `w`. This can be used to indicate that some data values are more important than others. In later tutorials we will see examples of how this is useful.
- An *ID*, which is a unique identifier for the sample. This can be anything as long as it is unique. Sometimes it is just an integer index, but in this dataset the ID is a SMILES string describing the molecule.
Notice that `X`, `y`, and `w` all have 113 as the size of their first dimension. That means this dataset contains 113 samples.
The final piece of information listed in the output is `task_names`. Some datasets contain multiple pieces of information for each sample. For example, if a sample represents a molecule, the dataset might record the results of several different experiments on that molecule. This dataset has only a single task: "measured log solubility in mols per litre". Also notice that `y` and `w` each have shape (113, 1). The second dimension of these arrays usually matches the number of tasks.
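A quick way to confirm these counts is to print the shapes directly (a minimal sketch; the `X`, `y`, `w`, and `ids` properties are discussed in the next section):
```
# The first dimension of X, y, and w is the number of samples (113 here);
# the second dimension of y and w matches the number of tasks.
print(test_dataset.X.shape, test_dataset.y.shape, test_dataset.w.shape, len(test_dataset.ids))
```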
# Accessing Data from a Dataset
There are many ways to access the data contained in a dataset. The simplest is just to directly access the `X`, `y`, `w`, and `ids` properties. Each of these returns the corresponding information as a NumPy array.
```
test_dataset.y
```
This is a very easy way to access data, but you should be very careful about using it. This requires the data for all samples to be loaded into memory at once. That's fine for small datasets like this one, but for large datasets it could easily take more memory than you have.
A better approach is to iterate over the dataset. That lets it load just a little data at a time, process it, then free the memory before loading the next bit. You can use the `itersamples()` method to iterate over samples one at a time.
```
for X, y, w, id in test_dataset.itersamples():
print(y, id)
```
Most deep learning models can process a batch of multiple samples all at once. You can use `iterbatches()` to iterate over batches of samples.
```
for X, y, w, ids in test_dataset.iterbatches(batch_size=50):
print(y.shape)
```
`iterbatches()` has other features that are useful when training models. For example, `iterbatches(batch_size=100, epochs=10, deterministic=False)` will iterate over the complete dataset ten times, each time with the samples in a different random order.
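As a minimal sketch of that call (it only counts batches rather than training a model):
```
# Ten shuffled passes over the training set in batches of 100
n_batches = 0
for X, y, w, ids in train_dataset.iterbatches(batch_size=100, epochs=10, deterministic=False):
    n_batches += 1
print(n_batches)
```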
Datasets can also expose data using the standard interfaces for TensorFlow and PyTorch. To get a `tensorflow.data.Dataset`, call `make_tf_dataset()`. To get a `torch.utils.data.IterableDataset`, call `make_pytorch_dataset()`. See the API documentation for more details.
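A minimal sketch of those two adapters (default arguments assumed; the available keyword arguments are listed in the API documentation):
```
tf_data = test_dataset.make_tf_dataset()          # a tensorflow.data.Dataset
torch_data = test_dataset.make_pytorch_dataset()  # a torch.utils.data.IterableDataset
print(type(tf_data), type(torch_data))
```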
The final way of accessing data is `to_dataframe()`. This copies the data into a Pandas `DataFrame`. This requires storing all the data in memory at once, so you should only use it with small datasets.
```
test_dataset.to_dataframe()
```
# Creating Datasets
Now let's talk about how you can create your own datasets. Creating a `NumpyDataset` is very simple: just pass the arrays containing the data to the constructor. Let's create some random arrays, then wrap them in a NumpyDataset.
```
import numpy as np
X = np.random.random((10, 5))
y = np.random.random((10, 2))
dataset = dc.data.NumpyDataset(X=X, y=y)
print(dataset)
```
Notice that we did not specify weights or IDs. These are optional, as is `y` for that matter. Only `X` is required. Since we left them out, it automatically built `w` and `ids` arrays for us, setting all weights to 1 and setting the IDs to integer indices.
```
dataset.to_dataframe()
```
What about creating a DiskDataset? If you have the data in NumPy arrays, you can call `DiskDataset.from_numpy()` to save it to disk. Since this is just a tutorial, we will save it to a temporary directory.
```
import tempfile
with tempfile.TemporaryDirectory() as data_dir:
disk_dataset = dc.data.DiskDataset.from_numpy(X=X, y=y, data_dir=data_dir)
print(disk_dataset)
```
What about larger datasets that can't fit in memory? What if you have some huge files on disk containing data on hundreds of millions of molecules? The process for creating a DiskDataset from them is slightly more involved. Fortunately, DeepChem's `DataLoader` framework can automate most of the work for you. That is a larger subject, so we will return to it in a later tutorial.
# Congratulations! Time to join the Community!
Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways:
## Star DeepChem on [GitHub](https://github.com/deepchem/deepchem)
This helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.
## Join the DeepChem Gitter
The DeepChem [Gitter](https://gitter.im/deepchem/Lobby) hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!
```
from gs_quant.session import GsSession
# external users should substitute their client id and secret; please skip this step if using internal jupyterhub
GsSession.use(client_id=None, client_secret=None,scopes=('read_product_data', 'run_analytics'))
from gs_quant.instrument import FXOption, EqOption
from gs_quant.data import Dataset
from gs_quant.timeseries import last_value, correlation, percentiles, volatility, Returns
from gs_quant.risk import FXSpot
from gs_quant.datetime.relative_date import RelativeDate
from gs_quant.markets.portfolio import Portfolio
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
pd.set_option('display.precision', 2)
```
## EQ FX Hedging Screen
We screen for the most attractive FX options to hedge an equity index such as the S&P. For each 3m FX option, the direction is chosen based on historical 1y correlations computed on weekly returns. The strikes and notionals of the FX options are adjusted by the ratio of 6m realized FX vol to equity index vol.
Realized Correlation uses weekly returns with the currency value in USD and percentile uses a 5y history. 6m realized volatility ratio uses weekly returns. Strike price is spot + FX / SPX vol ratio. Strike and Spot are in normal spot convention. Discount to SPX is FX Option premium / SPX price. Past performance is not indicative of future results.
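As a worked example of the strike and notional logic in `calculate_screen` below (the numbers here are hypothetical):
```
# Hypothetical 6m vol ratio of 0.5 (FX realized vol is half the equity index vol):
vol_ratio = 0.5
strike = f's+{vol_ratio * 10}%'            # -> 's+5.0%', i.e. struck 5% above spot
equity_notional = 1_000_000                # hypothetical equity option notional
fx_notional = equity_notional / vol_ratio  # -> 2,000,000, scaled by the inverse vol ratio
print(strike, fx_notional)
```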
```
def calculate_screen(eq_ric, fx_crosses, start=RelativeDate('-5y').apply_rule(), end=RelativeDate('-1b').apply_rule()):
fxspot_data = Dataset('WMFXSPOT').get_data(start, end, bbid=fx_crosses)
fxspot_df = pd.pivot_table(fxspot_data, values='midPrice', index=['date'], columns=['bbid']).resample('W-FRI').last()
eq = Dataset('TREOD').get_data(start, end, ric=eq_ric).closePrice.resample('W-FRI').last()
eq_vol = last_value(volatility(eq, 24))
cors = pd.DataFrame({bbid: correlation(eq, 1/fxspot_df[bbid] if bbid[0] == 'U' else fxspot_df[bbid] , 52) for bbid in fx_crosses})
cur_cor = pd.Series(cors.tail(1).squeeze()*100, name='1y Realized Correlation (%)')
pct_cor = pd.Series({bbid: last_value(percentiles(cors[bbid])) for bbid in fx_crosses}, name='Corr %-ile')
vol_ = pd.DataFrame({bbid: volatility(fxspot_df[bbid], 24, Returns.LOGARITHMIC) for bbid in fx_crosses})
vol_cur = pd.Series(vol_.tail(1).squeeze() / eq_vol , name=f'6m Realized Vol Ratio (FX / {eq_ric})')
table = pd.concat([cur_cor, pct_cor, vol_cur ], axis=1)
#price options
eqo = EqOption(option_type='Put', underlier=eq_ric, exchange='NYSE', strike_price='90%', expiration_date='3m', buy_sell='Buy', premium=0)
eqo.resolve()
notional = eqo.strike_price * eqo.multiplier
portfolio = Portfolio()
for cross in fx_crosses:
ratio = table.loc[cross][f'6m Realized Vol Ratio (FX / {eq_ric})']
if cross[0] != 'U':
cross = f'USD{cross[:3]}'
portfolio.append(FXOption(pair=cross, option_type='Call', expiration_date='3m', strike_price=f's+{ratio*10}%', buy_sell='Buy',
notional_amount=notional/ratio))
portfolio.resolve()
port_df = portfolio.to_frame()
port_df['Cost in bps'] = (port_df['premium'] / port_df['notional_amount'])*1e4
port_df[f'Discount to {eq_ric} (%)'] =(abs(port_df['premium'])/eqo.price() - 1)*100
port_df['Spot'] = list(portfolio.calc(FXSpot).result())
port_df['pair'] = port_df['pair'].str.replace(' ', '')
    port_df['pair_'] = fx_crosses  # index by the crosses passed in, not the global g10 list
port_df = port_df.set_index('pair_')
port_df['Strike'] = [1 / port_df.loc[x]['strike_price'] if x != port_df.loc[x]['pair'] else port_df.loc[x]['strike_price'] for x in port_df.index]
result = table.join([port_df['Strike'], port_df['Spot'],port_df['Cost in bps'],port_df[f'Discount to {eq_ric} (%)']])
return result.sort_values(by=['1y Realized Correlation (%)', f'Discount to {eq_ric} (%)'], ascending=(False, True))
start = RelativeDate('-5y').apply_rule()
end = RelativeDate('-1b').apply_rule()
g10 = ['USDJPY', 'EURUSD', 'AUDUSD', 'GBPUSD', 'USDCAD', 'USDNOK', 'NZDUSD', 'USDSEK', 'USDCHF']
#example equity index rics '.FTSE','.N225','.SPX' or '.STOXX50E'
eq_ric = '.N225'
table = calculate_screen(eq_ric='.N225', fx_crosses=g10)
table.style.background_gradient(
subset=['1y Realized Correlation (%)', f'Discount to {eq_ric} (%)'])
sns.scatterplot(x=table['1y Realized Correlation (%)'], y=table[f'Discount to {eq_ric} (%)'], hue=table.index, s=60)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
```
```
#export
import tempfile
from fastai2.basics import *
from fastai2.learner import Callback
from nbdev.showdoc import *
#default_exp callback.captum
```
# Captum
Captum is PyTorch's model interpretability library, available [here](https://captum.ai).
To use it, install the package with
`conda install captum -c pytorch`
or
`pip install captum`
This notebook defines a fastai2 callback that uses Captum's Integrated Gradients to visualize which parts of an input image drive the model's prediction.
```
#export
from captum.attr import IntegratedGradients
from captum.attr import visualization as viz
from matplotlib.colors import LinearSegmentedColormap
#export
class CaptumCallback(Callback):
"Captum Callback for Resnet Interpretation"
def __init__(self):
pass
def after_fit(self):
self.integrated_gradients = IntegratedGradients(self.model)
def visualize(self,inp_data,n_steps=200,cmap_name='custom blue',colors=None,N=256,methods=['original_image','heat_map'],signs=["all", "positive"],outlier_perc=1):
dl = self.dls.test_dl([inp_data],with_labels=True, bs=1)
self.enc_inp,self.enc_preds= dl.one_batch()
dec_data=dl.decode((self.enc_inp,self.enc_preds))
self.dec_img,self.dec_pred=dec_data[0][0],dec_data[1][0]
self.colors = [(0, '#ffffff'),(0.25, '#000000'),(1, '#000000')] if colors is None else colors
        self.attributions_ig = self.integrated_gradients.attribute(self.enc_inp.to(self.dl.device), target=self.enc_preds, n_steps=n_steps)  # use the n_steps argument instead of a hard-coded 200
default_cmap = LinearSegmentedColormap.from_list(cmap_name,
self.colors, N=N)
_ = viz.visualize_image_attr_multiple(np.transpose(self.attributions_ig.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(self.dec_img.numpy(), (1,2,0)),
methods=methods,
cmap=default_cmap,
show_colorbar=True,
signs=signs,
outlier_perc=outlier_perc, titles=[f'Original Image - ({self.dec_pred})', 'IG'])
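# Example usage: train a simple cat-vs-dog classifier on the Oxford-IIIT Pets dataset
# with CaptumCallback attached, then visualize Integrated Gradients attributions for a
# randomly chosen image from the dataset folder.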
from fastai2.vision.all import *
path = untar_data(URLs.PETS)/'images'
def is_cat(x): return x[0].isupper()
dls = ImageDataLoaders.from_name_func(
path, get_image_files(path), valid_pct=0.2, seed=42,
label_func=is_cat, item_tfms=Resize(128))
learn = cnn_learner(dls, resnet34, metrics=error_rate,cbs=CaptumCallback())
learn.fine_tune(1)
paths=list(path.iterdir())
index=random.randint(0,len(paths))
image_path=paths[index]
learn.captum.visualize(image_path,n_steps=1000)
```
---
```
import numpy as np
np.random.seed(42)
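# Note: load_boston was deprecated in scikit-learn 1.0 and removed in 1.2,
# so this cell assumes an older scikit-learn version.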
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
dataset = load_boston()
x = dataset.data[:, 5:6]
y = dataset.target
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3)
regr = LinearRegression()
regr.fit(x_train, y_train)
r2_score = regr.score(x_test, y_test)
print(f"Coef: {regr.coef_}")
print(f"Intercept: {regr.intercept_}")
print(f"R2-Score: {r2_score}")
```
### Non-linear Transformation:
$\vec{x} =\left(\!
\begin{array}{c}
x_1 \\
x_2
\end{array}
\!\right) $ We start from data points with, for example, 2 features.
$\vec{z} = \phi(\vec{x})$ We then apply a polynomial transformation of degree 2.
$\vec{z} =\left(\!
\begin{array}{c}
z_1 \\
\vdots \\
z_6
\end{array}
\!\right) $ In this example this yields 6 features: $1,\, x_1,\, x_2,\, x_1^2,\, x_1 x_2,\, x_2^2$.
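As a quick illustration of the transformation described above (using a small made-up two-feature input, not the Boston data used below), `PolynomialFeatures` with degree 2 maps each sample $(x_1, x_2)$ to exactly those six terms:
```
from sklearn.preprocessing import PolynomialFeatures
import numpy as np

# Small made-up input with two features (one sample per row)
x_toy = np.array([[2.0, 3.0],
                  [1.0, 4.0]])

pf_demo = PolynomialFeatures(degree=2)
z_toy = pf_demo.fit_transform(x_toy)

print(pf_demo.get_feature_names())  # ['1', 'x0', 'x1', 'x0^2', 'x0 x1', 'x1^2']
print(z_toy)                        # first row: [1. 2. 3. 4. 6. 9.]
```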
```
from sklearn.preprocessing import PolynomialFeatures
degree = 3
pf = PolynomialFeatures(degree=degree)
pf.fit(x_train)
x_train_transformed = pf.transform(x_train)
x_test_transformed = pf.transform(x_test)
print(x_train.shape, x_train_transformed.shape)
print(x_test.shape, x_test_transformed.shape)
print(f"Old num features: {pf.n_input_features_}")
print(f"New num features: {pf.n_output_features_}")
print("Old feature names: [x0, x1]")
print(f"New feature names: {pf.get_feature_names()}")
```
#### Polynomial Regression:
$\vec{y} = \mathbf{Z}\vec{\beta} + \vec{\epsilon}$
Here $\mathbf{Z}$ is the dataset after the polynomial feature transformation, so the model remains linear in the parameters $\vec{\beta}$.
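To connect the formula to the code below: fitting `LinearRegression` on the transformed features solves the same least-squares problem for $\vec{\beta}$. A minimal sketch with a small made-up design matrix `Z` (columns $1, x, x^2$) and target vector `y`; the names are placeholders, not variables from this notebook:
```
import numpy as np

# Z: transformed design matrix (bias column, x, x^2); y: targets
Z = np.array([[1.0, 2.0, 4.0],
              [1.0, 3.0, 9.0],
              [1.0, 5.0, 25.0]])
y = np.array([5.0, 10.0, 26.0])

# Least-squares estimate of beta, i.e. argmin ||Z beta - y||^2
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
print(beta)      # plays the role of poly_regr.intercept_ / poly_regr.coef_
print(Z @ beta)  # fitted values
```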
```
poly_regr = LinearRegression()
poly_regr.fit(x_train_transformed, y_train)
r2_score = poly_regr.score(x_test_transformed, y_test)
print(f"Coef: {poly_regr.coef_}")
print(f"Intercept: {poly_regr.intercept_}")
print(f"R2-Score: {r2_score}")
```
#### Visualization
```
def plot_residuals(regr, x_train, y_train, x_test, y_test):
y_pred_train = regr.predict(x_train)
y_pred_test = regr.predict(x_test)
min_val = min(np.min(y_pred_train), np.min(y_pred_test))
max_val = max(np.max(y_pred_train), np.max(y_pred_test))
plt.scatter(y_pred_train, y_pred_train - y_train, color="blue")
plt.scatter(y_pred_test, y_pred_test - y_test, color="red")
plt.hlines(y=0, xmin=min_val, xmax=max_val)
plt.legend(["Train", "Test"])
plt.show()
plot_residuals(regr, x_train, y_train, x_test, y_test)
plot_residuals(poly_regr, x_train_transformed, y_train, x_test_transformed, y_test)
```
#### Plot PolyRegression
```
def f(x: np.ndarray) -> np.ndarray:
return -x**4 * np.cos(x)
x = np.arange(start= 0.0, stop=10.0, step=0.2).reshape(-1, 1)
y = f(x)
colors: list[str] = ["blue", "red", "green", "orange"]
def plot_poly_reg(x: np.ndarray, y: np.ndarray, degree: int) -> None:
# Preprocessing
pf = PolynomialFeatures(degree=degree)
pf.fit(x)
x_transformed = pf.transform(x)
poly_regr = LinearRegression()
poly_regr.fit(x_transformed, y)
r2_score = poly_regr.score(x_transformed, y)
y_pred = poly_regr.predict(x_transformed)
# Plotting
_ = plt.figure(figsize=(8, 8))
plt.plot(x, y, color="lightblue", linewidth=2, label="Ground Truth")
plt.scatter(x, y, color="white", s=30, marker="o", label="Dataset")
plt.plot(x, y_pred, color=colors[degree - 1], linewidth=2, label=f"Degree: {degree}")
plt.show()
# Print prediction metrics
print(f"Coef: {poly_regr.coef_}")
print(f"Intercept: {poly_regr.intercept_}")
print(f"R2-Score: {r2_score}")
print(f"Degree: {degree}\n")
print(f"Feature names: {pf.get_feature_names()}")
for degree in [1, 2, 3, 4]:
plot_poly_reg(x, y, degree)
```
---
# Express an sklearn pipeline as a CodeFlare pipeline
Reference: https://scikit-learn.org/stable/auto_examples/neighbors/plot_nca_classification.html#sphx-glr-auto-examples-neighbors-plot-nca-classification-py
```
%matplotlib inline
```
# Comparing Nearest Neighbors with and without Neighborhood Components Analysis
An example comparing nearest neighbors classification with and without
Neighborhood Components Analysis.
It will plot the class decision boundaries given by a Nearest Neighbors
classifier when using the Euclidean distance on the original features, versus
using the Euclidean distance after the transformation learned by Neighborhood
Components Analysis. The latter aims to find a linear transformation that
maximises the (stochastic) nearest neighbor classification accuracy on the
training set.
```
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import (KNeighborsClassifier,
NeighborhoodComponentsAnalysis)
from sklearn.pipeline import Pipeline
print(__doc__)
n_neighbors = 1
dataset = datasets.load_iris()
X, y = dataset.data, dataset.target
# we only take two features. We could avoid this ugly
# slicing by using a two-dim dataset
X = X[:, [0, 2]]
X_train, X_test, y_train, y_test = \
train_test_split(X, y, stratify=y, test_size=0.7, random_state=42)
h = .01 # step size in the mesh
# Create color maps
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
names = ['KNN', 'NCA, KNN']
classifiers = [Pipeline([('scaler', StandardScaler()),
('knn', KNeighborsClassifier(n_neighbors=n_neighbors))
]),
Pipeline([('scaler', StandardScaler()),
('nca', NeighborhoodComponentsAnalysis()),
('knn', KNeighborsClassifier(n_neighbors=n_neighbors))
])
]
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
for name, clf in zip(names, classifiers):
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.rcParams['pcolor.shading'] ='nearest'
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light, alpha=.8)
# Plot also the training and testing points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold, edgecolor='k', s=20)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("{} (k = {})".format(name, n_neighbors))
plt.text(0.9, 0.1, '{:.2f}'.format(score), size=15,
ha='center', va='center', transform=plt.gca().transAxes)
plt.show()
import ray
import codeflare.pipelines.Datamodel as dm
import codeflare.pipelines.Runtime as rt
from codeflare.pipelines.Datamodel import Xy
from codeflare.pipelines.Datamodel import XYRef
from codeflare.pipelines.Runtime import ExecutionType
ray.shutdown()
ray.init()
n_neighbors = 1
dataset = datasets.load_iris()
X, y = dataset.data, dataset.target
# we only take two features. We could avoid this ugly
# slicing by using a two-dim dataset
X = X[:, [0, 2]]
X_train, X_test, y_train, y_test = \
train_test_split(X, y, stratify=y, test_size=0.7, random_state=42)
h = .01 # step size in the mesh
# Create color maps
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
names = ['KNN', 'NCA, KNN']
pipeline = dm.Pipeline()
node_scalar = dm.EstimatorNode('scaler', StandardScaler())
node_knn = dm.EstimatorNode('knn', KNeighborsClassifier(n_neighbors=n_neighbors))
node_nca = dm.EstimatorNode('nca', NeighborhoodComponentsAnalysis())
node_knn_post_nca = dm.EstimatorNode('knn_post_nca', KNeighborsClassifier(n_neighbors=n_neighbors))
pipeline.add_edge(node_scalar, node_knn)
pipeline.add_edge(node_scalar, node_nca)
pipeline.add_edge(node_nca, node_knn_post_nca)
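# The CodeFlare pipeline is a DAG rather than a linear chain: the scaler node feeds both
# the plain KNN node and the NCA node, and the NCA node feeds a second KNN node, so a
# single FIT execution trains both the 'KNN' and the 'NCA, KNN' variants.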
# create training input
train_input = dm.PipelineInput()
train_input.add_xy_arg(node_scalar, dm.Xy(X_train, y_train))
pipeline_fitted = rt.execute_pipeline(pipeline, ExecutionType.FIT, train_input)
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
name = names[0]
# create test input
test_input = dm.PipelineInput()
test_input.add_xy_arg(node_scalar, dm.Xy(X_test, y_test))
knn_pipeline = rt.select_pipeline(pipeline_fitted, pipeline_fitted.get_xyrefs(node_knn)[0])
knn_score = ray.get(rt.execute_pipeline(knn_pipeline, ExecutionType.SCORE, test_input)
.get_xyrefs(node_knn)[0].get_Xref())
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
# create predict input
meshinput = np.c_[xx.ravel(), yy.ravel()]
meshlabel = np.ones(meshinput.shape[0])  # dummy labels for the mesh points (the label values are not used for PREDICT)
predict_input = dm.PipelineInput()
predict_input.add_xy_arg(node_scalar, dm.Xy(meshinput, meshlabel))
Z = ray.get(rt.execute_pipeline(knn_pipeline, ExecutionType.PREDICT, predict_input)
.get_xyrefs(node_knn)[0].get_Xref())
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.rcParams['pcolor.shading'] ='nearest'
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light, alpha=.8)
# Plot also the training and testing points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold, edgecolor='k', s=20)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("{} (k = {})".format(name, n_neighbors))
plt.text(0.9, 0.1, '{:.2f}'.format(knn_score), size=15,
ha='center', va='center', transform=plt.gca().transAxes)
name = names[1]
nca_pipeline = rt.select_pipeline(pipeline_fitted, pipeline_fitted.get_xyrefs(node_knn_post_nca)[0])
nca_score = ray.get(rt.execute_pipeline(nca_pipeline, ExecutionType.SCORE, test_input)
.get_xyrefs(node_knn_post_nca)[0].get_Xref())
Z = ray.get(rt.execute_pipeline(nca_pipeline, ExecutionType.PREDICT, predict_input)
.get_xyrefs(node_knn_post_nca)[0].get_Xref())
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light, alpha=.8)
# Plot also the training and testing points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold, edgecolor='k', s=20)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("{} (k = {})".format(name, n_neighbors))
plt.text(0.9, 0.1, '{:.2f}'.format(nca_score), size=15,
ha='center', va='center', transform=plt.gca().transAxes)
plt.show()
ray.shutdown()
```
---
```
import pandas as pd
import json
import bokeh.plotting as bpl
import os
import numpy as np
import math
from bokeh.plotting import figure, output_file, show, gridplot
from bokeh.models import ColumnDataSource, LabelSet, HoverTool, Div, Label, CustomJS, Span, BoxAnnotation,LinearAxis, Range1d
from bokeh.models.widgets import Panel, Tabs
import re
os.chdir('../')
import representation_labels.useful_functions as uf
with open('representation_labels/data/cohort_demographics_test_data.json', 'r') as fb:
cohorts_dict = json.load(fb)
with open('representation_labels/data/Reference_population.json', 'r') as fb:
reference_dict = json.load(fb)
```
## Format data
```
ref_dict, graph_dict = uf.clean_data(cohorts_dict, reference_dict)
print(graph_dict['UK Biobank']['Ethnicity'].keys())
```
## Testing bar split plot
```
source = ColumnDataSource(data = graph_dict['UK Biobank']['Ethnicity'])
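# Two-panel "broken axis" layout: plot p covers the 0-15% range and plot q covers 75-110%,
# so the small ethnic-group percentages and the dominant group are both readable.
# Filled bars show the cohort percentages; outlined bars show the UK reference population.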
p = figure(
y_range = list(source.data['Ethnicity']),
title = 'Ethnicity',
x_range = (0,15),
toolbar_location= None
)
p.hbar(
y = 'Ethnicity',
right = 'percent',
height = 0.9,
color = '#003667',
line_alpha = 0,
source = source
)
p.hbar(
y = 'Ethnicity',
right = 'ref percent',
height = 0.9,
fill_alpha = 0,
line_color = '#a0a0a0',
line_width = 4,
line_alpha = 1,
source = source
)
hover2 = HoverTool(tooltips = [
('Ethnicity', '@Ethnicity'),
('Raw values', "@{values}"),
('Percent/%', "@{percent}{0.0}"),
('UK population percent/%', '@{ref percent}{0.0}')
],
mode = 'mouse', name= 'data plot')
p.yaxis.major_label_text_font_size = '10pt'
p.yaxis.major_label_text_font = 'helvetica'
p.yaxis.major_label_text_color = '#a0a0a0'
p.yaxis.major_tick_line_color = None # turn off y-axis major ticks
p.yaxis.minor_tick_line_color = None
p.xaxis.major_tick_line_color = None # turn off x-axis major ticks
p.xaxis.minor_tick_line_color = None
p.yaxis.axis_line_color = None
p.xaxis.axis_line_color = None
p.xaxis.major_label_text_font_size = '0pt'
p.xaxis.major_tick_line_color = None
p.xgrid.grid_line_color = None
p.ygrid.grid_line_color = None
p.outline_line_width = 0
p.background_fill_color = '#f5f5f5'
p.background_fill_alpha = 0.9
p.title.text_color = '#a0a0a0'
p.title.text_font_size = '24pt'
p.title.text_font = "helvetica"
p.add_tools(hover2)
q = figure(
y_range = list(source.data['Ethnicity']),
x_range = (75,110),
toolbar_location= None
)
q.hbar(
y = 'Ethnicity',
right = 'percent',
height = 0.9,
color = '#003667',
legend_label = 'UK Biobank percent',
line_alpha = 0,
source = source
)
q.hbar(
y = 'Ethnicity',
right = 'ref percent',
height = 0.9,
fill_alpha = 0,
line_color = '#a0a0a0',
line_width = 4,
line_alpha = 1,
legend_label = 'UK Population Ratio',
source = source
)
hover3 = HoverTool(tooltips = [
('Ethnicity', '@Ethnicity'),
('Raw values', "@{values}"),
('Percent/%', "@{percent}{0.0}"),
('UK population percent/%', '@{ref percent}{0.0}')
],
mode = 'mouse', name= 'data plot')
q.yaxis.major_label_text_font_size = '0pt'
q.yaxis.major_tick_line_color = None # turn off y-axis major ticks
q.yaxis.minor_tick_line_color = None
q.xaxis.major_tick_line_color = None # turn off x-axis major ticks
q.xaxis.minor_tick_line_color = None
q.yaxis.axis_line_color = None
q.xaxis.axis_line_color = None
q.xaxis.major_label_text_font_size = '0pt'
q.xaxis.major_tick_line_color = None
q.xgrid.grid_line_color = None
q.ygrid.grid_line_color = None
q.outline_line_width = 0
q.background_fill_color = '#f5f5f5'
q.background_fill_alpha = 0.9
q.legend.location = 'top_right'
q.title.text_color = '#a0a0a0'
q.title.text_font_size = '24pt'
q.title.text_font = "helvetica"
q.legend.label_text_font = "helvetica"
q.legend.label_text_color = "#a0a0a0"
q.add_tools(hover2)
final = gridplot([[p,q]])
show(final)
```
## Testing dot log plot
```
dot_dict = graph_dict['UK Biobank']['Ethnicity']
dot_dict['log'] = [math.log(i) for i in dot_dict['percent']]
dot_dict['ref log'] = [math.log(i) for i in dot_dict['ref percent']]
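# Percentages are plotted on a natural-log scale so very small groups remain visible;
# the coordinates below place guide lines and labels at 1%, 10%, 25%, 50% and 100% in log space.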
dot_dict['lab_cords'] =[math.log(i) for i in [1,10,25,50,100]]
dot_dict['lab_cords_y'] = [6]*len(dot_dict['Ethnicity'])
dot_dict['label_perc'] = ['1%','10%','25%','50%','100%']
dot_dict['new_y'] = [1,2,3,4,5]
dot_dict['label_x'] = [-1.7] * 5
print(dot_dict)
source = ColumnDataSource(data = dot_dict)
r = figure(title = 'Ethnicity -log values',x_range=(-1.7,max(source.data['log'])*1.1),y_range=(0.5,6.2))
r.segment('log','new_y','ref log','new_y', color = '#555555',line_width = 3,source = source)
r.circle(x = 'ref log',y = 'new_y', color = '#a0a0a0',size = 10,legend_label = 'UK Population',source = source)
r.circle(x = 'log',y = 'new_y', color = '#003667',size = 10 ,legend_label = 'UK Biobank',source = source)
logone = Span(location = math.log(1),dimension = 'height', line_color = '#555555',line_alpha =0.2, line_width = 3)
log10 = Span(location = math.log(10),dimension = 'height', line_color = '#555555',line_alpha =0.2, line_width = 3)
log25 = Span(location = math.log(25),dimension = 'height', line_color = '#555555',line_alpha =0.2, line_width = 3)
log50 = Span(location = math.log(50),dimension = 'height', line_color = '#555555',line_alpha =0.2, line_width = 3)
log100 = Span(location = math.log(100),dimension = 'height', line_color = '#555555',line_alpha =0.2, line_width = 3)
box1 = BoxAnnotation(top = 1.5, bottom =2.5, fill_color = '#000000',fill_alpha = 0.2)
box2 = BoxAnnotation(top = 3.5, bottom =4.5, fill_color = '#000000',fill_alpha = 0.2)
hover4 = HoverTool(tooltips = [
('Ethnicity', '@Ethnicity'),
('Raw values', "@{values}"),
('Percent/%', "@{percent}{0.0}"),
('UK population percent/%', '@{ref percent}{0.0}')
],
mode = 'mouse', name= 'data plot')
labels = LabelSet(
x='lab_cords',
y='lab_cords_y',
text='label_perc',
text_align='right',
text_font ='helvetica',
text_color = 'grey',
source=source
)
labels2 = LabelSet(
x='label_x',
y='new_y',
text='Ethnicity',
text_align='left',
text_font ='helvetica',
text_color = 'grey',
source=source
)
r.yaxis.major_label_text_font_size = '0pt'
r.yaxis.major_tick_line_color = None
r.yaxis.minor_tick_line_color = None
r.xaxis.major_tick_line_color = None # turn off x-axis major ticks
r.xaxis.minor_tick_line_color = None
r.yaxis.axis_line_color = None
r.xaxis.axis_line_color = None
r.xaxis.major_label_text_font_size = '0pt'
r.xaxis.major_tick_line_color = None
r.xgrid.grid_line_color = None
r.ygrid.grid_line_color = None
r.outline_line_width = 0
r.background_fill_color = '#f5f5f5'
r.background_fill_alpha = 0.9
r.title.text_color = '#a0a0a0'
r.title.text_font_size = '24pt'
r.title.text_font = "helvetica"
r.add_layout(box1)
r.add_layout(box2)
r.add_layout(logone)
r.add_layout(log10)
r.add_layout(log25)
r.add_layout(log50)
r.add_layout(log100)
r.add_layout(labels2)
r.add_tools(hover4)
r.add_layout(labels)
r.legend.location = 'top_left'
r.legend.label_text_font = "helvetica"
r.legend.label_text_color = "#a0a0a0"
output_file('plots/ethnicitylogs.html')
show(r)
```
## Split dotplot
```
dot_dict['label_x'] =[0]*5
dot_dict.keys()
def dot_plot(source,x_range,plot_width):
if x_range[0] < 0:
line_val = x_range[1]
title = 'Ethnicity'
place = 'right'
other_place = 'left'
line_end = 0
else:
line_val = x_range[0]
title = ''
place = 'left'
line_end = 100
other_place = 'right'
if line_end == 0:
line_width = 1.5
else:
line_width = 3
r = figure(title = title,x_range=x_range,y_range=(0.5,6.2),plot_width = plot_width)
r.segment('percent','new_y','ref percent','new_y', color = '#555555',line_width = 3,source = source)
r.circle(x = 'ref percent',y = 'new_y', color = '#a0a0a0',size = 10,legend_label = 'UK Population',source = source)
r.circle(x = 'percent',y = 'new_y', color = '#003667',size = 10 ,legend_label = 'UK Biobank',source = source)
line = Span(location = line_val,dimension = 'height', line_color = '#555555',line_alpha =0.2, line_width = 3)
end_line = Span(location = line_end,dimension = 'height', line_color = '#555555',line_alpha =0.2, line_width = line_width)
box1 = BoxAnnotation(top = 1.5, bottom =2.5, fill_color = '#000000',fill_alpha = 0.1)
box2 = BoxAnnotation(top = 3.5, bottom =4.5, fill_color = '#000000',fill_alpha = 0.1)
hover4 = HoverTool(tooltips = [
('Ethnicity', '@Ethnicity'),
('Raw values', "@{values}"),
('Percent/%', "@{percent}{0.0}"),
('UK population percent/%', '@{ref percent}{0.0}')
],
mode = 'mouse', name= 'data plot')
label = Label(x=line_val, y=6,
text=str(line_val) + '%', render_mode='canvas',text_align = place,
border_line_alpha=0,
background_fill_alpha=0,
text_font = 'helvetica',
text_color = '#a0a0a0'
)
label_end = Label(x=line_end, y=6,
text=str(line_end) + '%', render_mode='canvas',text_align = other_place,
border_line_alpha=0,
background_fill_alpha=0,
text_font = 'helvetica',
text_color = '#a0a0a0'
)
labels2 = LabelSet(
x='label_x',
y='new_y',
text='Ethnicity',
text_align='right',
text_font ='helvetica',
text_color = 'grey',
source=source
)
r.yaxis.major_label_text_font_size = '0pt'
r.yaxis.major_tick_line_color = None
r.yaxis.minor_tick_line_color = None
    r.xaxis.major_tick_line_color = None # turn off x-axis major ticks
r.xaxis.minor_tick_line_color = None
r.yaxis.axis_line_color = None
r.xaxis.axis_line_color = None
r.xaxis.major_label_text_font_size = '0pt'
r.xaxis.major_tick_line_color = None
r.xgrid.grid_line_color = None
r.ygrid.grid_line_color = None
r.outline_line_width = 0
r.background_fill_color = '#f5f5f5'
r.background_fill_alpha = 0.9
r.min_border = 0
r.title.text_color = '#a0a0a0'
r.title.text_font_size = '24pt'
r.title.text_font = "helvetica"
r.add_layout(box1)
r.add_layout(box2)
r.add_layout(line)
r.add_layout(end_line)
if x_range[0] < 0:
r.add_layout(labels2)
r.add_tools(hover4)
r.add_layout(label)
r.add_layout(label_end)
if x_range[0] >0:
r.legend.location = 'top_right'
r.legend.label_text_font = "helvetica"
r.legend.label_text_color = "#a0a0a0"
else:
r.legend.glyph_height = 0
r.legend.glyph_width = 0
r.legend.label_text_font_size = '0pt'
r.legend.background_fill_alpha = 0
r.legend.border_line_alpha = 0
return(r)
source = ColumnDataSource(data = dot_dict)
left_range = (-4.75,10)
right_range = (85,100)
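# Allocate the ~600px total width proportionally to the two data ranges, reserving
# name_width pixels (the negative part of left_range) for the category labels on the left.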
total_length = left_range[1] - left_range[0]+right_range[1] - right_range[0]
name_width = round(abs(left_range[0])/total_length*600)
left_plot_width = round((left_range[1])/total_length*(600 - name_width))
right_plot_width = round((right_range[1] - right_range[0])/total_length*(600 - name_width))
left = dot_plot(source,left_range,left_plot_width + name_width)
right = dot_plot(source,right_range,right_plot_width)
full_plot = gridplot([[left,right]])
output_file('plots/split.html')
show(full_plot)
print(left_plot_width)
print(right_plot_width)
print(name_width)
```
## Boxy Sankey plot
```
box_dict = graph_dict['UK Biobank']['Ethnicity']
perc = box_dict['percent']
ref_p = box_dict['ref percent']
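# Each category becomes a quadrilateral patch: its left edge spans the cumulative cohort
# percentages (at x=0) and its right edge the cumulative reference-population percentages
# (at x=100). The first category's lower edge is pinned at y=80 rather than 0, compressing
# what is presumably the dominant group so the remaining groups stay visible.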
box_dict['y_coords'] = [[80 if i== 0 else sum(perc[:i]) ,
sum(perc[:i+1]),
sum(ref_p[:i+1]),
80 if i == 0 else sum(ref_p[:i])] for i in range(len(perc))]
box_dict['x_coords'] = [[0,0,100,100] for i in range(len(perc))]
box_dict['colours'] = ["#dbeaff","#9dd8e7","#006272","#ffe699","#fec200"]
print(box_dict['y_coords'])
print(box_dict['x_coords'])
source = ColumnDataSource(data = box_dict)
q = figure(title = 'Ethnicity')
colours = ["#cd8845","#9dd8e7","#fec200","#ffe699","#006272"]
q.patches(xs='x_coords', ys='y_coords',color='colours',source = source)
hover4 = HoverTool(tooltips = [
('Ethnicity', '@Ethnicity'),
('Raw values', "@{values}"),
('Percent/%', "@{percent}{0.0}"),
('UK population percent/%', '@{ref percent}{0.0}')
],
mode = 'mouse', name= 'data plot')
q.add_tools(hover4)
source = ColumnDataSource(data = box_dict)
q = figure(title = 'Ethnicity',x_range=(-10,110))
colours = ["#003667","#ed6b00","#87f5fb","#a882dd","#721817"]
q.patches(xs='x_coords', ys='y_coords',color='colours',legend_field = 'Ethnicity',line_color = 'white',line_width =1.5,source = source)
perc_lab_cords = np.array([i[0] for i in source.data['y_coords']] + [100])
perc_x_lab_cords = np.array([0] * len(perc_lab_cords))
y_labels = [str(i)+'%' for i in perc_lab_cords]
perc_lab_cords = perc_lab_cords - 0.5
ref_p_lab_cords = np.array([i[3] for i in source.data['y_coords']] + [100])
ref_p_x_lab_cords = np.array([100] * len(perc_lab_cords))
ref_y_labels = [str(i)+'%' for i in ref_p_lab_cords]
ref_p_lab_cords = ref_p_lab_cords -0.5
hover4 = HoverTool(tooltips = [
('Ethnicity', '@Ethnicity'),
('Raw values', "@{values}"),
('Percent/%', "@{percent}{0.0}"),
('UK population percent/%', '@{ref percent}{0.0}')
],
mode = 'mouse', name= 'data plot')
for i in range(len(perc_lab_cords)):
label = Label(x = perc_x_lab_cords[i],y= perc_lab_cords[i],text = y_labels[i], render_mode='canvas',text_align = 'right',
border_line_alpha=0,background_fill_alpha=0,text_font = 'helvetica', text_color = '#a0a0a0',
text_font_size = '10pt')
label2 = Label(
x = ref_p_x_lab_cords[i],
y = ref_p_lab_cords[i],
text = ref_y_labels[i],
render_mode='canvas',
text_align = 'left',
border_line_alpha=0,
background_fill_alpha=0,
text_font = 'helvetica',
text_color = '#a0a0a0',
text_font_size = '10pt')
q.add_layout(label)
q.add_layout(label2)
dataset_lab = Label(x = 20, y = 100, text = 'UK Biobank', render_mode='canvas',text_align = 'right',
border_line_alpha=0,background_fill_alpha=0,text_font = 'helvetica', text_color = '#a0a0a0')
ref_lab = Label(x = 100, y = 100, text = 'UK Population', render_mode='canvas',text_align = 'right',
border_line_alpha=0,background_fill_alpha=0,text_font = 'helvetica', text_color = '#a0a0a0')
q.yaxis.major_label_text_font_size = '0pt'
q.yaxis.major_tick_line_color = None
q.yaxis.minor_tick_line_color = None
q.xaxis.major_tick_line_color = None # turn off x-axis major ticks
q.xaxis.minor_tick_line_color = None
q.yaxis.axis_line_color = None
q.xaxis.axis_line_color = None
q.xaxis.major_label_text_font_size = '0pt'
q.xaxis.major_tick_line_color = None
q.xgrid.grid_line_color = None
q.ygrid.grid_line_color = None
q.outline_line_width = 0
q.background_fill_color = '#f2f6fe'
q.background_fill_alpha = 1
q.title.text_color = '#a0a0a0'
q.title.text_font_size = '24pt'
q.title.text_font = "helvetica"
q.legend.location = (46,24)
q.legend.label_text_font = "helvetica"
q.legend.label_text_color = "#a0a0a0"
q.add_layout(dataset_lab)
q.add_layout(ref_lab)
q.add_tools(hover4)
output_file('plots/boxysanky.html')
show(q)
print(perc_lab_cords)
print(perc_lab_cords -5)
```
## Multi pie (donut) plot
```
pie_dict = graph_dict['UK Biobank']['Ethnicity']
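# Convert percentages to wedge angles in radians; start/end angles are cumulative sums so
# the outer (cohort) ring and the inner (reference) ring each sweep out a full circle.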
pie_dict['angles'] = [i/sum(pie_dict['percent']) * 2*math.pi for i in pie_dict['percent']]
pie_dict['start_angle'] = [0] + [sum(pie_dict['angles'][:i+1]) for i in range(len(pie_dict['angles']) -1)]
pie_dict['end_angle'] = [sum(pie_dict['angles'][:i+1]) for i in range(len(pie_dict['angles']))]
pie_dict['ref_angles'] = [i/sum(pie_dict['ref percent']) * 2*math.pi for i in pie_dict['ref percent']]  # normalize by the reference total so the inner ring also closes the circle
pie_dict['ref_start_angle'] = [0] + [sum(pie_dict['ref_angles'][:i+1]) for i in range(len(pie_dict['ref_angles']) -1)]
pie_dict['ref_end_angle'] = [sum(pie_dict['ref_angles'][:i+1]) for i in range(len(pie_dict['ref_angles']))]
pie_dict['colours'] = colours
print(pie_dict['start_angle'])
source = ColumnDataSource(data = pie_dict)
s = figure(title = 'Ethnicity',x_range = (-0.6,0.8),y_range =(-0.6,0.6) )
s.annular_wedge(
x =0,
y=0,
inner_radius = 0.31,
outer_radius = 0.5,
start_angle = 'start_angle',
end_angle ='end_angle',
color = 'colours',
legend_field = 'Ethnicity',
source = source
)
s.annular_wedge(
x =0,
y=0,
inner_radius = 0.1,
outer_radius = 0.29,
start_angle = 'ref_start_angle',
end_angle ='ref_end_angle',
color = 'colours',
source = source
)
hover4 = HoverTool(tooltips = [
('Ethnicity', '@Ethnicity'),
('Raw values', "@{values}"),
('Percent/%', "@{percent}{0.0}"),
('UK population percent/%', '@{ref percent}{0.0}')
],
mode = 'mouse', name= 'data plot')
dataset_lab = Label(x = 0.6, y = 0, text = 'UK Biobank', render_mode='canvas',text_align = 'right',
border_line_alpha=0,background_fill_alpha=0,text_font = 'helvetica', text_color = '#a0a0a0')
ref_lab = Label(x = 0.25, y = 0, text = 'UK Population', render_mode='canvas',text_align = 'right',
border_line_alpha=0,background_fill_alpha=0,text_font = 'helvetica', text_color = '#a0a0a0')
s.yaxis.major_label_text_font_size = '0pt'
s.yaxis.major_tick_line_color = None
s.yaxis.minor_tick_line_color = None
s.xaxis.major_tick_line_color = None # turn off x-axis major ticks
s.xaxis.minor_tick_line_color = None
s.yaxis.axis_line_color = None
s.xaxis.axis_line_color = None
s.xaxis.major_label_text_font_size = '0pt'
s.xaxis.major_tick_line_color = None
s.xgrid.grid_line_color = None
s.ygrid.grid_line_color = None
s.outline_line_width = 0
s.background_fill_color = '#f5f5f5'
s.background_fill_alpha = 0.9
s.title.text_color = '#a0a0a0'
s.title.text_font_size = '24pt'
s.title.text_font = "helvetica"
s.legend.label_text_font = "helvetica"
s.legend.label_text_color = "#a0a0a0"
s.add_layout(dataset_lab)
s.add_layout(ref_lab)
s.add_tools(hover4)
output_file('plots/donuts.html')
show(s)
```
---
<img src='./img/intel-logo.jpg' width=10%>
<font size=7><div align='left'>Python Basics Course<br>
<br>
<font size=6><div align='left'>03. Data Structures<br>
<img src='./img/파이썬.png' width=30%>
<font size=3><div align='right'>
<div align='right'>Minsuk Sung</div>
<div align='right'>Hoesung Ryu</div>
<div align='right'>Ike Lee</div>
<h1>Course Outline<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#리스트(List)" data-toc-modified-id="리스트(List)-1"><span class="toc-item-num">1 </span>리스트(List)</a></span><ul class="toc-item"><li><span><a href="#리스트-기본연산" data-toc-modified-id="리스트-기본연산-1.1"><span class="toc-item-num">1.1 </span>리스트 기본연산</a></span></li><li><span><a href="#리스트-원소-추가하기" data-toc-modified-id="리스트-원소-추가하기-1.2"><span class="toc-item-num">1.2 </span>리스트 원소 추가하기</a></span></li><li><span><a href="#2차원,-3차원-리스트" data-toc-modified-id="2차원,-3차원-리스트-1.3"><span class="toc-item-num">1.3 </span>2차원, 3차원 리스트</a></span></li><li><span><a href="#POP-함수" data-toc-modified-id="POP-함수-1.4"><span class="toc-item-num">1.4 </span>POP 함수</a></span></li><li><span><a href="#insert-함수" data-toc-modified-id="insert-함수-1.5"><span class="toc-item-num">1.5 </span>insert 함수</a></span></li><li><span><a href="#sort함수" data-toc-modified-id="sort함수-1.6"><span class="toc-item-num">1.6 </span>sort함수</a></span></li><li><span><a href="#index-함수" data-toc-modified-id="index-함수-1.7"><span class="toc-item-num">1.7 </span>index 함수</a></span></li><li><span><a href="#remove-함수" data-toc-modified-id="remove-함수-1.8"><span class="toc-item-num">1.8 </span>remove 함수</a></span></li><li><span><a href="#len--함수" data-toc-modified-id="len--함수-1.9"><span class="toc-item-num">1.9 </span>len 함수</a></span></li></ul></li><li><span><a href="#튜플" data-toc-modified-id="튜플-2"><span class="toc-item-num">2 </span>튜플</a></span><ul class="toc-item"><li><span><a href="#Tuple-Assignment" data-toc-modified-id="Tuple-Assignment-2.1"><span class="toc-item-num">2.1 </span>Tuple Assignment</a></span></li></ul></li><li><span><a href="#딕셔너리(Dict)" data-toc-modified-id="딕셔너리(Dict)-3"><span class="toc-item-num">3 </span>딕셔너리(Dict)</a></span></li><li><span><a href="#집합" data-toc-modified-id="집합-4"><span class="toc-item-num">4 </span>집합</a></span></li></ul></div>
---
## List
### Basic list operations
```
# Empty list
x = list()
y = []
print('x:',x)
print('y:',y)
# Elements are separated by commas (,)
x = [10,20,30]
print(x)
# Elements of a list do not all need to be the same type
# i.e., any type is allowed
y = [100,3.14,'abc',True,x]
print(y)
band_list = ['호피폴라','애프터문','루시','모네','퍼플레인','피플 온 더 브릿지']
print(band_list)
# First element
band_list[0]
# print() shows the value without quotation marks
print(band_list[0])
# Negative indexing: -1 is the last element
band_list[-1]
# Slicing
# Get elements 0 and 1
band_list[0:2]
# From index 1 up to (but not including) index 4, take every 2nd element
band_list = ['호피폴라','애프터문','루시','모네','퍼플레인','피플 온 더 브릿지', '사과']
band_list[1:4:2]
# Take every 3rd element
band_list[::3]
```
### Adding elements to a list
<font size=5>`+` `append` `extend`
```
new_list = ['루시','호피폴라','애프터문','루시']
# Elements can be added to a list with the + operator
new_list = new_list + ['모네']
new_list
# append adds a single element to the end of the list
new_list.append('퍼플레인')
new_list
# extend also adds to the end of the list, but takes an iterable
# Difference from append: append adds its argument as one element, extend adds each item of the iterable
new_list.extend(['옥상달빛','볼빨간사춘기'])
new_list
```
### 2D and 3D lists
```
# 2D list
l1 = ['이름','성적','수학']
l2 = ['홍길동',70,80]
l3 = ['김돌쇠',50,50]
l4 = ['강철수',90,100]
s = [l1,l2,l3,l4]
print(s)
# 3D list
s1 = [l1,l2,l3,l4]
s2 = [l2,l3,l4,l1]
s3 = [l3,l4,l1,l2]
s4 = [l4,l1,l2,l3]
s = [s1,s2,s3,s4]
s # three closing brackets at the end tell you at a glance it is 3-dimensional
```
### The pop function
```
bands = ['호피폴라','애프터문','루시','모네','퍼플레인','피플 온 더 브릿지']
# pop() removes and returns the last element
print("마지막 원소: ",bands.pop())
print(bands)
```
### The insert function
Why do we need insert in addition to append and extend?<br> To add an element at a specific position.
```
bands = ['호피폴라','애프터문','루시','모네','퍼플레인','피플 온 더 브릿지']
print('원래 리스트',bands)
# Insert at index 3
bands.insert(3,'옥상달빛')
print('3번째에 추가된 리스트',bands)
```
### The sort function
Sorts the list in ascending or descending order.
```
bands = ['호피폴라','애프터문','루시','모네','퍼플레인','피플 온 더 브릿지']
# Sort in ascending order
bands.sort()
print(bands)
# Sort in descending order
bands.sort(reverse=True) # sort in reverse (descending) order
print(bands)
```
### The index function
```
# Find the index of an element
bands = ['호피폴라','애프터문','루시','모네','퍼플레인','피플 온 더 브릿지']
bands.index('루시')
```
### The remove function
```
bands = ['호피폴라','애프터문','루시','모네','퍼플레인','피플 온 더 브릿지','모네']
bands.remove('모네') # removes only the first occurrence, not all duplicates
bands
```
### The len function
```
bands = ['호피폴라','애프터문','루시','모네','퍼플레인','피플 온 더 브릿지']
print(len(bands)) # length of the list
print(len(bands[0])) # length of the string at index 0
```
## Tuple
A tuple is an immutable list: its contents cannot be modified.
Changing, adding, removing, and sorting elements are all disallowed.
```
# Create an empty tuple
x = tuple()
y = ()
print('x:',x)
print('y:',y)
t_days = ('Sun','Mon','Tue','Wed','Thur','Fri','Sat')
t_days
t_days[1]
```
Because tuples are immutable, the `del` statement cannot delete an item:
del t_days[0]
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-57-eac9e44de8ae> in <module>
----> 1 del t_days[0]
TypeError: 'tuple' object doesn't support item deletion
```
### Tuple Assignment
```
a,b,c = 1,2,3
print(a,b,c)
```
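Tuple assignment also lets you swap two variables without a temporary variable. A small extra example:
```
# Swap two values in one line using tuple assignment
a, b = 10, 20
a, b = b, a
print(a, b)  # 20 10
```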
## Dictionary (Dict)
A dictionary is made up of key-value pairs.
Keys must be unique, but values do not have to be.
```
# Create an empty dictionary
x = dict()
y = {}
print('x:',x)
print('y:',y)
singer = {'이름': '김광석',
'출생년도': 1980,
'데뷔곡': '너에게'
}
print(singer['이름'])
print(singer['출생년도'])
print(singer['데뷔곡'])
# Add an item or modify a value
singer['성별'] = '남자' # assign a new key-value pair
singer['출생년도'] = 1964 # change the value of an existing key
singer
# Delete a key from the dict
del singer['출생년도']
singer
list(singer.items())
singer.keys()
singer.values()
# 'in' checks whether a value is present
'김광석' in singer.values()
```
## Set
Sets are unordered, so indexing is not possible.
Duplicate elements are removed automatically.
```
# Create an empty set
a = set()
b = {}  # note: {} creates an empty dict, not an empty set; use set() for that
player_set = set([1,2,3,3,4,2,4,5,5,6])
player_set
a = set([1,2,3,4])
b = set([3,4,5,6])
# Intersection
print(a&b)
print(a.intersection(b))
# Union
print(a | b )
print(a.union(b))
# Difference
print(a-b)
print(a.difference(b))
# Symmetric difference
print(a^b )
print(a.symmetric_difference(b))
```
| github_jupyter | 0.087982 | 0.952794 |
# Introduction to XGBoost with RAPIDS
#### By Paul Hendricks
-------
While the world’s data doubles each year, CPU computing has hit a brick wall with the end of Moore’s law. For the same reasons, scientific computing and deep learning has turned to NVIDIA GPU acceleration, data analytics and machine learning where GPU acceleration is ideal.
NVIDIA created RAPIDS – an open-source data analytics and machine learning acceleration platform that leverages GPUs to accelerate computations. RAPIDS is based on Python, has pandas-like and Scikit-Learn-like interfaces, is built on Apache Arrow in-memory data format, and can scale from 1 to multi-GPU to multi-nodes. RAPIDS integrates easily into the world’s most popular data science Python-based workflows. RAPIDS accelerates data science end-to-end – from data prep, to machine learning, to deep learning. And through Arrow, Spark users can easily move data into the RAPIDS platform for acceleration.
In this notebook, we'll show the acceleration one can gain by using GPUs with XGBoost in RAPIDS.
**Table of Contents**
* Setup
* Load Libraries
* Load/Simulate Data
* Load Data
* Simulate Data
* Split Data
* Check Dimensions
* Convert NumPy data to DMatrix format
* Set Parameters
* Train Model
* Conclusion
## Setup
This notebook was tested using the `nvcr.io/nvidia/rapidsai/rapidsai:0.5-cuda10.0-runtime-ubuntu18.04-gcc7-py3.7` Docker container from [NVIDIA GPU Cloud](https://ngc.nvidia.com) and run on the NVIDIA Tesla V100 GPU. Please be aware that your system may be different and you may need to modify the code or install packages to run the below examples.
If you think you have found a bug or an error, please file an issue here: https://github.com/rapidsai/notebooks/issues
To start, let's see what hardware we're working with.
```
!nvidia-smi
```
Next, let's see what CUDA version we have.
```
!nvcc --version
!pip install --user xgboost
```
## Load Libraries
Let's load some of the libraries within the RAPIDs ecosystem and see which versions we have.
```
import numpy as np; print('numpy Version:', np.__version__)
import pandas as pd; print('pandas Version:', pd.__version__)
import xgboost as xgb; print('XGBoost Version:', xgb.__version__)
```
## Load/Simulate data
### Load Data
We can load the data using `pandas.read_csv`.
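For example, if the data were stored as a local CSV file (the path below is just a placeholder), it could be loaded like this:
```
# Hypothetical example: load a local CSV and convert it to a float32 NumPy array
df = pd.read_csv('/path/to/your_data.csv')
dataset = df.values.astype(np.float32)
```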
### Simulate Data
Alternatively, we can simulate data for our train and validation datasets. The features will be tabular with `n_rows` and `n_columns` in the training dataset, where each value is either of type `np.float32` if the data is numerical or `np.uint8` if the data is categorical. Both numerical and categorical data can also be combined; for this experiment, we have ignored this combination.
```
# helper function for simulating data
def simulate_data(m, n, k=2, numerical=False):
if numerical:
features = np.random.rand(m, n)
else:
features = np.random.randint(2, size=(m, n))
labels = np.random.randint(k, size=m)
return np.c_[labels, features].astype(np.float32)
# helper function for loading data
def load_data(filename, n_rows):
if n_rows >= 1e9:
df = pd.read_csv(filename)
else:
df = pd.read_csv(filename, nrows=n_rows)
return df.values.astype(np.float32)
# settings
LOAD = False
n_rows = int(1e5)
n_columns = int(100)
n_categories = 2
%%time
if LOAD:
dataset = load_data('/tmp', n_rows)
else:
dataset = simulate_data(n_rows, n_columns, n_categories)
print(dataset.shape)
```
### Split Data
We'll split our dataset into a 80% training dataset and a 20% validation dataset.
```
# identify shape and indices
n_rows, n_columns = dataset.shape
train_size = 0.80
train_index = int(n_rows * train_size)
# split X, y
X, y = dataset[:, 1:], dataset[:, 0]
del dataset
# split train data
X_train, y_train = X[:train_index, :], y[:train_index]
# split validation data
X_validation, y_validation = X[train_index:, :], y[train_index:]
```
### Check Dimensions
We can check the dimensions and proportions of our training and validation datasets.
```
# check dimensions
print('X_train: ', X_train.shape, X_train.dtype, 'y_train: ', y_train.shape, y_train.dtype)
print('X_validation', X_validation.shape, X_validation.dtype, 'y_validation: ', y_validation.shape, y_validation.dtype)
# check the proportions
total = X_train.shape[0] + X_validation.shape[0]
print('X_train proportion:', X_train.shape[0] / total)
print('X_validation proportion:', X_validation.shape[0] / total)
```
## Convert NumPy data to DMatrix format
With our data simulated and formatted as NumPy arrays, our next step is to convert this to a `DMatrix` object that XGBoost can work with. We can instantiate an object of the `xgboost.DMatrix` by passing in the feature matrix as the first argument followed by the label vector using the `label=` keyword argument. To learn more about XGBoost's support for data structures other than NumPy arrays, see the documentation for the Data Interface:
https://xgboost.readthedocs.io/en/latest/python/python_intro.html#data-interface
```
%%time
dtrain = xgb.DMatrix(X_train, label=y_train)
dvalidation = xgb.DMatrix(X_validation, label=y_validation)
```
## Set Parameters
There are a number of parameters that can be set before XGBoost can be run.
* General parameters relate to which booster we are using to do boosting, commonly tree or linear model
* Booster parameters depend on which booster you have chosen
* Learning task parameters decide on the learning scenario. For example, regression tasks may use different parameters than ranking tasks.
For more information on the configurable parameters within the XGBoost module, see the documentation here:
https://xgboost.readthedocs.io/en/latest/parameter.html
```
# instantiate params
params = {}
# general params
general_params = {'silent': 1}
params.update(general_params)
# booster params
n_gpus = 1
booster_params = {}
if n_gpus != 0:
booster_params['tree_method'] = 'gpu_hist'
booster_params['n_gpus'] = n_gpus
params.update(booster_params)
# learning task params
learning_task_params = {'eval_metric': 'auc', 'objective': 'binary:logistic'}
params.update(learning_task_params)
print(params)
```
## Train Model
Now it's time to train our model! We can use the `xgb.train` function and pass in the parameters, training dataset, the number of boosting iterations, and the list of items to be evaluated during training. For more information on the parameters that can be passed into `xgb.train`, check out the documentation:
https://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.train
```
# model training settings
evallist = [(dvalidation, 'validation'), (dtrain, 'train')]
num_round = 10
%%time
bst = xgb.train(params, dtrain, num_round, evallist)
```
## Conclusion
To learn more about RAPIDS, be sure to check out:
* [Open Source Website](http://rapids.ai)
* [GitHub](https://github.com/rapidsai/)
* [Press Release](https://nvidianews.nvidia.com/news/nvidia-introduces-rapids-open-source-gpu-acceleration-platform-for-large-scale-data-analytics-and-machine-learning)
* [NVIDIA Blog](https://blogs.nvidia.com/blog/2018/10/10/rapids-data-science-open-source-community/)
* [Developer Blog](https://devblogs.nvidia.com/gpu-accelerated-analytics-rapids/)
* [NVIDIA Data Science Webpage](https://www.nvidia.com/en-us/deep-learning-ai/solutions/data-science/)
| github_jupyter | 0.377541 | 0.986992 |
```
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
import scipy
from path_explain.utils import set_up_environment
from preprocess import mitbih_dataset
from plot import summary, scatter
set_up_environment(visible_devices='3')
x_train, y_train, x_test, y_test = mitbih_dataset()
original_model = tf.keras.models.load_model('model.h5')
y_pred = original_model.predict(x_test)
y_pred_max = np.argmax(y_pred, axis=-1)
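# Collect up to 100 correctly classified test samples for each of the 5 classes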
batch_inputs_by_class = []
for c in range(5):
class_mask = np.logical_and(y_test == c,
y_pred_max == y_test)
class_indices = np.where(class_mask)[0][:100]
batch_samples = x_test[class_indices]
batch_inputs_by_class.append(batch_samples)
batch_inputs_by_class = np.stack(batch_inputs_by_class, axis=0)
attributions_array = []
interactions_array = []
for c in range(5):
attributions = np.load(f'attributions_{c}.npy')
interactions = np.load(f'interactions_{c}.npy')
attributions_array.append(attributions)
interactions_array.append(interactions)
attributions_by_class = np.stack(attributions_array, axis=0)
interactions_by_class = np.stack(interactions_array, axis=0)
batch_inputs_by_class = np.squeeze(batch_inputs_by_class)
attributions_by_class = np.squeeze(attributions_by_class)
interactions_by_class = np.squeeze(interactions_by_class)
c = 1
i = 3
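# Visualize one example (class c, sample i): raw signal, attribution-colored samples, and interaction matrices with and without the diagonal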
fig = plt.figure(figsize=(16, 10))
gs = mpl.gridspec.GridSpec(2, 3)
ax1 = fig.add_subplot(gs[0, 0:2])
ax2 = fig.add_subplot(gs[1, 0:2])
ax3 = fig.add_subplot(gs[0, 2])
ax4 = fig.add_subplot(gs[1, 2])
ax1.plot(np.arange(batch_inputs_by_class.shape[-1]),
batch_inputs_by_class[c, i])
ax2.scatter(x=np.arange(batch_inputs_by_class.shape[-1]),
y=batch_inputs_by_class[c, i],
c=attributions_by_class[c, i])
zero_diagonal_interactions = interactions_by_class[c, i].copy()
np.fill_diagonal(zero_diagonal_interactions, 0.0)
ax3.imshow(interactions_by_class[c, i])
ax4.imshow(zero_diagonal_interactions)
def bin_dimensions(array, join_ranges):
array = array.copy()
delete_slices = []
for join_range in join_ranges:
array[:, :, join_range[0]] = np.sum(array[:, :, join_range[0]:join_range[1]], axis=2)
delete_slices.append(np.arange(join_range[0] + 1, join_range[1]))
delete_slices = np.concatenate(delete_slices, axis=0)
array = np.delete(array, delete_slices, axis=2)
return array
def bin_dimensions_matrix(array, join_ranges):
array = array.copy()
delete_slices = []
for join_range in join_ranges:
array[:, :, join_range[0], :] = np.sum(array[:, :, join_range[0]:join_range[1], :], axis=2)
array[:, :, :, join_range[0]] = np.sum(array[:, :, :, join_range[0]:join_range[1]], axis=3)
delete_slices.append(np.arange(join_range[0] + 1, join_range[1]))
delete_slices = np.concatenate(delete_slices, axis=0)
array = np.delete(array, delete_slices, axis=2)
array = np.delete(array, delete_slices, axis=3)
return array
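# Bin the 187 time steps into ~15 equal windows so per-bin attributions and interactions are easier to inspect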
num_bins = 15
step = int(np.ceil(187 / num_bins))
bins = [(i * step, min(187, (i + 1) * step)) for i in range(num_bins)]
binned_attributions_by_class = bin_dimensions(attributions_by_class, bins)
binned_interactions_by_class = bin_dimensions_matrix(interactions_by_class, bins)
fig, axs = plt.subplots(1, 5, figsize=(20, 4))
for i in range(5):
mean_interactions_by_class = np.mean(np.abs(attributions_by_class[i]), axis=0)
ax = axs[i]
ax.imshow(np.tile(np.expand_dims(mean_interactions_by_class, axis=0), reps=(187, 1)))
ax.set_title('Attributions in class {}'.format(i))
fig, axs = plt.subplots(1, 5, figsize=(20, 4))
for i in range(5):
mean_interactions_by_class = np.mean(np.abs(interactions_by_class[i]), axis=0)
zeroed_mean_interactions_by_class = mean_interactions_by_class.copy()
np.fill_diagonal(zeroed_mean_interactions_by_class, 0.0)
ax = axs[i]
ax.imshow(zeroed_mean_interactions_by_class)
ax.set_title('Interaction map in class {}'.format(i))
fig, axs = plt.subplots(1, 5, figsize=(20, 4))
for i in range(5):
ax = axs[i]
ax.imshow(np.mean(np.abs(binned_interactions_by_class[i]), axis=0))
ax.set_title('Binned interaction map in class {}'.format(i))
def get_bin_summary_statistics(array, join_ranges):
mean = []
sd = []
maximum = []
minimum = []
max_range = []
skewness = []
kurtosis = []
for join_range in join_ranges:
ranged_array = array[:, :, join_range[0]:join_range[1]]
mean.append(np.mean(ranged_array, axis=-1))
sd.append(np.std(ranged_array, axis=-1))
maximum.append(np.max(ranged_array, axis=-1))
minimum.append(np.min(ranged_array, axis=-1))
max_range.append(np.max(ranged_array, axis=-1) - np.min(ranged_array, axis=-1))
skewness.append(scipy.stats.skew(ranged_array, axis=-1))
kurtosis.append(scipy.stats.kurtosis(ranged_array, axis=-1))
stats_dict = {
'mean': np.stack(mean, axis=-1),
'sd': np.stack(sd, axis=-1),
'maximum': np.stack(maximum, axis=-1),
'minimum': np.stack(minimum, axis=-1),
'range': np.stack(max_range, axis=-1),
'skewness': np.stack(skewness, axis=-1),
'kurtosis': np.stack(kurtosis, axis=-1)
}
return stats_dict
binned_attribution_stats_by_class = get_bin_summary_statistics(attributions_by_class, bins)
# Per-bin statistics of the raw input signals, used as the x-axis in the scatter plots below
binned_input_stats_by_class = get_bin_summary_statistics(batch_inputs_by_class, bins)
c = 3
fig, axs = plt.subplots(7, 7, figsize=(49, 49))
for i in range(7):
for j, stat in enumerate(binned_input_stats_by_class.keys()):
ax = axs[i, j]
ax.set_xlabel('Statistic `{}` of bin'.format(stat))
ax.set_ylabel('Max attribution to bin {}: ({}, {})'.format(i, bins[i][0], bins[i][1]))
ax.scatter(binned_input_stats_by_class[stat][c, :, i],
binned_attribution_stats_by_class['maximum'][c, :, i])
c = 2
fig, axs = plt.subplots(7, 7, figsize=(49, 49))
for i in range(7):
for j, stat in enumerate(binned_input_stats_by_class.keys()):
ax = axs[i, j]
ax.set_xlabel('Statistic `{}` of bin'.format(stat))
ax.set_ylabel('Mean attribution to bin {}: ({}, {})'.format(i, bins[i][0], bins[i][1]))
ax.scatter(binned_input_stats_by_class[stat][c, :, i],
binned_attribution_stats_by_class['mean'][c, :, i])
```
| github_jupyter | 0.537041 | 0.545286 |
### Plotly chart wrappers (python)
#### Motivation
This is a set of functions wrapping the powerful customization options of Plotly charts into single-function calls with a few parameters that should produce good-looking charts covering at least 50% of the typical charting of a data analyst. The rationale behind writing the wrappers is twofold:
1) To streamline analyst work and make the generation of lucid charts accessible even to analysts who are not familiar with the intricacies of Plotly's Python library
2) To set up a consistent visual style that would be easily customizable to fit any corporate design by pre-defining colors and font styles used in all the charts
#### Contents
0) Style setup
1) Bar charts (stacked, grouped, percentage)
2) Line charts
3) Scatter plots
4) Box plots
#### Reference
1) [Plotly reference](https://plot.ly/python/)
2) [IBM Sample Datasets](https://www.ibm.com/communities/analytics/watson-analytics-blog/guide-to-sample-datasets/)
```
# Get the sample data
import pandas as pd
import urllib.request
url = "https://community.watsonanalytics.com/wp-content/uploads/2015/03/WA_Fn-UseC_-Marketing-Campaign-Eff-UseC_-FastF.csv"
download_path = "data/WA_Fn-UseC_-Marketing-Campaign-Eff-UseC_-FastF.csv"
try:
dat = pd.read_csv(download_path)
except:
urllib.request.urlretrieve(url, download_path)
dat = pd.read_csv(download_path)
import pandas as pd
import numpy as np
import plotly.offline as py
from plotly.graph_objs import *
from plotly import tools
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
%matplotlib inline
dat.head()
dat.describe()
sales_per_market_and_promo = dat.groupby(["MarketID","Promotion"])["SalesInThousands"].sum().reset_index()
sales_per_market_and_promo.head()
```
### 0. Style setup: Fonts and colors
### 1. Bar charts
```
# Generate a discrete colorscale for "the rest"
def generate_discrete_scl(rgb_start, rgb_end, n_levels):
"""
Generates an RGB colorscale of n_levels between the specified
start and end colors provided as 3-number tuples: (0,0,0)
Returns a list of RGB color code tuples of size n_levels
['rgb(229,245,249)',...]
"""
col_list = zip(rgb_start,rgb_end)
scl_rgb = []
for elem in col_list:
try:
incr = (elem[1] - elem[0])//(n_levels-1)
scl_single = []
            x = elem[0]  # start at this channel's start value
            scl_single.append(x)
            for item in range(1,n_levels):
                x = elem[0] + item*incr
                scl_single.append(x)
            scl_single[-1] = elem[1]  # end exactly at this channel's end value
scl_rgb.append(scl_single)
except:
print("Incorrect number of levels!")
scl_out = ["rgb" + str(item) for item in list(zip(scl_rgb[0],scl_rgb[1],scl_rgb[2]))]
return (scl_out)
def generate_title_string(x,y,group):
return y + ' per ' + x + ' grouped by ' + group
```
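As a quick sanity check of the colorscale helper (the printed output assumes the corrected function above):
```
# A 4-step grey scale between the default "rest" colors
print(generate_discrete_scl((37, 37, 37), (200, 200, 200), 4))
# ['rgb(37, 37, 37)', 'rgb(91, 91, 91)', 'rgb(145, 145, 145)', 'rgb(200, 200, 200)']
```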
Fonts:
Plotly supports the following font families out of the box: "Arial", "Balto", "Courier New", "Droid Sans", "Droid Serif", "Droid Sans Mono", "Gravitas One", "Old Standard TT", "Open Sans", "Overpass", "PT Sans Narrow", "Raleway", "Times New Roman".
```
# Overall style setting
style_config = dict(
font_family = "Helvetica, Droid Sans",
font_size = 14,
accent_1_color = "rgb(230,85,13)",
accent_2_color = "rgb(49,130,189)",
output_type = "plot") # enum: "html", "plot": "html" returns code for further rendering, "plot" draws a plot inline
# Chart
def plotly_bar(config, df, x, y, group, barmode, accent_1=None, accent_2=None,
title = None, xaxis_title = None, yaxis_title = None):
groups =df[group].unique()
if (accent_1 is not None) or (accent_2 is not None):
rest = sorted([item for item in groups if item not in [accent_1, accent_2]])[::-1]
else:
rest = sorted([item for item in groups])[::-1]
scl = generate_discrete_scl((37,37,37),(200,200,200),len(rest))
rest_colors = dict(zip(rest,scl))
traces = []
# All other traces
for item in rest:
text = list(df[df[group]==item][y])
trace_idx = 0
traces.append(
Bar(x = df[df[group]==item][x],
y = df[df[group]==item][y],
name = str(item),
marker=dict(
color=list(np.repeat(rest_colors[item],len(df[df[group]==item][y])))
)
)
)
trace_idx = trace_idx + 1
# Append traces for both accents, the order is important for rendering
# Only these accents get text overlay and custom colors
if config["accent_2_color"] is None:
accent_2_color = "rgb(49,130,189)"
if config["accent_1_color"] is None:
accent_1_color = "rgb(230,85,13)"
# Accent 2
if accent_2 is not None:
traces.append(
Bar(x = df[df[group]==accent_2][x],
y = df[df[group]==accent_2][y],
name = str(accent_2),
text = df[df[group]==accent_2][y].round(1),textposition = 'auto',constraintext="none",
marker=dict(color=list(np.repeat(config["accent_2_color"],len(df[df[group]==accent_2][y]))))
)
)
# Accent 1
if accent_1 is not None:
traces.append(
Bar(x = df[df[group]==accent_1][x],
y = df[df[group]==accent_1][y],
name = str(accent_1),
text = df[df[group]==accent_1][y].round(1),textposition = 'auto',constraintext="none",
marker=dict(color=list(np.repeat(config["accent_1_color"],len(df[df[group]==accent_1][y]))))
)
)
data = traces
if xaxis_title is None:
xaxis_title = x
if yaxis_title is None:
yaxis_title = y
if title is None:
title = generate_title_string(x,y,group)
layout = Layout(
title=title,
xaxis=dict(
title= xaxis_title,
titlefont=dict(
size=config["font_size"]-1,
color='rgb(107, 107, 107)'
),
tickfont=dict(
size=config["font_size"]-1,
color='rgb(107, 107, 107)'
)
),
yaxis=dict(
title=yaxis_title,
titlefont=dict(
size=config["font_size"]-1,
color='rgb(107, 107, 107)'
),
tickfont=dict(
size=config["font_size"]-1,
color='rgb(107, 107, 107)'
)
),
legend=dict(orientation="h",x=0,y=-0.22,
bgcolor='rgba(0, 0, 0, 0)',
bordercolor='rgba(0, 0, 0, 0)'
),
margin=dict(t=100),
barmode=barmode,bargap=0.15,bargroupgap=0.1,
hovermode="closest", hoverlabel = dict(font=dict(family=config["font_family"], size=config["font_size"])),
font=dict(family=config["font_family"], size=config["font_size"]))
fig = Figure(data=data, layout=layout)
if config["output_type"] == "html":
print("HTML output not enabled yet")
else:
py.iplot(fig, filename='bar')
# General chart setup
df = sales_per_market_and_promo
x = "Promotion" # "Promotion"
y = "SalesInThousands"
group = "MarketID" # variable to make groups
barmode = "group" # enum: "group", "stack", "relative" (for positive and negative values)
# Accents (selected levels of grouping variable)
accent_1 = 1
accent_2 = 2
plotly_bar(config=style_config, df=df, x=x, y=y,group=group,barmode=barmode,accent_1=1,accent_2=5)
plotly_bar(config=style_config, df=df, x=x, y=y,group=group,barmode="stack",accent_1=1,accent_2=5)
```
### 2. Line charts
```
def plotly_line(config, df, x, y, group, accent_1=None, accent_2=None, accent_linemode=None,
title = None, xaxis_title = None, yaxis_title = None):
groups = df[group].unique()
if (accent_1 is not None) or (accent_2 is not None):
rest = sorted([item for item in groups if item not in [accent_1, accent_2]])[::-1]
else:
rest = sorted([item for item in groups])[::-1]
scl = generate_discrete_scl((37,37,37),(200,200,200),len(rest))
rest_colors = dict(zip(rest,scl))
traces = []
# All other traces
for item in rest:
text = list(df[df[group]==item][y])
trace_idx = 0
traces.append(
Scatter(x = df[df[group]==item][x],
y = df[df[group]==item][y],
name = str(item),
mode = 'lines',
line=dict(
color=rest_colors[item],shape='spline',smoothing=0.5))
)
trace_idx = trace_idx + 1
# Append traces for both accents, the order is important for rendering
# Only these accents get text overlay and custom colors
if config["accent_2_color"] is None:
accent_2_color = "rgb(49,130,189)"
if config["accent_1_color"] is None:
accent_1_color = "rgb(230,85,13)"
# Accent 2
if accent_2 is not None:
traces.append(
Scatter(x = df[df[group]==accent_2][x],
y = df[df[group]==accent_2][y],
name = str(accent_2),
mode = accent_linemode,
text = df[df[group]==accent_2][y].round(1),textposition = 'top middle',
line=dict(color=config["accent_2_color"], width = 3,shape='spline',
smoothing=0.5),
marker=dict(size=8),textfont=dict(color=config["accent_2_color"])
)
)
# Accent 1
if accent_1 is not None:
traces.append(
Scatter(x = df[df[group]==accent_1][x],
y = df[df[group]==accent_1][y],
name = str(accent_1),
mode = accent_linemode,
text = df[df[group]==accent_1][y].round(1),textposition = 'top middle',
line=dict(color=config["accent_1_color"], width = 3,shape='spline',
smoothing=0.5),
marker=dict(size=8),textfont=dict(color=config["accent_1_color"])
)
)
data = traces
if xaxis_title is None:
xaxis_title = x
if yaxis_title is None:
yaxis_title = y
if title is None:
title = generate_title_string(x,y,group)
layout = Layout(
title=title,
xaxis=dict(
title= xaxis_title,
titlefont=dict(
size=config["font_size"]-1,
color='rgb(107, 107, 107)'
),
tickfont=dict(
size=config["font_size"]-1,
color='rgb(107, 107, 107)'
)
),
yaxis=dict(
title=yaxis_title,
titlefont=dict(
size=config["font_size"]-1,
color='rgb(107, 107, 107)'
),
tickfont=dict(
size=config["font_size"]-1,
color='rgb(107, 107, 107)'
)
),
legend=dict(orientation="h",x=0,y=-0.22,
bgcolor='rgba(0, 0, 0, 0)',
bordercolor='rgba(0, 0, 0, 0)'
),
margin=dict(t=100),
# barmode=barmode,bargap=0.15,bargroupgap=0.1,
hovermode="closest", hoverlabel = dict(font=dict(family=config["font_family"], size=config["font_size"])),
font=dict(family=config["font_family"], size=config["font_size"]))
fig = Figure(data=data, layout=layout)
if config["output_type"] == "html":
print("HTML output not enabled yet")
else:
py.iplot(fig, filename='line')
plotly_line(config=style_config, df=df, x="MarketID", y="SalesInThousands",
group="Promotion",accent_1=3,accent_2=None, accent_linemode="lines+markers+text")
plotly_line(config=style_config, df=df, x="Promotion", y="SalesInThousands",
group="MarketID",accent_1=2,accent_2=3, accent_linemode="lines+markers+text")
```
### 3. Scatter plots
```
def plotly_scatter(config, df, x, y, group, accent_1=None, accent_2=None,
title = None, xaxis_title = None, yaxis_title = None):
groups = df[group].unique()
if (accent_1 is not None) or (accent_2 is not None):
rest = sorted([item for item in groups if item not in [accent_1, accent_2]])[::-1]
else:
rest = sorted([item for item in groups])[::-1]
scl = generate_discrete_scl((37,37,37),(200,200,200),len(rest))
rest_colors = dict(zip(rest,scl))
traces = []
# All other traces
for item in rest:
text = list(df[df[group]==item][y])
trace_idx = 0
traces.append(
Scatter(x = df[df[group]==item][x],
y = df[df[group]==item][y],
name = str(item),
mode = 'markers',
marker=dict(color=rest_colors[item]))
)
trace_idx = trace_idx + 1
# Append traces for both accents, the order is important for rendering
# Only these accents get text overlay and custom colors
if config["accent_2_color"] is None:
accent_2_color = "rgb(49,130,189)"
if config["accent_1_color"] is None:
accent_1_color = "rgb(230,85,13)"
# Accent 2
if accent_2 is not None:
traces.append(
Scatter(x = df[df[group]==accent_2][x],
y = df[df[group]==accent_2][y],
name = str(accent_2),
mode = "markers",
marker=dict(color=config["accent_2_color"])
)
)
# Accent 1
if accent_1 is not None:
traces.append(
Scatter(x = df[df[group]==accent_1][x],
y = df[df[group]==accent_1][y],
name = str(accent_1),
mode = "markers",
marker=dict(color=config["accent_1_color"])
)
)
data = traces
if xaxis_title is None:
xaxis_title = x
if yaxis_title is None:
yaxis_title = y
if title is None:
title = generate_title_string(x,y,group)
layout = Layout(
title=title,
xaxis=dict(
title= xaxis_title,
titlefont=dict(
size=config["font_size"]-1,
color='rgb(107, 107, 107)'
),
tickfont=dict(
size=config["font_size"]-1,
color='rgb(107, 107, 107)'
)
),
yaxis=dict(
title=yaxis_title,
titlefont=dict(
size=config["font_size"]-1,
color='rgb(107, 107, 107)'
),
tickfont=dict(
size=config["font_size"]-1,
color='rgb(107, 107, 107)'
)
),
legend=dict(orientation="h",x=0,y=-0.22,
bgcolor='rgba(0, 0, 0, 0)',
bordercolor='rgba(0, 0, 0, 0)'
),
margin=dict(t=100),
hovermode="closest", hoverlabel = dict(font=dict(family=config["font_family"], size=config["font_size"])),
font=dict(family=config["font_family"], size=config["font_size"]))
fig = Figure(data=data, layout=layout)
if config["output_type"] == "html":
print("HTML output not enabled yet")
else:
py.iplot(fig, filename='scatter')
plotly_scatter(config=style_config, df=dat, x="AgeOfStore", y="SalesInThousands",
group="MarketID",accent_1=2,accent_2=3)
plotly_scatter(config=style_config, df=dat, x="week", y="SalesInThousands",
group="Promotion")
```
To be fixed:
4) Scatter and other charts - make group optional - skip the colorscale and just plot dark grey
5) Extend for HTML div output: compare (https://stackoverflow.com/questions/36262748/python-save-plotly-plot-to-local-file-and-insert-into-html); a possible approach is sketched after this list
6) Extend to further chart types (one function per chart type)
7) Grouping variable name is missing from the legend
Low prio / nice to have:
- Further enhancement for scl generation (values over 255, levels over 255.., avoid negative values)
- X Axis ticks for categorical variables
- Add % of group total option into text for the accents
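For item 5 above, one possible direction is sketched below. It relies on `plotly.offline.plot` being able to return the chart as an HTML `div` string; the `figure_to_div` helper name is made up here and is not yet wired into the wrapper functions:
```
from plotly.offline import plot

def figure_to_div(fig):
    # include_plotlyjs=False assumes plotly.js is already loaded on the target page
    return plot(fig, output_type='div', include_plotlyjs=False)
```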
| github_jupyter | 0.480722 | 0.84489 |
# Question 2
## Description
Develop an algorithm that finds a target image inside another image by comparing the target's histogram with histograms of samples taken from that image.
## Import required dependencies
- Import cv2 to read the image and convert its color channels
- Import math for math.inf, used as an "infinite" initial threshold
- Import numpy to work with arrays
- Import matplotlib to show the image at each step
```
import cv2
import math
import numpy as np
import matplotlib.pyplot as plt
```
## Read Image
Read the Messi image and convert it from BGR to RGB, because `imread` loads images in BGR channel order.
```
image = cv2.imread("../images/messi5.jpg")
# Convert BGR order to RGB
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
```
## Crop Ball From Messi Image
Crop the ball region to use it as the target image.
```
ball_image = image[290:335, 338:387, :]
plt.imshow(ball_image)
```
## Define the Histogram Function
A histogram is an array of 256 entries, where each entry counts how many pixels in the image have that intensity value.
```
def calcHist(image):
histogram = np.zeros(256)
for c in range(image.shape[2]):
for h in range(image.shape[0]):
for w in range(image.shape[1]):
# Increase the number of color
histogram[image[h, w, c]] += 1
return histogram
```
## Calculate Histogram
Calculate the histogram of the target image. This pure-Python implementation is too slow for practical use; the OpenCV method below is recommended instead.
```
ball_image_hist = calcHist(ball_image)
plt.plot(ball_image_hist)
plt.show()
```
## Calculate Ball Histogram Using OpenCV
```
ball_image_hist = cv2.calcHist([ball_image],[0],None,[256],[0,256])
plt.plot(ball_image_hist)
plt.show()
```
## Implement Compare Histogram Function
We need a function that gives us the difference between two histograms:
1. Calculate the difference between the two arrays for each color value
2. Take the absolute value of the resulting array
3. Sum the values
```
def compareHistogram(sample, target):
return np.sum(np.abs(sample - target))
```
## Create Padding Image
To compare samples of the original image with the target image, we pad the original image first.
```
width_padding = math.floor((image.shape[1] % ball_image.shape[1]) / 2)
height_padding = math.floor((image.shape[0] % ball_image.shape[0]) / 2)
# Set the borders
padding_image = cv2.copyMakeBorder(image, height_padding, height_padding, width_padding, width_padding, cv2.BORDER_CONSTANT, value=0)
plt.imshow(padding_image)
```
## Initialize Variable
Store the size of the target image and initialize the best-match position and threshold.
```
sample_step = 1
target_width = ball_image.shape[1]
target_height = ball_image.shape[0]
target_in_image_position = (0, 0, math.inf) # (height, width, threshold)
```
## Define the Process Function
A function that scans the image, crops a sample at each position, and compares its histogram with the target histogram.
```
def process(original_image):
global target_in_image_position
for j in range(0, original_image.shape[0], sample_step):
for i in range(0, original_image.shape[1], sample_step):
# Crop Sample from image
sample = original_image[j:j + target_height, i:i + target_width, :]
sample_hist = cv2.calcHist([sample], [0], None, [256], [0, 256])
            threshold = compareHistogram(sample_hist, ball_image_hist)
            if(threshold < target_in_image_position[2]):
                target_in_image_position = (j, i, threshold)
```
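For reference, OpenCV's template matching solves this kind of search directly; a minimal sketch of that alternative, using the RGB `image` and `ball_image` defined above (not part of the original algorithm):
```
# slide the target over the image and keep the best normalized correlation
res = cv2.matchTemplate(image, ball_image, cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
print(max_loc)  # (x, y) of the best match's top-left corner
```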
## Process Image
```
process(padding_image)
```
## Crop Target From Image
Crop the result from the image to confirm that the algorithm works
```
target_image = image[target_in_image_position[0]:target_in_image_position[0] +
target_height, target_in_image_position[1]:target_in_image_position[1] + target_width, :]
plt.imshow(target_image)
```
As a result, we see the ball with a small offset, so we also run the process without padding the image to check the algorithm
## Initialize Variable
Reset the target position to check the image without padding
```
target_in_image_position = (0, 0, math.inf) # (height, width, threshold)
```
## Process Image Without Padding
```
process(image)
```
## Crop Target From Image
```
target_image = image[target_in_image_position[0]:target_in_image_position[0] +
target_height, target_in_image_position[1]:target_in_image_position[1] + target_width, :]
plt.imshow(target_image)
```
Finally, the ball is found in exactly the right place :D
```
import os
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.metrics import confusion_matrix
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
import seaborn as sns
sns.set()
os.chdir("C:\\Users\\user\\Desktop\\ML Files")
df = pd.read_csv('bank_cleaned.csv')
df.head()
from sklearn.model_selection import train_test_split
X= df.drop(['response_binary'], axis=1)
y= df['response_binary']
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.3,random_state=1)
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error,r2_score
from sklearn.preprocessing import OneHotEncoder as onehot
from sklearn.preprocessing import LabelEncoder
num_atribute=['age' , 'balance' , 'day' , 'duration' , 'campaign' , 'pdays' , 'previous']
cat_atribute=['job' , 'marital' , 'education' , 'default' , 'housing' , 'loan' , 'poutcome' , 'month']
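# Label-encode the categorical columns and standardize the numeric columns of the training set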
le = LabelEncoder()
X_train[cat_atribute] = X_train[cat_atribute].apply(le.fit_transform)
X_train[cat_atribute].head()
ss = StandardScaler()
ss.fit_transform(X_train[num_atribute])
a = X_train[cat_atribute]
b = ss.transform(X_train[num_atribute])
Xtr = np.hstack([a,b])
Xtr.shape
knn = KNeighborsClassifier(n_neighbors=5, metric='euclidean')
knn.fit(Xtr, y_train)
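# Encode and scale the test set (note that the encoder and scaler are refit on the test data here)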
ss = StandardScaler()
ss.fit_transform(X_test[num_atribute])
le = LabelEncoder()
X_test[cat_atribute] = X_test[cat_atribute].apply(le.fit_transform)
a1 = X_test[cat_atribute]
b1 = ss.transform(X_test[num_atribute])
Xtr1 = np.hstack([a1,b1])
Xtr1.shape
y_pred = knn.predict(Xtr1)
confusion_matrix(y_test, y_pred)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred)
from sklearn.metrics import precision_recall_fscore_support
precision_recall_fscore_support(y_test, y_pred)
from sklearn.metrics import precision_score
precision_score(y_test, y_pred)
from sklearn.metrics import recall_score
recall_score(y_test, y_pred)
from sklearn.metrics import f1_score
f1_score(y_test, y_pred)
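# Elbow method: sweep k from 1 to 39 and track the test error rate to choose k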
error_rate = []
for i in range(1,40):
knn = KNeighborsClassifier(n_neighbors=i)
knn.fit(Xtr,y_train)
pred_i = knn.predict(Xtr1)
error_rate.append(1-accuracy_score(y_test, pred_i))
plt.figure(figsize=(10,6))
plt.plot(range(1,40),error_rate,color='blue', linestyle='dashed',
marker='o',markerfacecolor='red', markersize=10)
plt.title('Error Rate vs. K Value')
plt.xlabel('K')
plt.ylabel('Error Rate')
plt.show()
print("Minimum error:-",min(error_rate),"at K =",error_rate.index(min(error_rate))+1)
knn = KNeighborsClassifier(n_neighbors=9, metric='euclidean')
knn.fit(Xtr, y_train)
y_pred = knn.predict(Xtr1)
accuracy_score(y_test, y_pred)
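# Repeat with a KNN regressor, evaluated with 10-fold cross-validation (RMSE and R2)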
from sklearn.neighbors import KNeighborsRegressor
reg = KNeighborsRegressor(n_neighbors=3)
reg.fit(Xtr, y_train)
ss = StandardScaler()
ss.fit_transform(X_test[num_atribute])
le = LabelEncoder()
X_test[cat_atribute] = X_test[cat_atribute].apply(le.fit_transform)
a1 = X_test[cat_atribute]
b1 = ss.transform(X_test[num_atribute])
Xtr1 = np.hstack([a1,b1])
Xtr1.shape
from sklearn.model_selection import cross_val_score
def initial_check(model,X_train,y_train):
rmse_score=cross_val_score(model,X_train,y_train,scoring='neg_mean_squared_error',cv=10)
r2_score=cross_val_score(model,X_train,y_train,scoring='r2',cv=10)
print("RMSE (cross_val_score): ",np.sqrt(-rmse_score).mean())
print("R2 Score (cross_val_score): ",r2_score.mean())
initial_check(reg,Xtr,y_train)
error_rate = []
for i in range(1,40):
knn = KNeighborsRegressor(n_neighbors=i)
knn.fit(Xtr,y_train)
rmse_score=cross_val_score(knn,Xtr,y_train,scoring='neg_mean_squared_error',cv=10)
error_rate.append(np.sqrt(-rmse_score).mean())
plt.figure(figsize=(10,6))
plt.plot(range(1,40),error_rate,color='blue', linestyle='dashed',
marker='o',markerfacecolor='red', markersize=10)
plt.title('Error Rate vs. K Value')
plt.xlabel('K')
plt.ylabel('Error Rate')
plt.show()
print("Minimum error:-",min(error_rate),"at K =",error_rate.index(min(error_rate))+1)
```
# How to Draw Bar Charts - pandas, matplotlib, seaborn
Bar charts come up all the time in visualization, but every search turns up a different method... let's organize them.
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import random
np.random.seed(seed=1)
group_list = ['A','B','C','D']
n_size = 20
group = [random.choice(group_list) for i in range(n_size)]
xval = np.random.poisson(lam=10,size=n_size)
label = np.random.binomial(n=1, p=0.5, size=n_size)
label = list(map(str, label))
df = pd.DataFrame({'xval':xval, 'group':group, 'label':label})
df.head()
df_by_group = df.groupby(['group'])['xval'].sum()
df_by_group_label = df.groupby(['group','label'])['xval'].sum()
df_by_group
df_by_group_label
```
## 1. pandas
> __DataFrame.plot.bar(self, x=None, y=None, **kwargs)__
> x: label or position, optional
> y: label or position, optional
### With a single grouping variable
```
df_by_group = df_by_group.reset_index()
df_by_group.plot.bar(x='group',y='xval',rot=0)
df_by_group.plot.barh(x='group',y='xval',rot=0)
```
### With two grouping variables
- Turn the grouped summary table into a pivot table
```
df_by_group_label = df_by_group_label.reset_index()
df_pivot = df_by_group_label.pivot(index='group',columns='label',values='xval')
df_pivot
df_pivot.plot.bar(rot=0)
df_pivot.plot.bar(stacked=True, rot=0)
```
## 2. matplotlib
> __matplotlib.pyplot.bar(x, height, width=0.8, bottom=None, *, align='center', data=None, **kwargs)__
> x : sequence of scalars
> height : scalar or sequence of scalars
> width : scalar or array-like, optional
> bottom : scalar or array-like, optional
### With a single grouping variable
```
df_by_group = df.groupby(['group'])['xval'].sum()
df_by_group
label = df_by_group.index
index = np.arange(len(label)) # 0,1,2,3
plt.bar(index, df_by_group)
plt.xticks(index, label, fontsize=15) # set the tick labels
plt.barh(index, df_by_group)
plt.yticks(index, label, fontsize=15)
```
### With two grouping variables
- Use the 'bottom' or 'width' options of plt.bar
- A separate DataFrame has to be defined for each label of the second-level group
```
df_by_group_by0 = df[df['label']=='0'].groupby(['group'])['xval'].sum()
df_by_group_by1 = df[df['label']=='1'].groupby(['group'])['xval'].sum()
label = df.group.unique()
label = sorted(label)
index = np.arange(len(label))
p1 = plt.bar(index,df_by_group_by0, color='red', alpha=0.5)
p2 = plt.bar(index,df_by_group_by1, color='blue', alpha=0.5,
bottom=df_by_group_by0)
plt.xticks(index,label)
plt.legend((p1[0], p2[0]), ('0', '1'), fontsize=15)
p1 = plt.bar(index,df_by_group_by0, color='red', alpha=0.5,
width=0.4)
p2 = plt.bar(index+0.4,df_by_group_by1, color='blue', alpha=0.5,
width=0.4)
plt.xticks(index,label)
plt.legend((p1[0], p2[0]), ('0', '1'), fontsize=15)
```
* Sorting a list of strings
[Reference](https://hashcode.co.kr/questions/1058/%EB%A6%AC%EC%8A%A4%ED%8A%B8%EB%A5%BC-%EC%82%AC%EC%A0%84%EC%88%9C%EC%9C%BC%EB%A1%9C-%EC%A0%95%EB%A0%AC%ED%95%98%EB%A0%A4%EA%B3%A0-%ED%95%A9%EB%8B%88%EB%8B%A4)
```
import locale
import functools
mylist = ["사과", "바나나", "딸기", "포도"]
locale.setlocale(locale.LC_ALL, '') # set to the system (Korean) locale
sortedByLocale = sorted(mylist, key=functools.cmp_to_key(locale.strcoll))
sortedByLocale
```
## 3. seaborn
> __seaborn.barplot(x=None, y=None, hue=None, data=None, order=None, hue_order=None, estimator=<function mean at 0x10a2a03b0>, ci=95, n_boot=1000, units=None, seed=None, orient=None, color=None, palette=None, saturation=0.75, errcolor='.26', errwidth=None, capsize=None, dodge=True, ax=None, **kwargs)__
> x, y, hue: names of variables in data or vector data, optional
> data: DataFrame, array, or list of arrays, optional
> dodge: bool, optional (when hue nesting is used, whether elements should be shifted along the categorical axis; see the small sketch below)
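For instance, with `dodge=False` the hue groups are drawn at the same x position instead of side by side (a small sketch using the grouped table defined above):
```
sns.barplot(x='group', y='xval', hue='label', data=df_by_group_label, dodge=False)
```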
### With a single grouping variable
```
df_by_group = df.groupby(['group'])['xval'].sum().reset_index()
sns.barplot(x='group', y='xval', data=df_by_group)
```
### With two grouping variables
```
df_by_group_label = df.groupby(['group','label'])['xval'].sum().reset_index()
sns.barplot(x='group', y='xval', hue='label',data=df_by_group_label )
df_by_group_by0 = df[df['label']=='0'].groupby(['group'])['xval'].sum().reset_index()
df_by_group_by1 = df[df['label']=='1'].groupby(['group'])['xval'].sum().reset_index()
sns.barplot(x='group', y='xval', data=df_by_group,color="red",alpha=0.5)
sns.barplot(x='group', y='xval', data=df_by_group_by0 ,color="blue",alpha=0.5)
```
## 4. Using R's 'ggplot2' in Python
```
%matplotlib inline
import plotnine as p9
p9.ggplot(data=df,mapping=p9.aes(x='group',y='xval'))+p9.geom_bar(stat='identity')
p9.ggplot(data=df,mapping=p9.aes(x='group',y='xval',fill='label'))+p9.geom_bar(stat='identity')
p9.ggplot(data=df,mapping=p9.aes(x='group',y='xval',fill='label'))+p9.geom_bar(stat='identity')+p9.coord_flip()
p9.ggplot(data=df,mapping=p9.aes(x='group',y='xval',fill='label'))+p9.geom_bar(stat='identity',position='dodge')
```
<a href="https://colab.research.google.com/github/DJCordhose/ml-workshop/blob/master/notebooks/tf2/tf-low-to-high-2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Classification with TensorFlow 2 Keras Layers
## Objectives
- activation functions
- classification
```
import matplotlib.pyplot as plt
# plt.xkcd()
# plt.style.use('ggplot')
%matplotlib inline
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (20, 8)
```
## A new challenge: predicting a category instead of a continuous value
* so far we were inferring one continuous value from another
* now we want to infer which category a point in 2d belongs to
* this is called classification
* since we only have two categories (0/1 or red/blue) this is called binary classification
```
#@title Configure our example { run: "auto", display-mode: "both" }
# https://colab.research.google.com/notebooks/forms.ipynb
n = 100 #@param {type:"slider", min:1, max:1000, step:1}
m = -1 #@param {type:"slider", min:-10, max:10, step: 0.1}
b = 1 #@param {type:"slider", min:-10, max:10, step: 0.1}
noise_level = 0.2 #@param {type:"slider", min:0.1, max:1.0, step:0.1}
title = 'Categories expressed as colors' #@param {type:"string"}
dim_1_label = 'x1' #@param {type:"string"}
dim_2_label = 'x2' #@param {type:"string"}
import numpy as np
# all points
X = np.random.uniform(0, 1, (n, 2))
# below or above line determines which category they belong to (plus noise)
noise = np.random.normal(0, noise_level, n)
y_bool = X[:, 1] > m*X[:, 0]+b + noise
y = y_bool.astype(int)
plt.xlabel(dim_1_label)
plt.ylabel(dim_2_label)
plt.title(title)
size=100
plt.scatter(X[:,0], X[:,1], c=y, cmap=plt.cm.bwr, marker='o', edgecolors='k', s=y*size);
plt.scatter(X[:,0], X[:,1], c=y, cmap=plt.cm.bwr, marker='^', edgecolors='k', s=~y_bool*size);
```
### Can you think of an application for this? What could be on the axes?
_Let's adapt the example to something we can relate to_
```
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111, projection='3d')
ax.set_title('Hyperplane predicting the group')
# we can have the probability encoded in shade of color
ax.scatter(X[:,0], X[:,1], y, c=y,
cmap=plt.cm.bwr,
marker='o',
edgecolors='k',
depthshade=False,
s=y*size)
ax.scatter(X[:,0], X[:,1], y, c=y,
cmap=plt.cm.bwr,
marker='^',
edgecolors='k',
depthshade=False,
s=~y_bool*size)
# https://en.wikipedia.org/wiki/Azimuth
ax.view_init(elev=10, azim=-40)
```
## Training using so-called 'Logistic Regression'
```
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
tf.__version__
```
### We have two dimensions as input now
```
x = tf.constant(X, dtype='float32')
y_true = tf.constant(y, dtype='float32')
x.shape, y.shape
plt.hist(y, bins=n)
plt.title('Distribution of ground truth');
from tensorflow.keras.layers import Dense
model = tf.keras.Sequential()
model.add(Dense(units=1, input_dim=2))
model.summary()
%%time
model.compile(loss='mse',
optimizer='sgd')
history = model.fit(x, y_true, epochs=100, verbose=0)
# plt.yscale('log')
plt.ylabel("loss")
plt.xlabel("epochs")
plt.plot(history.history['loss']);
```
### It does train ok, but what does the output look like?
```
y_pred = model.predict(x)
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111, projection='3d')
ax.set_title('Hyperplane predicting the group')
# we can have the probability encoded in shade of color
ax.scatter(X[:, 0], X[:, 1], y_pred, c=y_pred.ravel(),
cmap=plt.cm.bwr,
marker='o',
depthshade=False,
edgecolors='k',
s=size)
# https://en.wikipedia.org/wiki/Azimuth
ax.view_init(elev=15, azim=-40)
# ax.view_init(elev=10, azim=-40)
# also try to get a better idea how the hyperplane looks like
# ax.view_init(elev=20, azim=-75)
# ax.view_init(elev=10, azim=-40)
plt.hist(y_pred, bins=n, color='green')
plt.hist(y, bins=n)
plt.title('Distribution of predictions and ground truth')
```
### We would love to predict a value compressed between 0 and 1
_everything below 0.5 counts as 0, everything above as 1_
<img src='https://github.com/DJCordhose/ml-workshop/blob/master/notebooks/tf2/img/logistic.jpg?raw=1'>
```
y_pred_binary = (y_pred > 0.5).astype(int).ravel()
y_pred_binary
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111, projection='3d')
ax.set_title('Hyperplane predicting the group')
ax.scatter(X[:, 0], X[:, 1], y_pred, c=y_pred_binary,
cmap=plt.cm.bwr,
depthshade=True,
marker='o', edgecolors='k')
ax.scatter(X[:,0], X[:,1], y_pred, c=y_pred_binary,
cmap=plt.cm.bwr,
marker='o',
edgecolors='k',
depthshade=False,
s=y_pred_binary*size)
ax.scatter(X[:,0], X[:,1], y_pred, c=y_pred_binary,
cmap=plt.cm.bwr,
marker='^',
edgecolors='k',
depthshade=False,
s=~y_pred_binary.astype(bool)*size)
# https://en.wikipedia.org/wiki/Azimuth
# ax.view_init(elev=30, azim=-40)
ax.view_init(elev=15, azim=-40)
# also try to get a better idea how the hyperplane looks like
# ax.view_init(elev=20, azim=-75)
# ax.view_init(elev=10, azim=-40)
from matplotlib.colors import ListedColormap
misclassified = y_true - y_pred_binary
plt.scatter(X[:,0], X[:,1], c=misclassified, cmap=ListedColormap(['#FF0000', '#FFFFFF', '#0000FF']), marker='o', s=y_pred_binary*size)
plt.scatter(X[:,0], X[:,1], c=y_pred_binary, cmap=ListedColormap(['#FF6666', '#6666FF']), marker='o', edgecolors='k', s=y_pred_binary*size, alpha=0.5)
plt.scatter(X[:,0], X[:,1], c=misclassified, cmap=ListedColormap(['#FF0000', '#FFFFFF', '#0000FF']), marker='^', s=~y_pred_binary.astype(bool)*size)
plt.scatter(X[:,0], X[:,1], c=y_pred_binary, cmap=ListedColormap(['#FF6666', '#6666FF']), marker='^', edgecolors='k', s=~y_pred_binary.astype(bool)*size, alpha=0.5)
plt.xlabel(dim_1_label)
plt.ylabel(dim_2_label)
plt.title('Classification results (Strong colors indicate misclassification)');
# Adapted from:
# http://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html
# http://jponttuset.cat/xkcd-deep-learning/
from matplotlib.colors import ListedColormap
import numpy as np
import pandas as pd
cmap = ListedColormap(['#FF6666', '#6666FF'])
font_size=15
title_font_size=25
def meshGrid(x_data, y_data):
h = .05 # step size in the mesh
# x_min, x_max = -0.1, 1.1
# y_min, y_max = -0.1, 1.1
x_min, x_max = x_data.min() - .1, x_data.max() + .1
y_min, y_max = y_data.min() - .1, y_data.max() + .1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
return (xx,yy)
def plotPrediction(clf, x_data, y_data, x_label, y_label, ground_truth, title="",
size=(15, 8), n_samples=None, proba=True, prediction=True,
ax=None, marker_size=100
):
xx,yy = meshGrid(x_data, y_data)
if ax is None:
_, ax = plt.subplots(figsize=size)
if clf:
Z = clf.predict_proba(np.c_[yy.ravel(), xx.ravel()])
Z = Z.reshape(xx.shape)
ax.contourf(xx, yy, Z, cmap=plt.cm.RdBu, alpha=.6)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
samples = pd.DataFrame(np.array([x_data, y_data, ground_truth]).T)
if n_samples:
samples = samples.sample(n_samples, random_state=42)
classes = samples[2]
ax.scatter(samples[0], samples[1], c=classes, cmap=cmap, marker='o', edgecolors='k', s=classes*marker_size)
ax.scatter(samples[0], samples[1], c=classes, cmap=cmap, marker='^', edgecolors='k', s=~classes.astype(bool)*marker_size)
ax.set_xlabel(x_label, fontsize=font_size)
ax.set_ylabel(y_label, fontsize=font_size)
ax.set_title(title, fontsize=title_font_size)
return ax
plotPrediction(model, X[:, 0], X[:, 1],
dim_1_label, dim_2_label, y_true,
title="Classification probabilities (dark is certain)");
```
### Interpretation
* some values are negative
* some are above 1
* we have a lot of variance
### Is there a way to decrease variance of the prediction and actually compress the values between 0 and 1?
## Understanding the effect of activation functions
Typically, the output of a neuron is transformed using an activation function which compresses the output to a value between 0 and 1 (sigmoid), or between -1 and 1 (tanh) or sets all negative values to zero (relu).
<img src='https://raw.githubusercontent.com/DJCordhose/deep-learning-crash-course-notebooks/master/img/neuron.jpg'>
### Typical Activation Functions
<img src='https://djcordhose.github.io/ai/img/activation-functions.jpg'>
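As a quick numeric sanity check, here is a minimal sketch of the three functions above in plain NumPy (the helper names are only for illustration):
```
import numpy as np

def sigmoid(z):
    # compresses any real number into (0, 1)
    return 1 / (1 + np.exp(-z))

def tanh(z):
    # compresses any real number into (-1, 1)
    return np.tanh(z)

def relu(z):
    # sets all negative values to zero
    return np.maximum(0, z)

values = np.array([-2.0, 0.0, 2.0])
print(sigmoid(values))  # ~[0.12 0.5  0.88]
print(tanh(values))     # ~[-0.96  0.    0.96]
print(relu(values))     # [0. 0. 2.]
```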
### We can use sigmoid as the activation function
```
model = tf.keras.Sequential()
model.add(Dense(units=1, input_dim=2, activation='sigmoid'))
model.summary()
```
### Reconsidering the loss function
_cross entropy is an alternative to mean squared error_
* cross entropy can be used as an error measure when a network's outputs can be thought of as representing independent hypotheses
* activations can be understood as representing the probability that each hypothesis might be true
* the loss indicates the distance between what the network believes this distribution should be, and what the teacher says it should be
* in this case we are dealing with two exclusive hypotheses: either a sample is blue or it is red
* this makes it binary cross entropy (a small numeric sketch follows below)
https://en.wikipedia.org/wiki/Cross_entropy
http://www.cse.unsw.edu.au/~billw/cs9444/crossentropy.html
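A minimal numeric sketch of what binary cross entropy computes, assuming the usual definition (the variable names here are illustrative and not reused by the notebook):
```
import numpy as np
import tensorflow as tf

labels = np.array([1.0, 0.0, 1.0])
probs = np.array([0.9, 0.1, 0.6])   # confident, confident, unsure

# -(y*log(p) + (1-y)*log(1-p)), averaged over the samples
manual_bce = -np.mean(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))
print(manual_bce)

# should closely match Keras' built-in loss
print(tf.keras.losses.binary_crossentropy(labels, probs).numpy())
```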
### We also have a new metric: what share of predictions is correct?
* basic metric for classification: share of correctly predicted samples (tiny sketch below)
* https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/metrics/Accuracy
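As a tiny illustration (not notebook code), accuracy is just the fraction of matching labels:
```
import numpy as np

predicted = np.array([1, 0, 1, 1])
actual    = np.array([1, 0, 0, 1])
acc = np.mean(predicted == actual)
print(acc)  # 0.75
```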
### Advanced Optimizer (pretty much standard)
```
%%time
model.compile(loss='binary_crossentropy',
optimizer=tf.keras.optimizers.Adam(learning_rate=1e-2),
metrics=['accuracy'])
history = model.fit(x, y_true, epochs=2000, verbose=0)
loss, accuracy = model.evaluate(x, y_true, verbose=0)
loss, accuracy
plt.yscale('log')
plt.ylabel("loss")
plt.xlabel("epochs")
plt.title('Loss over time')
plt.plot(history.history['loss']);
plt.ylabel("accuracy")
plt.xlabel("epochs")
plt.title('Accuracy over time')
plt.plot(history.history['accuracy']);
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111, projection='3d')
ax.set_title('Hyperplane predicting the group')
# we can have the probability encoded in shade of color
ax.scatter(X[:, 0], X[:, 1], y_pred, c=y_pred.ravel(),
cmap=plt.cm.bwr,
marker='o',
depthshade=False,
edgecolors='k',
s=size)
# https://en.wikipedia.org/wiki/Azimuth
ax.view_init(elev=15, azim=-40)
# ax.view_init(elev=10, azim=-40)
# also try to get a better idea how the hyperplane looks like
# ax.view_init(elev=20, azim=-75)
# ax.view_init(elev=10, azim=-40)
y_pred = model.predict(x)
plt.hist(y_pred, bins=n, color='green')
plt.hist(y, bins=n)
plt.title('Distribution of predictions, more dense around extremes');
y_pred_binary = (y_pred > 0.5).astype(int).ravel()
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111, projection='3d')
ax.set_title('Hyperplane predicting the group')
ax.scatter(X[:,0], X[:,1], y_pred, c=y_pred.ravel(),
cmap=plt.cm.bwr,
marker='o',
edgecolors='k',
depthshade=False,
s=y_pred_binary*size)
ax.scatter(X[:,0], X[:,1], y_pred, c=y_pred.ravel(),
cmap=plt.cm.bwr,
marker='^',
edgecolors='k',
depthshade=False,
s=~y_pred_binary.astype(bool)*size)
# https://en.wikipedia.org/wiki/Azimuth
# ax.view_init(elev=30, azim=-40)
ax.view_init(elev=15, azim=-40)
# also try to get a better idea how the hyperplane looks like
# ax.view_init(elev=20, azim=-75)
# ax.view_init(elev=10, azim=-40)
misclassified = y_true - y_pred_binary
plt.scatter(X[:,0], X[:,1], c=misclassified, cmap=ListedColormap(['#FF0000', '#FFFFFF', '#0000FF']), marker='o', s=y_pred_binary*size)
plt.scatter(X[:,0], X[:,1], c=y_pred_binary, cmap=ListedColormap(['#FF6666', '#6666FF']), marker='o', edgecolors='k', s=y_pred_binary*size, alpha=0.5)
plt.scatter(X[:,0], X[:,1], c=misclassified, cmap=ListedColormap(['#FF0000', '#FFFFFF', '#0000FF']), marker='^', s=~y_pred_binary.astype(bool)*size)
plt.scatter(X[:,0], X[:,1], c=y_pred_binary, cmap=ListedColormap(['#FF6666', '#6666FF']), marker='^', edgecolors='k', s=~y_pred_binary.astype(bool)*size, alpha=0.5)
plt.xlabel(dim_1_label)
plt.ylabel(dim_2_label)
plt.title('Classification results (Strong colors indicate misclassification)');
plotPrediction(model, X[:, 0], X[:, 1],
dim_1_label, dim_2_label, y_true,
title="Classification probabilities (dark is certain)");
```
## From single neuron to network in the TensorFlow Playground
<img src='https://djcordhose.github.io/ai/img/tf-plaground.png'>
https://playground.tensorflow.org/
```
import numpy as np
def convolution2d_output_shape(input_shape, kernel_size, stride=1, padding=0, output_padding=0, dilation=1):
return np.array((input_shape + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1).astype(int)
def convolution2d_transpose_output_shape(input_shape, kernel_size, stride=1, padding=0, output_padding=0, dilation=1):
return np.floor((input_shape - 1) * stride - 2 * padding + dilation * (kernel_size - 1) + output_padding + 1).astype(int)
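# These match the standard (PyTorch-style) Conv2d / ConvTranspose2d output-size formulas:
#   conv:  out = floor((in + 2*padding - dilation*(kernel_size - 1) - 1) / stride + 1)
#   convT: out = (in - 1)*stride - 2*padding + dilation*(kernel_size - 1) + output_padding + 1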
print("Possibilidade de convoluções para imagens de tamanho 256")
shape = 256
print(shape)
shape = convolution2d_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_output_shape(shape, 4, 1, 0)
print(shape)
print("Possibilidade de deconvoluções para imagens de tamanho 256")
shape = 1
print(shape)
shape = convolution2d_transpose_output_shape(shape, 4, 1)
print(shape)
shape = convolution2d_transpose_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_transpose_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_transpose_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_transpose_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_transpose_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_transpose_output_shape(shape, 4, 2, 1)
print(shape)
print("Possibilidade de convoluções para imagens de tamanho 512")
shape = 512
print(shape)
shape = convolution2d_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_output_shape(shape, 4, 1, 0)
print(shape)
print("Possibilidade de deconvoluções para imagens de tamanho 512")
shape = 1
print(shape)
shape = convolution2d_transpose_output_shape(shape, 4, 1)
print(shape)
shape = convolution2d_transpose_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_transpose_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_transpose_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_transpose_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_transpose_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_transpose_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_transpose_output_shape(shape, 4, 2, 1)
print(shape)
print("Possibilidade de convoluções para imagens de tamanho 2048")
shape = 2048
print(shape)
shape = convolution2d_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_output_shape(shape, 4, 1, 0)
print(shape)
print("Possibilidade de deconvoluções para imagens de tamanho 2048")
shape = 1
print(shape)
shape = convolution2d_transpose_output_shape(shape, 4, 1)
print(shape)
shape = convolution2d_transpose_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_transpose_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_transpose_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_transpose_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_transpose_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_transpose_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_transpose_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_transpose_output_shape(shape, 4, 2, 1)
print(shape)
shape = convolution2d_transpose_output_shape(shape, 4, 2, 1)
print(shape)
```
### clinvar missense prediction w/ feature intersection
* only use consistent positions
* only missense clinvar
* use positions w/ mpc **OR** pathogenic fraction
* calc path freq using counts
* total path freq
* total benign freq
```
import pandas, numpy
from scipy.stats import entropy
import pydot, pydotplus, graphviz
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
from sklearn import linear_model, metrics, tree, svm
from sklearn.neural_network import MLPClassifier
from sklearn.externals.six import StringIO
from sklearn.preprocessing import PolynomialFeatures
from sklearn.ensemble import ExtraTreesClassifier
from IPython.display import HTML
%matplotlib inline
def calc_path_freq(rows):
# sum of freqs for path
df = rows[ (rows.clin_class=='PATHOGENIC') |
(rows.clin_class=='LIKLEY_PATHOGENIC')]
l = len(df)
pathogenic_sum = sum(df['freq'])
neg = sum(df['neg_fam'])
if l == 0:
return 0, 0, -1, 0
return pathogenic_sum, pathogenic_sum/l, entropy(df['freq']/pathogenic_sum), l
def calc_benign_freq(rows):
    # sum of freqs for benign
df = rows[ (rows.clin_class=='LIKELY_BENIGN') |
(rows.clin_class=='BENIGN')]
benign_sum = sum(df['freq'])
l = len(df)
neg = sum(df['neg_fam'])
if l == 0:
return 0, 0, -1, 0
return benign_sum, benign_sum/l, entropy(df['freq']/benign_sum), l
def calc_path_frac(rows):
pfam = list(rows['pfam'].values)[0]
pathogenic = len(rows[ (rows.clin_class=='PATHOGENIC') | (rows.clin_class=='LIKLEY_PATHOGENIC')])
benign = len(rows[ (rows.clin_class=='LIKELY_BENIGN') | (rows.clin_class=='BENIGN')])
frac = -1
if pathogenic+benign:
frac = pathogenic/(pathogenic+benign)
pf, pf_avg, pf_ent, pcount = calc_path_freq(rows)
bf, bf_avg, bf_ent, bcount = calc_benign_freq(rows)
r = -1
if bf:
r = pf/bf
return pandas.Series([frac, len(rows), pf, pf_avg, pf_ent, pcount, bf, bf_avg, bf_ent, bcount, r],
index=['path_frac', 'size',
'path_freq', 'p_freq_avg', 'p_freq_ent', 'ps',
'benign_freq', 'b_freq_avg', 'b_freq_ent', 'bs',
'fRatio'])
def calc_tot_freq_ratio(rows):
path_sum = calc_path_freq(rows)
benign_sum = calc_benign_freq(rows)
return path_sum/benign_sum
dat_file = '../data/interim/EPIv6.eff.dbnsfp.anno.hHack.dat.xls'
df_pre = pandas.read_csv(dat_file, sep='\t').fillna(0)
df_pre.loc[:, 'freq'] = df_pre['pos_fam']/(df_pre['pos_fam']+df_pre['neg_fam'])
df = (df_pre['pfam'].str.split(',', expand=True)
.stack()
.reset_index(level=0)
.set_index('level_0')
.rename(columns={0:'pfam'})
.join(df_pre.drop('pfam',1), how='left')
)
dd = df.groupby('pfam').apply(calc_path_frac)
ff = dd.reset_index()
# mk domain features
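# For each variant, look up the domain-level features of a matching Pfam domain
# (falling back to a default "no domain" vector when nothing matches)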
def match(row, domain_info):
ls = []
for pfam in row['pfam'].split(','):
if pfam in domain_info:
if domain_info[pfam][2] == 0:
ls.append(domain_info[pfam])
if len(ls) == 0:
for pfam in row['pfam'].split(','):
if pfam in domain_info:
return domain_info[pfam]
if len(ls):
return ls[0]
else:
return (0, 0,
0, 0, -1, 0,
0, 0, -1, 0,
-1, 1)
ff.loc[:, 'path_na'] = ff.apply(lambda row: 1 if row['path_frac']==-1 else 0, axis=1)
domain_info = {pfam:[path_frac, size,
path_freq, path_avg, path_ent, pc,
b_freq, b_avg, b_ent, bc,
fr, path_na]
for pfam, path_frac, size, path_freq, path_avg, path_ent, pc, b_freq, b_avg, b_ent, bc, fr, path_na
in ff.values}
df_pre.loc[:, 'path_frac_t'] = df_pre.apply(lambda row: match(row, domain_info)[0], axis=1)
df_pre.loc[:, 'size_t'] = df_pre.apply(lambda row: match(row, domain_info)[1], axis=1)
df_pre.loc[:, 'path_na_t'] = df_pre.apply(lambda row: match(row, domain_info)[-1], axis=1)
df_pre.loc[:, 'in_none_pfam'] = df_pre.apply(lambda row: 1 if 'none' in row['pfam'] else 0, axis=1)
# use patient counts
df_pre.loc[:, 'path_freq'] = df_pre.apply(lambda row: match(row, domain_info)[2], axis=1)
df_pre.loc[:, 'path_avg'] = df_pre.apply(lambda row: match(row, domain_info)[3], axis=1)
df_pre.loc[:, 'path_ent'] = df_pre.apply(lambda row: match(row, domain_info)[4], axis=1)
df_pre.loc[:, 'path_cnt'] = df_pre.apply(lambda row: match(row, domain_info)[5], axis=1)
df_pre.loc[:, 'benign_freq'] = df_pre.apply(lambda row: match(row, domain_info)[6], axis=1)
df_pre.loc[:, 'benign_avg'] = df_pre.apply(lambda row: match(row, domain_info)[7], axis=1)
df_pre.loc[:, 'benign_ent'] = df_pre.apply(lambda row: match(row, domain_info)[8], axis=1)
df_pre.loc[:, 'benign_cnt'] = df_pre.apply(lambda row: match(row, domain_info)[9], axis=1)
df_pre.loc[:, 'path_benign_freq_r'] = df_pre.apply(lambda row: match(row, domain_info)[10], axis=1)
#df_pre.loc[:, 'path_na_t'] = df_pre.apply(lambda row: match(row, domain_info)[2], axis=1)
# this is for training
# use not just missense
# I do not need to require an mpc score here anymore (df_pre.mpc>0)
df_x_pre = df_pre[ (df_pre.clin_class != 'VUS') ]
df_s = df_x_pre.groupby('pfam').size().reset_index()
multi_pfam = set( df_s[df_s[0]>1]['pfam'].values )
df_x_pre.loc[:, 'multi_pfam'] = df_x_pre.apply(lambda row: row['pfam'] in multi_pfam, axis=1)
df_x = df_x_pre[ (df_x_pre.multi_pfam) & (df_x_pre.eff=='missense_variant') & (df_x_pre.mpc>0)]
df_x.loc[:, 'y'] = df_x.apply(lambda row: 1 if row['clin_class'] in ('PATHOGENIC', 'LIKLEY_PATHOGENIC')
else 0, axis=1)
df_x.head()
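# Remember which variants were used for training so they can be excluded from the ClinVar evaluation set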
train_keys = {':'.join([str(x) for x in v]):True for v in df_x[['chrom', 'pos', 'ref', 'alt']].values}
clin_file = '../data/interim/clinvar/clinvar.dat'
#clinvar_df_pre = pandas.read_csv(clin_file, sep='\t').fillna(0)
def calc_final_sig(row):
sig_set = set(str(row['clinSig'].split('|')))
has_benign = '2' in sig_set or '3' in sig_set
has_path = '4' in sig_set or '5' in sig_set
if has_path and not has_benign:
return 1
if not has_path and has_benign:
return 0
return -1
# & (clinvar_df_pre.not_in_training)
# need a smarter match to domain here
#m = pandas.merge(clinvar_df, ff, on='pfam', how='left')
#m.head()
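# Label each ClinVar prediction as right / wrong / vus using 0.9 and 0.1 probability cutoffs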
def eval_pred(row):
if (row['tree_pred']>.9 and row['y']==1) or (row['tree_pred']<.1 and row['y']==0):
return 'right'
if (row['tree_pred']>.9 and row['y']==0) or (row['tree_pred']<.1 and row['y']==1):
return 'wrong'
return 'vus'
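# For one gene: train an ExtraTrees model on the panel data, score held-out ClinVar missense variants, and plot the per-gene results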
def do_it(gene):
focus_gene_ls = (gene,)
clinvar_df_pre = pandas.read_csv(clin_file, sep='\t').fillna(0)
clinvar_df_pre.loc[:, "y"] = clinvar_df_pre.apply(calc_final_sig, axis=1)
clinvar_df_pre.loc[:, "key"] = clinvar_df_pre.apply(lambda row: ':'.join([str(row[x]) for x in ['chrom', 'pos', 'ref', 'alt']]), axis=1)
clinvar_df_pre.loc[:, "not_in_training"] = clinvar_df_pre.apply(lambda row: not row['key'] in train_keys, axis=1)
clinvar_df_pre.loc[:, "is_focus"] = clinvar_df_pre.apply(lambda row: row['gene'] in focus_gene_ls, axis=1)
clinvar_df = clinvar_df_pre[(clinvar_df_pre.eff=='missense_variant')
& (clinvar_df_pre.not_in_training)
& (clinvar_df_pre.mpc>0)
& (clinvar_df_pre.is_focus)
& (clinvar_df_pre.y!=-1) ].drop_duplicates()
clinvar_df.loc[:, 'path_frac_t'] = clinvar_df.apply(lambda row: match(row, domain_info)[0], axis=1)
clinvar_df.loc[:, 'size_t'] = clinvar_df.apply(lambda row: match(row, domain_info)[1], axis=1)
clinvar_df.loc[:, 'path_freq'] = clinvar_df.apply(lambda row: match(row, domain_info)[2], axis=1)
clinvar_df.loc[:, 'path_avg'] = clinvar_df.apply(lambda row: match(row, domain_info)[3], axis=1)
clinvar_df.loc[:, 'path_ent'] = clinvar_df.apply(lambda row: match(row, domain_info)[4], axis=1)
clinvar_df.loc[:, 'path_cnt'] = clinvar_df.apply(lambda row: match(row, domain_info)[5], axis=1)
clinvar_df.loc[:, 'benign_freq'] = clinvar_df.apply(lambda row: match(row, domain_info)[6], axis=1)
clinvar_df.loc[:, 'benign_avg'] = clinvar_df.apply(lambda row: match(row, domain_info)[7], axis=1)
clinvar_df.loc[:, 'benign_ent'] = clinvar_df.apply(lambda row: match(row, domain_info)[8], axis=1)
clinvar_df.loc[:, 'benign_cnt'] = clinvar_df.apply(lambda row: match(row, domain_info)[9], axis=1)
clinvar_df.loc[:, 'path_benign_freq_r'] = clinvar_df.apply(lambda row: match(row, domain_info)[10], axis=1)
clinvar_df.loc[:, 'path_na_t'] = clinvar_df.apply(lambda row: match(row, domain_info)[-1], axis=1)
clinvar_df.loc[:, 'in_none_pfam'] = clinvar_df.apply(lambda row: 1 if 'none' in row['pfam'] else 0, axis=1)
# train new tree and apply to clinvar
forest = ExtraTreesClassifier(n_estimators=300,
random_state=13,
bootstrap=True,
max_features=7,
min_samples_split=2,
max_depth=8,
min_samples_leaf=5,
n_jobs=4)
all_preds = []
all_truth = []
cols = ['mpc', 'size_t', 'path_frac_t', 'in_none_pfam',
'path_freq', 'path_avg', 'path_ent', 'path_cnt',
'benign_freq', 'benign_avg', 'benign_ent', 'benign_cnt',
'af_1kg_all', 'mtr', 'path_benign_freq_r']
X, y = df_x[cols], df_x['y']
forest.fit(X, y)
X_clin, y_clin = clinvar_df[cols], clinvar_df['y']
preds = [ x[1] for x in forest.predict_proba(X_clin) ]
clinvar_df['tree_pred'] = preds
clinvar_df.loc[:, 'PredictionStatus'] = clinvar_df.apply(eval_pred, axis=1)
fpr_tree, tpr_tree, _ = metrics.roc_curve(y_clin, preds, pos_label=1)
tree_auc = metrics.auc(fpr_tree, tpr_tree)
print(gene, tree_auc)
g_df = (clinvar_df[['gene', 'chrom', 'pos', 'ref', 'alt', 'PredictionStatus']]
.groupby(['gene','PredictionStatus'])
.size().reset_index().rename(columns={0:'size'}))
dd = g_df.groupby('gene').sum().reset_index()
use_genes = set(dd[dd['size']>0]['gene'].values)
g_df.loc[:, 'keep'] = g_df.apply(lambda row: row['gene'] in use_genes, axis=1)
sns.set(font_scale=1.75)
flatui = ["#2ecc71", "#3498db", "#e74c3c",]
ss = sns.factorplot(x='gene', hue='PredictionStatus', y='size', data=g_df[g_df['keep']],
kind='bar', palette=sns.color_palette(flatui), size=5, aspect=3)
ss.set_ylabels('ClinVar missense variants')
ss.set_xlabels('')
ss.savefig("../docs/plots/clinvar_%s_eval.png" % (gene,))
# train new tree and apply to clinvar: just pathogenic frac
#tree_clf = linear_model.LogisticRegression(penalty='l1', fit_intercept=True)
#poly = PolynomialFeatures(degree=6, interaction_only=False, include_bias=False)
all_preds = []
all_truth = []
cols = ['size_t', 'path_frac_t', 'in_none_pfam',
'path_freq', 'path_avg', 'path_ent', 'path_cnt',
'benign_freq', 'benign_avg', 'benign_ent', 'benign_cnt',
'af_1kg_all', 'mtr', 'path_benign_freq_r']
X, y = df_x[cols], df_x['y']
forest.fit(X, y)
# cols = ['size_t', 'path_na_t', 'path_frac_t', 'in_none_pfam','path_freq', 'path_avg', 'path_ent',
# 'benign_freq', 'benign_avg', 'benign_ent',
# 'af_1kg_all', 'mtr', 'path_benign_freq_r'] #['size_t', 'path_na_t', 'path_frac_t', 'path_freq', 'benign_freq', 'in_none_pfam',]
#X, y = poly.fit_transform(df_x[cols]), df_x['y'] #X, y = df_x[cols], df_x['y']
#tree_clf.fit(X, y)
X_clin, y_clin = clinvar_df[cols], clinvar_df['y'] #X_clin, y_clin = poly.fit_transform(clinvar_df[cols]), clinvar_df['y'] #clinvar_df[cols], clinvar_df['y']
preds = [ x[1] for x in forest.predict_proba(X_clin) ]
fpr_tree_nm, tpr_tree_nm, _ = metrics.roc_curve(y_clin, preds, pos_label=1)
tree_auc_nm = metrics.auc(fpr_tree_nm, tpr_tree_nm)
scores = clinvar_df['mpc'].values
truth = clinvar_df['y'].values
fpr_mpc, tpr_mpc, _ = metrics.roc_curve(truth, scores, pos_label=1)
mpc_auc = metrics.auc(fpr_mpc, tpr_mpc)
plt.clf()
sns.set(font_scale=1.5)
plt.plot(fpr_tree, tpr_tree, label='Domain Burden + MPC (%.2f)' % (tree_auc,), color='green')
plt.plot(fpr_tree_nm, tpr_tree_nm, label='Domain Burden (%.2f)' % (tree_auc_nm,), color='orange')
plt.plot(fpr_mpc, tpr_mpc, label='MPC (%.2f)' % (mpc_auc,), color='black')
plt.legend(loc=4)
plt.title('ClinVar %s (w/o GeneDx) missense variant ROC' % (gene,))
plt.savefig("../docs/plots/clinvar_%s_roc.png" % (gene,))
plt.clf()
for gene in ('SCN1A','SCN2A','KCNQ2', 'KCNQ3', 'CDKL5',
'PCDH19', 'SCN1B', 'SCN8A', 'SLC2A1', 'SPTAN1', 'STXBP1', 'TSC1'):
do_it(gene)
```
### 1. Visuals
### 2. Feature reduction (PCA + LDA)
### 3. Neural networks
### 4. End-to-end model

The classifier sections below all repeat the same fit / predict / report steps, so a small helper capturing that pattern is sketched next.
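A minimal sketch (the helper name `fit_and_report` is ours and does not appear in the original code; it only assumes scikit-learn):

```
from sklearn.metrics import classification_report, confusion_matrix

def fit_and_report(clf, X_train, y_train, X_test, y_test, name=""):
    # Fit the classifier, predict on the held-out split, and print both reports.
    clf.fit(X_train, y_train)
    preds = clf.predict(X_test)
    if name:
        print(name)
    print(confusion_matrix(y_test, preds))
    print(classification_report(y_test, preds))
    return clf
```

For example, `fit_and_report(LogisticRegression(), X_Train, Y_Train, X_Test, Y_Test, 'Logistic Regression')` would reproduce the logistic-regression cell further down.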
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from keras.utils import np_utils
import warnings
warnings.filterwarnings('ignore')
data_dir='E:/diabetes/diabetes.csv'
df = pd.read_csv(data_dir)
#df = df.drop('Unnamed: 0', axis=1)
print(df.head())
print(df.shape)
print(df.columns)
import seaborn as sns
import matplotlib.pyplot as plt
import seaborn as sns
corr=df.corr()
sns.heatmap(corr, xticklabels=corr.columns.values, yticklabels=corr.columns.values)
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.decomposition import PCA
h = .02 # step size in the mesh
names = ["Nearest Neighbors", "Linear SVM", "RBF SVM", "Gaussian Process",
"Decision Tree", "Random Forest", "Neural Net", "AdaBoost",
"Naive Bayes", "QDA"]
classifiers = [
KNeighborsClassifier(3),
SVC(kernel="linear", C=0.025),
SVC(gamma=2, C=1),
GaussianProcessClassifier(1.0 * RBF(1.0)),
DecisionTreeClassifier(max_depth=5),
RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),
MLPClassifier(alpha=1, max_iter=1000),
AdaBoostClassifier(),
GaussianNB(),
QuadraticDiscriminantAnalysis()]
X = df.drop(['Outcome'], axis = 1).values
pca = PCA(n_components=2,svd_solver='full')
X = pca.fit_transform(X)
y = df['Outcome']
datasets = [df]
figure = plt.figure(figsize=(27, 9))
i = 1
# iterate over datasets
for ds_cnt, ds in enumerate(datasets):
# preprocess dataset, split into training and test part
#X, y = ds
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=.3, random_state=42)
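# NOTE: the names/classifiers grid, mesh step h, and the loop above are exploratory setup and are not reused;
# the next cell rebuilds X from the full feature set and re-splits it.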
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
X = df.drop(['Outcome'], axis = 1).values
Y = df['Outcome']
X = StandardScaler().fit_transform(X)
X_Train, X_Test, Y_Train, Y_Test = train_test_split(X, Y, test_size = 0.30, random_state = 101)
from sklearn import svm
import matplotlib.pyplot as plt
def feature_plot(classifier, feature_names, top_features=4):
coef = classifier.coef_.ravel()
top_positive_coefficients = np.argsort(coef)[-top_features:]
top_negative_coefficients = np.argsort(coef)[:top_features]
top_coefficients = np.hstack([top_negative_coefficients, top_positive_coefficients])
plt.figure(figsize=(18, 7))
colors = ['green' if c < 0 else 'blue' for c in coef[top_coefficients]]
plt.bar(np.arange(2 * top_features), coef[top_coefficients], color=colors)
feature_names = np.array(feature_names)
    plt.xticks(np.arange(2 * top_features), feature_names[top_coefficients], rotation=45, ha='right')  # one tick per plotted bar
plt.show()
print(df.drop(['Outcome'], axis = 1).columns.values)
trainedsvm = svm.LinearSVC().fit(X, Y)
feature_plot(trainedsvm, df.drop(['Outcome'], axis = 1).columns.values)
# Preprocessing :
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import classification_report,confusion_matrix
from itertools import product
# Classifiers
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn import svm
from sklearn import tree
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
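# Baseline models: fit each classifier on the standardized full feature set (X_Train)
# and print a confusion matrix plus classification report for the 30% hold-out split.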
trainedmodel = LogisticRegression().fit(X_Train,Y_Train)
predictions =trainedmodel.predict(X_Test)
print(confusion_matrix(Y_Test,predictions))
print(classification_report(Y_Test,predictions))
trainedforest = RandomForestClassifier(n_estimators=700).fit(X_Train,Y_Train)
predictionforest = trainedforest.predict(X_Test)
print(confusion_matrix(Y_Test,predictionforest))
print(classification_report(Y_Test,predictionforest))
trainedsvm = svm.LinearSVC().fit(X_Train, Y_Train)
predictionsvm = trainedsvm.predict(X_Test)
print(confusion_matrix(Y_Test,predictionsvm))
print(classification_report(Y_Test,predictionsvm))
trainedtree = tree.DecisionTreeClassifier().fit(X_Train, Y_Train)
predictionstree = trainedtree.predict(X_Test)
print(confusion_matrix(Y_Test,predictionstree))
print(classification_report(Y_Test,predictionstree))
import graphviz
from sklearn.tree import DecisionTreeClassifier, export_graphviz
data = export_graphviz(trainedtree,out_file=None,feature_names=df.drop(['Outcome'], axis = 1).columns,
class_names=['0', '1'],
filled=True, rounded=True,
max_depth=2,
special_characters=True)
graph = graphviz.Source(data)
graph
trainedlda = LinearDiscriminantAnalysis().fit(X_Train, Y_Train)
predictionlda = trainedlda.predict(X_Test)
print(confusion_matrix(Y_Test,predictionlda))
print(classification_report(Y_Test,predictionlda))
trainednb = GaussianNB().fit(X_Train, Y_Train)
predictionnb = trainednb.predict(X_Test)
print(confusion_matrix(Y_Test,predictionnb))
print(classification_report(Y_Test,predictionnb))
from xgboost import XGBClassifier
from xgboost import plot_tree
import matplotlib.pyplot as plt
model = XGBClassifier()
# Train
model.fit(X_Train, Y_Train)
plt.figure(figsize = (50,55))
plot_tree(model, ax=plt.gca())  # draw the first boosted tree; plot_tree was imported above but never called
plt.show()
from itertools import product
import itertools
predictions =model.predict(X_Test)
print(confusion_matrix(Y_Test,predictions))
print(classification_report(Y_Test,predictions))
# Thanks to: https://www.kaggle.com/tejainece/data-visualization-and-machine-learning-algorithms
def plot_confusion_matrix(cm, classes=["0", "1"], title="",
cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title('Confusion matrix ' +title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
cm_plot = confusion_matrix(Y_Test,predictions)
plt.figure()
plot_confusion_matrix(cm_plot, title = 'XGBClassifier')
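# Dimensionality reduction: project the standardized features onto the first 2 principal components,
# re-split, and retrain several of the classifiers on the reduced data.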
pca = PCA(n_components=2,svd_solver='full')
X_pca = pca.fit_transform(X)
# print(pca.explained_variance_)
X_reduced, X_test_reduced, Y_Train, Y_Test = train_test_split(X_pca, Y, test_size = 0.30, random_state = 101)
# pca = PCA(n_components=2,svd_solver='full')
# X_reduced = pca.fit_transform(X_Train)
#X_reduced = TSNE(n_components=2).fit_transform(X_Train, Y_Train)
trainednb = GaussianNB().fit(X_reduced, Y_Train)
trainedsvm = svm.LinearSVC().fit(X_reduced, Y_Train)
trainedforest = RandomForestClassifier(n_estimators=700).fit(X_reduced,Y_Train)
trainedmodel = LogisticRegression().fit(X_reduced,Y_Train)
# pca = PCA(n_components=2,svd_solver='full')
# X_test_reduced = pca.fit_transform(X_Test)
#X_test_reduced = TSNE(n_components=2).fit_transform(X_Test, Y_Test)
print('Naive Bayes')
predictionnb = trainednb.predict(X_test_reduced)
print(confusion_matrix(Y_Test,predictionnb))
print(classification_report(Y_Test,predictionnb))
print('SVM')
predictionsvm = trainedsvm.predict(X_test_reduced)
print(confusion_matrix(Y_Test,predictionsvm))
print(classification_report(Y_Test,predictionsvm))
print('Random Forest')
predictionforest = trainedforest.predict(X_test_reduced)
print(confusion_matrix(Y_Test,predictionforest))
print(classification_report(Y_Test,predictionforest))
print('Logistic Regression')
predictions =trainedmodel.predict(X_test_reduced)
print(confusion_matrix(Y_Test,predictions))
print(classification_report(Y_Test,predictions))
reduced_data = X_reduced
trainednb = GaussianNB().fit(reduced_data, Y_Train)
trainedsvm = svm.LinearSVC().fit(reduced_data, Y_Train)
trainedforest = RandomForestClassifier(n_estimators=700).fit(reduced_data,Y_Train)
trainedmodel = LogisticRegression().fit(reduced_data,Y_Train)
# Thanks to: https://scikit-learn.org/stable/auto_examples/ensemble/plot_voting_decision_regions.html
x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1
y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1),
np.arange(y_min, y_max, 0.1))
f, axarr = plt.subplots(2, 2, sharex='col', sharey='row', figsize=(10, 8))
for idx, clf, tt in zip(product([0, 1], [0, 1]),
[trainednb, trainedsvm, trainedforest, trainedmodel],
['Naive Bayes Classifier', 'SVM',
'Random Forest', 'Logistic Regression']):
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
axarr[idx[0], idx[1]].contourf(xx, yy, Z,cmap=plt.cm.coolwarm, alpha=0.4)
axarr[idx[0], idx[1]].scatter(reduced_data[:, 0], reduced_data[:, 1], c=Y_Train,
s=20, edgecolor='k')
axarr[idx[0], idx[1]].set_title(tt)
plt.show()
# Load libraries
from sklearn import datasets
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
# Create an LDA that will reduce the data down to 1 feature
# (with a binary outcome, LDA allows at most n_classes - 1 = 1 component)
lda = LinearDiscriminantAnalysis(n_components=1)
# run an LDA and use it to transform the features
X_lda = lda.fit(X, Y).transform(X)
# Print the number of features
print('Original number of features:', X.shape[1])
print('Reduced number of features:', X_lda.shape[1])
## View the ratio of explained variance
print(lda.explained_variance_ratio_)
X_reduced, X_test_reduced, Y_Train, Y_Test = train_test_split(X_lda, Y, test_size = 0.30, random_state = 101)
trainednb = GaussianNB().fit(X_reduced, Y_Train)
trainedsvm = svm.LinearSVC().fit(X_reduced, Y_Train)
print('Naive Bayes')
predictionnb = trainednb.predict(X_test_reduced)
print(confusion_matrix(Y_Test,predictionnb))
print(classification_report(Y_Test,predictionnb))
print('SVM')
predictionsvm = trainedsvm.predict(X_test_reduced)
print(confusion_matrix(Y_Test,predictionsvm))
print(classification_report(Y_Test,predictionsvm))
pca = PCA(n_components=2,svd_solver='full')
X_pca = pca.fit_transform(X)
# print(pca.explained_variance_)
# print('Original number of features:', X.shape[1])
# print('Reduced number of features:', X_lda.shape[1])
print(pca.explained_variance_ratio_)
X_reduced, X_test_reduced, Y_Train, Y_Test = train_test_split(X_pca, Y, test_size = 0.30, random_state = 101)
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=2, random_state=0).fit(X_reduced)
kpredictions = kmeans.predict(X_test_reduced)
print(confusion_matrix(Y_Test,kpredictions))
print(classification_report(Y_Test,kpredictions))
plt.scatter(X_test_reduced[kpredictions ==0,0], X_test_reduced[kpredictions == 0,1], s=100, c='red')
plt.scatter(X_test_reduced[kpredictions ==1,0], X_test_reduced[kpredictions == 1,1], s=100, c='black')
from keras.utils.np_utils import to_categorical
Y_Train = to_categorical(Y_Train)
print(type(X_Train))
print(type(Y_Train))
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.layers import Dense, Dropout, BatchNormalization, Activation
#Y_Test = to_categorical(Y_Test)
input_dim = X_Train.shape[1]
nb_classes = Y_Train.shape[1]
# Here's a Deep Dumb MLP (DDMLP)
model = Sequential()
model.add(Dense(512, input_dim=input_dim))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.15))
model.add(Dense(256))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.15))
model.add(Dense(nb_classes))
model.add(BatchNormalization())
model.add(Activation('softmax'))  # softmax pairs with the categorical cross-entropy loss below
# we'll use categorical xent for the loss, and RMSprop as the optimizer
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
print("Training...")
model.fit(X_Train, Y_Train, epochs=50, batch_size=16, validation_split=0.1, verbose=2)  # tf.keras fit() takes epochs=...; verbose must be 0, 1, or 2
preds = model.predict_classes(X_Test, verbose=0)
print(confusion_matrix(Y_Test,preds))
print(classification_report(Y_Test,preds))
```
###### https://www.kaggle.com/pierpaolo28/pima-indians-diabetes-database/notebook
```
from google.colab import drive
drive.mount('/content/drive')
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import OrdinalEncoder
traindata = pd.read_csv("/content/drive/MyDrive/Humana/2021_Competition_Training.csv")
testdata = pd.read_csv("/content/drive/MyDrive/Humana/2021_Competition_Holdout.csv")
```
# 1. Data Preprocessing
* Detect and identify missing values (a quick summary sketch follows this list)
* Correct data types
* Remove useless features
* Add indicator features for systematic NA patterns
* Handle missing values
* Standardize data
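As a quick illustration of the first bullet, a minimal sketch (the helper name `missing_summary` is ours; the real cleaning lives in `Datacleaning` below):

```
import numpy as np

def missing_summary(data, top=20):
    # Treat "*" and blank strings as missing, then report the worst columns.
    frac = data.replace(["*", " "], np.nan).isna().mean()
    return frac.sort_values(ascending=False).head(top).to_frame("fraction_missing")

# e.g. missing_summary(traindata) before running the pipeline below
```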
```
# According to the pandas mixed-dtype warnings raised on read_csv, these column indices hold features with mixed types
train_mixtypelist = [2,8,9,11,13,16,20,26,28,29,31,33,51,55,58,62,64,66,68,
75,85,102,124,127,131,132,135,160,174,180,187,192,202,
209,210,211,215,220,230,234,240,243,247,251,255,261,285,
293,297,300,305,306,309,323,334,344,345,352,353,355,359]
test_mixtypelist = [2,8,9,11,13,16,20,26,28,29,31,33,51,55,58,62,64,66,75,
82,85,102,124,131,132,135,159,173,179,191,208,209,210,
219,233,239,246,254,260,284,287,292,296,304,305,307,308,
322,333,343,344,349,351,352,354]
def Datacleaning(data, indexlist):
df = data.copy()
indexlst = indexlist.copy()
print("Start data cleaning...")
# 1. Replace "*" with na value
df.replace(["*", " "], np.nan, inplace=True)
print("Step 1 completed!")
# 2. In indexlist, all features are numeric except src_div_id,
# so we should implement it in a different way
df.loc[:, "src_div_id"] = df.loc[:, "src_div_id"].astype("object")
print("Step 2 completed!")
    # 3. Remove the src_div_id index from indexlist
indexlst.remove(df.columns.to_list().index("src_div_id"))
print("Step 3 completed!")
# 4. Change all features to appropriate dtypes
df.iloc[:, indexlst] = df.iloc[:, indexlst].astype("float64")
    col_rest = ["cms_orig_reas_entitle_cd", "race_cd", "atlas_type_2015_update"]
df.loc[:, col_rest] = df.loc[:, col_rest].astype("object")
df['hedis_dia_hba1c_ge9'].replace({"Y":1,"N":0},inplace = True)
print("Step 4 completed!")
# 5. Sort columns alphabetically
df = df.reindex(sorted(df.columns), axis=1)
print("Step 5 completed!")
    # 6. Drop "Unnamed: 0", zip_cd, and features that contain only one value or almost no diversity
col_useless = ["Unnamed: 0","zip_cd"]
for i in df:
valuesum = df[i].value_counts()
perc = valuesum.max()/valuesum.sum()
if valuesum.size == 1:
col_useless.append(i)
elif (valuesum.size == 2) & (perc>0.999):
col_useless.append(i)
df.drop(columns=col_useless, inplace=True)
print("Data cleaning completed!")
return df
def FeatureProcessing(data):
print("Start feature processing...")
df = data.copy()
#1. Create new features for rows with similar NA patterns
cons_feature = [
'cons_chmi', 'cons_lwcm10', 'cons_cwht', 'cons_n2pmr', 'cons_cgqs',
'cons_rxadhm', 'cons_estinv30_rc', 'cons_nwperadult',
'cons_n2phi', 'cons_chva', 'cons_lwcm07', 'cons_hxwearbl',
'cons_stlnindx', 'cons_rxadhs', 'cons_n2pwh', 'cons_rxmaint',
'cons_hxmioc'
]
atlas_feature = [
'atlas_pct_fmrkt_baked16', 'atlas_pct_fmrkt_anmlprod16',
'atlas_pct_fmrkt_sfmnp16', 'atlas_pct_fmrkt_wiccash16',
'atlas_pct_fmrkt_wic16', 'atlas_pct_fmrkt_snap16',
'atlas_pct_fmrkt_otherfood16', 'atlas_pct_fmrkt_credit16',
'atlas_pct_fmrkt_frveg16'
]
df["cons_17na"] = ((df.loc[:,cons_feature].shape[1] \
- df.loc[:,cons_feature].count(axis=1))==17).astype("Int64")
df["atlas_9na"] = ((df.loc[:,atlas_feature].shape[1] \
- df.loc[:,atlas_feature].count(axis=1))==9).astype("Int64")
df["mabh_seg_na"] = df.loc[:, "mabh_seg"].isnull().astype("Int64")
# 2. Fairness adjustment
adjlist = ["race_cd","sex_cd","cons_hhcomp","rx_gpi2_56_dist_gpi6_pmpm_ct_3to6m_b4"]
df.drop(columns=adjlist, inplace=True)
# df = df.convert_dtypes(convert_string=False)
print("Feature processing completed!")
return df
def Getobjlist(data,istest=False):
objlist = data.select_dtypes(include="object").columns.to_list()
objlist.remove("ID")
if not istest:
objlist.remove("covid_vaccination")
return objlist
def Fillna(data):
print("Start filling na...")
df = data.copy()
objlist = df.select_dtypes(include="object").columns.to_list()
# Handling missing values for categorical features:
for column in objlist:
bef_fill = df.loc[:, column]
if bef_fill.isna().sum()/df.shape[0] > 0.1:
df.loc[:, column] = df.loc[:, column].fillna("Blank")
else:
df.loc[:,column] = df.loc[:, column].fillna(bef_fill.mode()[0])
print("Filling na completed!")
return df
def Standardize(data,objlist,istest=False):
print("Start Standardizing...")
df = data.copy()
    # 1. Divide X and y for traindata
if not istest:
data_X = df.drop(columns = ["covid_vaccination"])
label = df["covid_vaccination"]
label.replace({"no_vacc": 0, "vacc": 1},inplace = True)# concat later
else:
data_X = df.copy()
del df
print("Step 1 completed!")
# 2. Turn categorical features into ordinal type
    objcol = list(objlist)  # copy so that appending "ID" below does not mutate the caller's objlist
ord_enc = OrdinalEncoder()
data_str = data_X.loc[:,objcol].astype("str")
data_ord = ord_enc.fit_transform(data_str)
del data_str
data_ord = pd.DataFrame(data_ord, columns=objcol)
data_ord = data_ord.astype("category") # concat later
print("Step 2 completed!")
# 3. Standardize all numerical features
scaler = StandardScaler()
objcol.append("ID")
data_int = data_X.drop(columns = objcol)
del data_X
data_int_s = scaler.fit_transform(data_int.astype(float))
data_int_s = pd.DataFrame(data_int_s, columns=data_int.columns,dtype="float32")# concat later
del data_int
if not istest:
df = pd.concat([data_int_s,data_ord,label],axis=1)
else:
df = pd.concat([data_int_s,data_ord],axis=1)
print("Standardizing completed!")
return df
def SplitXy(data):
df = data.copy()
y = df.loc[:,"covid_vaccination"]
X = df.drop(columns = ["covid_vaccination"])
return X, y
traindata_clean = Datacleaning(traindata, train_mixtypelist)
traindata_featureprocess = FeatureProcessing(traindata_clean)
objlist = Getobjlist(traindata_featureprocess)
traindata_fillna = Fillna(traindata_featureprocess)
traindata_standard = Standardize(traindata_fillna,objlist)
X,y = SplitXy(traindata_standard)
# keep traindata_clean: it is reused below to export the top-feature EDA file
del traindata_featureprocess
del traindata
del traindata_fillna
del traindata_standard
```
# 2. Build LightGBM Classification Model
* Split train and test set
* Build prediction model
* Get feature importance
* Parameter tuning with cross-validation (a plain cross-validated AUC sketch follows this list)
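`cross_val_score` is imported in the next cell but the tuning below goes through `GridSearchCV`. For reference, a plain cross-validated AUC check for the same baseline settings could look like the sketch below; it assumes `X` and `y` from the preprocessing section and is not part of the original pipeline.

```
import lightgbm as lgb
from sklearn.model_selection import cross_val_score

baseline = lgb.LGBMClassifier(objective="binary", random_state=1204,
                              max_depth=12, num_leaves=42,
                              learning_rate=0.05, n_estimators=500)
# 5-fold cross-validated AUC on the full training data
cv_auc = cross_val_score(baseline, X, y, scoring="roc_auc", cv=5)
print("mean AUC: %.4f (+/- %.4f)" % (cv_auc.mean(), cv_auc.std()))
```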
```
import lightgbm as lgb
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn import metrics
```
## Split train and test dataset
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=1204)
```
## Build classification model
```
gbm = lgb.LGBMClassifier(boosting_type = 'gbdt',
objective="binary",
metric = 'auc',
random_state = 1204,
max_depth = 12,
num_leaves = 42,
learning_rate = 0.05,
n_estimators = 500)
gbm.fit(X_train, y_train)
y_pred = gbm.predict_proba(X_test)
fpr_lr, tpr_lr, thresholds_lr = metrics.roc_curve(y_test, y_pred[:,1])
roc_auc_lr = metrics.auc(fpr_lr, tpr_lr)
roc_auc_lr
import seaborn as sns  # seaborn is first used here for plot styling
plt.figure(figsize=(10,10))
sns.set_style("white")
lw = 2
plt.plot(fpr_lr, tpr_lr, color='#548C6F',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc_lr)
plt.plot([0, 1], [0, 1], color='#8F5A54', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve for LightGBM')
plt.legend(loc="lower right")
plt.show()
fig, ax = plt.subplots(figsize=(10,10))
sns.set_theme(style="white")
metrics.plot_confusion_matrix(gbm,X_test,y_test,cmap=plt.cm.Greens,normalize="all",ax = ax)
plt.show()
```
## Get feature importance
```
model = lgb.LGBMClassifier(boosting_type = 'gbdt',
objective="binary",
metric = 'auc',
random_state = 1204,
max_depth = 12,
num_leaves = 42,
learning_rate = 0.05,
n_estimators = 500).fit(X_train, y_train)
model.importance_type = "gain"
import seaborn as sns
def plotImp(model, X , num = 20, fig_size = (40, 20)):
feature_imp = pd.DataFrame({'Value':model.feature_importances_ ,'Feature':X.columns})
plt.figure(figsize=fig_size)
sns.set(font_scale = 4)
sns.set_style("whitegrid")
sns.barplot(x="Value", y="Feature",
data=feature_imp.sort_values(by="Value",ascending=False)[0:num],
palette = sns.light_palette("#548C6F",reverse=True,n_colors=60))
plt.title('LightGBM Features Importance (Top 50)')
plt.tight_layout()
plt.xlabel("Total gains of splits")
plt.savefig('lgbm_importances-01.png')
plt.show()
plotImp(model, X_train , num = 50, fig_size = (40, 40))
Feature_Imp = pd.DataFrame({"Feature":X.columns.to_list(),
"Importance":model.feature_importances_.tolist()}).sort_values(by = "Importance",ascending = False).reset_index(drop = True)
Feature_Imp.head(40).to_csv("40 most important features.csv")
impf = Feature_Imp.head(40).Feature.to_list()
impf.append("covid_vaccination")
df_imp = traindata_clean.loc[:,impf]
df_imp
df_imp.to_csv("/content/drive/MyDrive/Humana/EDAfile.csv",index=False)
```
## Parameter tuning
```
parameters = {
"bagging_freq":[10,20,50,100],
"bagging_fraction":[0.1,0.2,0.5,1]
}  # parameters to tune and their candidate value ranges
gbm = lgb.LGBMClassifier(boosting_type = 'gbdt',
objective="binary",
metric = 'auc',
random_state = 1204,
max_depth = 12,
num_leaves = 42,
learning_rate = 0.05,
n_estimators = 500
)
gsearch = GridSearchCV(gbm, param_grid=parameters, scoring='roc_auc', cv=5)
gsearch.fit(X, y)
print("Best score: %0.8f" % gsearch.best_score_)
print("Best parameters set:")
best_parameters = gsearch.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
print("\t%s: %r" % (param_name, best_parameters[param_name]))# 0.68192742
# Use the best parameters found above to refit the prediction model:
gbm = lgb.LGBMClassifier(boosting_type = 'gbdt',
                         metric = 'auc',
                         objective="binary",
                         random_state = 1204,
                         max_depth = 12,
                         num_leaves = 42,
                         learning_rate = 0.05,
                         n_estimators = 500,
                         **{k: best_parameters[k] for k in parameters})
gbm.fit(X_train, y_train, categorical_feature = objlist)
```
# 3. Predict Test Dataset
```
testdata_clean = Datacleaning(testdata, test_mixtypelist)
test_featureprocess = FeatureProcessing(testdata_clean)
testdata_fillna = Fillna(test_featureprocess)
ID = testdata_fillna["ID"]
testdata_standard = Standardize(testdata_fillna, Getobjlist(testdata_fillna, istest=True), istest=True)
y_holdout = gbm.predict_proba(testdata_standard)
df_predict = pd.DataFrame({"ID":ID, "Score": y_holdout[:,0]})
df_predict["Score"] = df_predict["Score"].astype("float32")
df_predict.sort_values(by="Score",ascending = False,inplace = True)
df_predict.reset_index(drop=True,inplace=True)
df_predict["Rank"] = df_predict.index+1
df_predict["Rank"] = df_predict["Rank"].astype("int32")
df_predict
df_predict.to_csv("/content/drive/MyDrive/Humana/2021CaseCompetition_Yichao_Liu_20211005.csv",index=False)
```
---
# Some extra exploration
1. Figure out different kinds of NA (blank value, "*") (**Solved**)
2. Compare the association of each feature with race and sex using chi-square tests and Cramér's V (see the formula note below)
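For reference, the association measure computed in the cell below is Cramér's V: for a contingency table with $n$ observations, chi-square statistic $\chi^2$, $r$ rows and $c$ columns,

$$V = \sqrt{\frac{\chi^2 / n}{\min(r - 1,\; c - 1)}}$$

which is exactly the `np.sqrt((stat/n) / minDim)` expression used for both the sex and race cross-tabulations.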
```
# 1. Identify whether "*" and NaN should be treated separately.
for i in traindata:
if (sum(traindata[i]=="*")>0) & (traindata[i].isna().sum()>0):
print(i)
print("Number of *: ",sum(traindata[i]=="*"))
print("Number of NA: ",traindata[i].isna().sum())
# Conclusion: no need to treat them separately.
# 2. Comparing correlation of different features to race and sex
sex = traindata_clean["sex_cd"]
race = traindata_clean["race_cd"]
df = traindata_clean.drop(columns=["sex_cd","race_cd","covid_vaccination"])
df_corr = pd.DataFrame(columns=["Feature","Dtype","Sex_Chi2pvalue","Sex_CramersV","Race_Chi2pvalue","Race_CramersV"])
cnt=0
for i in df:
dtype = df[i].dtype
if (df[i].unique().size<=5) | (df[i].dtype=="object"):
# Sex
ct1 = np.array(pd.crosstab(sex,df[i]))
stat1, p1, dof1, expected1 = stats.chi2_contingency(ct1)
p1 = round(p1,4)
n1 = ct1.sum()
minDim1 = min(ct1.shape)-1
V_sex = np.sqrt((stat1/n1) /minDim1)
# Race
ct2 = np.array(pd.crosstab(race,df[i]))
stat2, p2, dof2, expected2 = stats.chi2_contingency(ct2)
p2 = round(p2,4)
n2 = ct2.sum()
minDim2 = min(ct2.shape)-1
V_race = np.sqrt((stat2/n2) /minDim2)
df_corr.loc[cnt]=[i,dtype,p1,V_sex,p2,V_race]
else:
col_cat = pd.cut(df[i],bins=5)
# Sex
ct1 = np.array(pd.crosstab(sex,col_cat))
stat1, p1, dof1, expected1 = stats.chi2_contingency(ct1)
p1 = round(p1,4)
n1 = ct1.sum()
minDim1 = min(ct1.shape)-1
V_sex = np.sqrt((stat1/n1) /minDim1)
# Race
ct2 = np.array(pd.crosstab(race,col_cat))
stat2, p2, dof2, expected2 = stats.chi2_contingency(ct2)
p2 = round(p2,4)
n2 = ct2.sum()
minDim2 = min(ct2.shape)-1
V_race = np.sqrt((stat2/n2) /minDim2)
df_corr.loc[cnt]=[i,dtype,p1,V_sex,p2,V_race]
cnt+=1
df_corr.sort_values(by="Sex_CramersV",ascending=False).head(5)
df_corr.sort_values(by="Race_CramersV",ascending=False).head(5)
```
```
import numpy as np
import time
def visualize_seg(label_map, mc, one_hot=False):
    if one_hot:
        label_map = np.argmax(label_map, axis=-1)
    # allocate an RGB image for every item in the batch
    out = np.zeros((label_map.shape[0], label_map.shape[1], label_map.shape[2], 3))
    for l in range(1, mc.NUMCLASS):
        out[label_map == l, :] = mc.CLS_COLOR_MAP[l]
    return out
def brg_to_rgb(ims):
    # convert a list of images from BGR to RGB
out = []
for im in ims:
out.append(im[:,:,::-1])
return out
class Time(object):
    def __init__(self):
        self.total_time = 0.0
        self.calls = 0
        self.start_time = 0.0
        self.duration = 0.0
        self.average_time = 0.0
    def tic(self):
        self.start_time = time.time()
    def toc(self, average=True):
        # elapsed time since the last call to tic()
        self.duration = time.time() - self.start_time
        self.total_time += self.duration
        self.calls += 1
        self.average_time = self.total_time / self.calls
        if average:
            return self.average_time
        else:
            return self.duration
def conf_error_rate_at_thresh_fn(mask,conf,thresh):
return np.mean((conf>thresh)!= mask)
def rmse_fn(diff,nnz):
return np.sqrt(np.sum(diff**2/nnz))
def abs_accuracy_at_thresh_fn(diff,thresh,mask):
return np.sum((np.abs(diff) < thresh)*mask)/float(np.sum(mask))
def rel_accuracy_at_thresh_fn(pred_ogm,gt_ogm,mask,thresh):
return np.sum(
mask*(np.maximum(pred_ogm,gt_ogm)/
np.minimum(gt_ogm,pred_ogm) < thresh
)/float(np.sum(mask))
)
def evaluate_iou(label, pred, n_class, epsilon=1e-12):
    # evaluation script to compute pixel level IOU
    """
    Args:
      label : N-d array of shape [batch, W, H], where each element is a class index.
      pred : N-d array of shape [batch, W, H], where each element is the predicted class index
      n_class: number of classes
      epsilon: a small value to prevent division by 0
    return:
      IOU: array of length n_class, where each element is the average IoU for this class
      tps: same shape as IoU, where each element is the number of TP for each class
      fps: same shape as IoU, where each element is the number of FP for each class
      fns: same shape as IoU, where each element is the number of FN for each class
    """
    assert label.shape == pred.shape, \
        'label and pred shape mismatch: {} vs {}'.format(label.shape, pred.shape)
    ious = np.zeros(n_class)
    tps = np.zeros(n_class)
    fns = np.zeros(n_class)
    fps = np.zeros(n_class)
    for cls_id in range(n_class):
        tp = np.sum(pred[label == cls_id] == cls_id)
        fp = np.sum(label[pred == cls_id] != cls_id)
        fn = np.sum(pred[label == cls_id] != cls_id)
        # accumulate per-class counts instead of re-initializing the arrays
        ious[cls_id] = tp / (tp + fn + fp + epsilon)
        tps[cls_id] = tp
        fns[cls_id] = fn
        fps[cls_id] = fp
    return ious, tps, fns, fps
def condensing_matrix(size_z, size_a, in_channel):
    assert size_z % 2 == 1 and size_a % 2 == 1, \
        'size_z and size_a should be odd number'
    half_filter_dim = (size_z * size_a) // 2
    # moving neighboring pixels to the channel dimension
    nbr2ch_mat = np.zeros(
        (size_z, size_a, in_channel, size_z * size_a * in_channel), dtype=np.float32
    )
    for z in range(size_z):
        for a in range(size_a):
            for ch in range(in_channel):
                nbr2ch_mat[z, a, ch, z * (size_a * in_channel) + a * in_channel + ch] = 1
    # drop the channels that correspond to the center pixel itself
    nbr2ch_mat = np.concatenate(
        [nbr2ch_mat[:, :, :, :in_channel * half_filter_dim],
         nbr2ch_mat[:, :, :, in_channel * (half_filter_dim + 1):]],
        axis=3
    )
    return nbr2ch_mat
```
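As a quick sanity check of `evaluate_iou`, here is a minimal sketch on a made-up 1x2x2 label/prediction pair (the values are purely illustrative); class 0 should come out with an IoU of 0.5 and class 1 with roughly 0.67:
```
# toy example: a batch of one 2x2 "image" with two classes
label = np.array([[[0, 1], [1, 1]]])
pred = np.array([[[0, 1], [0, 1]]])
ious, tps, fns, fps = evaluate_iou(label, pred, n_class=2)
print(ious)  # ~[0.5, 0.667]
print(tps, fns, fps)  # per-class TP, FN, FP counts
```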
# Linear Regression Example
```
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('lrex').getOrCreate()
from pyspark.ml.regression import LinearRegression
training = spark.read.format('libsvm').load('sample_linear_regression_data.txt')
```
Interesting! We haven't seen libsvm formats before. In fact they aren't very popular when
working with datasets in Python, but the Spark Documentation makes use of them a lot
because of their formatting. Let's see what the training data looks like:
```
training.show()
```
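Under the hood, a libsvm file is just plain text: each line is `<label> <index>:<value> ...`, with 1-based feature indices and only the non-zero entries listed. As a minimal sketch (the two rows and the file name `tiny_libsvm.txt` are made up for illustration):
```
# write a tiny libsvm-formatted file and load it the same way as above
with open('tiny_libsvm.txt', 'w') as f:
    f.write('-1.0 1:0.5 3:2.0\n')
    f.write('1.0 2:1.5 4:0.25\n')
spark.read.format('libsvm').load('tiny_libsvm.txt').show()
```
Loading it produces the same two-column layout you see above.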
This is the format that Spark expects. Two columns with the names "label" and "features".
The "label" column then needs to have the numerical label, either a regression numerical value, or a numerical value that matches to a classification grouping.
The feature column has inside of it a vector of all the features that belong to that row. Usually what we end up doing is combining the various feature columns we have into a single 'features' column using the data transformations we've learned about.
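A minimal sketch of that step, assuming a hypothetical DataFrame `raw_df` with numeric columns `'col_a'` and `'col_b'` (neither is part of this dataset):
```
from pyspark.ml.feature import VectorAssembler

# combine several numeric columns into the single 'features' vector column Spark expects
assembler = VectorAssembler(inputCols=['col_a', 'col_b'], outputCol='features')
# assembled = assembler.transform(raw_df)  # raw_df is a hypothetical input DataFrame
```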
```
training.select(['features']).head(2)[0][0]
# These are the default values for the featuresCol, labelCol, predictionCol
lr = LinearRegression(featuresCol='features', labelCol='label', predictionCol='prediction')
# You could also pass in additional parameters for regularization, do the reading
lrModel = lr.fit(training)
lrModel.coefficients
len(lrModel.coefficients)
lrModel.intercept
# Print the coefficients and intercept for linear regression
print("Coefficients: {}".format(str(lrModel.coefficients))) # For each feature...
print('\n')
print("Intercept:{}".format(str(lrModel.intercept)))
```
Here is the summary attribute that contains more info.
```
training_summary = lrModel.summary
training_summary.r2
training_summary.rootMeanSquaredError
all_data = spark.read.format('libsvm').load('sample_linear_regression_data.txt')
```
## Train/Test Splits
Spark DataFrames have a built-in method for splitting the data!
```
# Pass in the split between training/test as a list.
# There is no single correct split, but generally 70/30 or 60/40 splits are used,
# Depending on how much data you have and how unbalanced it is.
train_data, test_data = all_data.randomSplit([0.7, 0.3])
train_data.show()
train_data.describe().show()
test_data.describe().show()
correct_model = lr.fit(train_data)
test_result = correct_model.evaluate(test_data)
test_result.residuals.show()
test_result.meanSquaredError
test_result.rootMeanSquaredError
```
Well that is nice, but realistically we will eventually want to test this model against unlabeled data; after all, that is the whole point of building the model in the first place. We can again do this with a convenient method call, in this case transform(), which was actually being called within the evaluate() method.
```
unlabeled_data = test_data.select('features')
unlabeled_data.show()
predictions = correct_model.transform(unlabeled_data)
predictions.show()
```
Actually, this data is a bit meaningless, so let's explore this same process with some data that actually makes a little more intuitive sense!
# Data Download
This notebook reaches out and downloads some common datasets for use with testing the SEE library. We have provided a `DataDownload` script to help with downloading. Running the script on the command line will download all of the datasets by default (or you can just run this notebook).
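If you just want everything in one go, here is a minimal sketch using the same functions the cells below call (assuming the `see` package is installed):
```
from see import DataDownload as dd

# fetch all three datasets used in this notebook
dd.downloadKOMATSUNA()
dd.downloadSky()
dd.downloadCOSKEL()
```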
# KOMATSUNA Plant Dataset
<img src="http://limu.ait.kyushu-u.ac.jp/~agri/komatsuna/setup.png" width="50%">
## Website
- http://limu.ait.kyushu-u.ac.jp/~agri/komatsuna/
## Direct Links
* http://limu.ait.kyushu-u.ac.jp/~agri/komatsuna/rgbd_plant.zip
* http://limu.ait.kyushu-u.ac.jp/~agri/komatsuna/rgbd_label.zip
## Reference:
Hideaki Uchiyama, Shunsuke Sakurai, Masashi Mishima, Daisaku Arita, Takashi Okayasu, Atsushi Shimada and Rin-ichiro Taniguchi, "An easy-to-setup 3D phenotyping platform for KOMATSUNA dataset," ICCV Workshop on Computer Vision Problems in Plant Phenotyping, pp.2038-2045, 2017. [link](http://openaccess.thecvf.com/ICCV2017_workshops/ICCV2017_W29.py)
```
#The following code will download the dataset to the Images_data folder
from see import DataDownload as dd
dd.downloadKOMATSUNA()
```
# Sky Dataset
<img src="https://www.ime.usp.br/~eduardob/datasets/sky/images/sky_intro.png">
## Website
* https://www.ime.usp.br/~eduardob/datasets/sky/
## Direct Links
* https://www.ime.usp.br/~eduardob/datasets/sky/sky.zip
## Reference:
Eduardo B. Alexandre and Paulo A.V. Miranda. IFT-SLIC: Geração de Superpixels com Base em Agrupamento Iterativo Linear Simples e Transformada Imagem-Floresta. Master dissertation, IME - USP.
```
#The following code will download the dataset to the Images_data folder
# NOTE: This dataset required some file conversions which is included in the following code.
from see import DataDownload as dd
dd.downloadSky()
```
# COSKEL Dataset
## Website
* https://github.com/jkoteswarrao/Object-Co-skeletonization-with-Co-segmentation
## Direct Links
* https://github.com/jkoteswarrao/Object-Co-skeletonization-with-Co-segmentation/raw/master/CO-SKEL_v1.1.zip
## References:
[1] K. R. Jerripothula, J. Cai, J. Lu and J. Yuan, "Object Co-skeletonization with Co-segmentation", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, USA, 2017, pp. 6205-6213.
[2] K. R. Jerripothula, J. Cai, and J. Yuan, “CATS: Co-saliency Activated Tracklet Selection for Video Co-localization" 2016 European Conference on Computer Vision (ECCV), Springer International Publishing, Amsterdam,The Netherlands, 2016, Part VII, pp. 187-202.
[3] K. R. Jerripothula, J. Cai, F. Meng and J. Yuan, "Automatic image co-segmentation using geometric mean saliency" 2014 IEEE International Conference on Image Processing (ICIP), Paris, 2014, pp. 3277-3281.
NOTE: Unzip the downloaded folder in the root folder (or its immediate one) of any drive. This is to ensure that the absolute path of "coskeletonization_CVPR17" is as short as possible.
```
from see import DataDownload as dd
dd.downloadCOSKEL()
```
# Future Resources
Many of these datasets are cataloged in the following CVonline website:
http://homepages.inf.ed.ac.uk/rbf/CVonline/Imagedbase.htm#segmentation
https://www.irit.fr/~Sylvie.Chambon/CrackDataset.zip
```
from pathlib import Path
import numpy as np
import imageio

def downloadCrackData(filename='CrackDataset.zip',
                      folder='./',
                      url='https://www.irit.fr/~Sylvie.Chambon/CrackDataset.zip',
                      force=True):
    from urllib.request import urlretrieve
    import zipfile
    zfile = Path(folder + filename)
    if not zfile.is_file() or force:
        print(f"Downloading {filename} from {url}")
        urlretrieve(url, folder + filename)
    print(f"Unzipping {filename}")
    with zipfile.ZipFile(folder + filename, 'r') as zip_ref:
        zip_ref.extractall(folder)
    print(f"Converting files in {folder}")
    # getSkyFolderLists() and readpgm() are assumed to be provided by the
    # conversion helpers used for the Sky dataset above
    images, masks, outputs = getSkyFolderLists()
    for i in masks:
        print(f"{i}")
        img = readpgm(i)
        img = img.astype(np.uint8)  # astype returns a new array, so keep the result
        imageio.imsave(i, img)
    print("Download and Convert Complete")

downloadCrackData()
```
```
import os
from shutil import copyfile, rmtree
from random import shuffle, seed
from json import dump as savejson
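# prepare_training_validation: for one marketplace ("setname"), split each
# seller's image folder into train/ and test/ subsets based on an image-count
# threshold, and record which sellers landed where in seller_name.json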
def prepare_training_validation(setname, threshold, root_dir='/image/data/folder/',
random_seed=2017, step=1):
seed(random_seed)
step_name = ['', 'pseudo_pairing', 'roc']
origin_path = os.path.join(root_dir,'imgs', setname)
dataset_path = os.path.join(root_dir, 'train_test_data', step_name[step], setname)
target_root = os.path.join(dataset_path, '%d' % threshold)
target_path1 = os.path.join(target_root, 'train')
target_path2 = os.path.join(target_root, 'test')
try:
os.mkdir(dataset_path)
except:
pass
try:
rmtree(target_root)
except:
pass
try:
os.mkdir(target_root)
os.mkdir(target_path1)
os.mkdir(target_path2)
except OSError:
pass
print "set name: %s" % setname
print "total seller: %d" % len(os.listdir(origin_path))
print "threshold: %d images" % threshold
sellernames = os.listdir(origin_path)
shuffle(sellernames)
distractor_ct = 0
sellername_dict = {'training_seller': [],
'validation_seller': [],
'pseudo_seller': [],
'training_distractor': [],
'validation_distractor': []}
for seller in sellernames:
seller_path = os.path.join(origin_path, seller)
seller_img_ct = len(os.listdir(seller_path))
if seller_img_ct < threshold:
continue
oripath = os.path.join(origin_path, seller)
imagefiles = os.listdir(oripath)
imagefiles = map(lambda x: os.path.join(oripath, x), imagefiles)
if seller_img_ct >= 2 * threshold:
tarpath1 = os.path.join(target_path1, seller)
os.mkdir(tarpath1)
tarpath2 = os.path.join(target_path2, seller)
os.mkdir(tarpath2)
shuffle(imagefiles)
halfimagenum = len(imagefiles) / 2
for image in imagefiles[:halfimagenum]:
copyfile(image, image.replace(oripath, tarpath1))
for image in imagefiles[halfimagenum:]:
copyfile(image, image.replace(oripath, tarpath2))
sellername_dict['training_seller'].append(seller)
sellername_dict['validation_seller'].append(seller)
sellername_dict['pseudo_seller'].append(seller)
else:
if distractor_ct % step == 0:
tarpath1 = os.path.join(target_path1, seller)
os.mkdir(tarpath1)
for image in imagefiles:
copyfile(image, image.replace(oripath, tarpath1))
sellername_dict['training_seller'].append(seller)
sellername_dict['training_distractor'].append(seller)
else:
tarpath2 = os.path.join(target_path2, seller)
os.mkdir(tarpath2)
for image in imagefiles:
copyfile(image, image.replace(oripath, tarpath2))
sellername_dict['validation_seller'].append(seller)
sellername_dict['validation_distractor'].append(seller)
distractor_ct += 1
print "valid seller: %d" % (len(sellername_dict['training_distractor']) +
len(sellername_dict['validation_distractor']) +
len(sellername_dict['pseudo_seller']))
print "training count: %d, validation count: %d" % (len(sellername_dict['training_seller']),
len(sellername_dict['validation_seller']))
print "training distractor: %d, validation distractor: %d" % (len(sellername_dict['training_distractor']),
len(sellername_dict['validation_distractor']))
print "pseudo seller: %d" % len(sellername_dict['pseudo_seller'])
with open(os.path.join(target_root, 'seller_name.json'), 'w') as fp:
savejson(sellername_dict, fp)
return [target_root, target_path1, target_path2]
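# prepare_train_val_label: write train.txt / test.txt label files and a
# class_name.json id-to-seller mapping for the folders produced above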
def prepare_train_val_label(setname, tar_path, random_seed=201710):
seed(random_seed)
train_sellers = sorted(os.listdir(tar_path[1]))
train_classes = [os.path.join(tar_path[1], x) for x in train_sellers]
data_path = os.path.join(tar_path[0], 'labels')
try:
rmtree(data_path)
except:
pass
try:
os.makedirs(data_path)
except:
pass
with open(os.path.join(data_path, 'train.txt'), 'w') as fp_tr:
for i in range(len(train_classes)):
cl = train_classes[i]
imgs = [os.path.join(cl, x) for x in sorted(os.listdir(cl))]
for img in imgs:
fp_tr.write("%s %d\n" % (img, i))
class_name = {}
train_class_ct = len(train_classes)
for i in xrange(train_class_ct):
class_name[i] = train_classes[i].split('/')[-1]
test_cl_index = []
test_sellers = sorted(os.listdir(tar_path[2]))
test_classes = [os.path.join(tar_path[2], x) for x in test_sellers]
exist_test_class_ct, new_test_class_ct = 0, 0
for i in range(len(test_sellers)):
if test_sellers[i] in train_sellers:
exist_test_class_ct += 1
test_cl_index.append(train_sellers.index(test_sellers[i]))
else:
index_i = train_class_ct + new_test_class_ct
new_test_class_ct += 1
class_name[index_i] = test_sellers[i]
test_cl_index.append(index_i)
print "train class: %d, test class: %d" % (train_class_ct, exist_test_class_ct + new_test_class_ct)
print "exist test class: %d, new test class: %d" % (exist_test_class_ct, new_test_class_ct)
with open(os.path.join(data_path, 'class_name.json'), 'w') as fp:
savejson(class_name, fp)
with open(os.path.join(data_path, 'test.txt'), 'w') as fp:
for i in range(len(test_cl_index)):
cl = test_classes[i]
imgs = [os.path.join(cl, x) for x in os.listdir(cl)]
for img in imgs:
fp.write("%s %d\n" % (img, test_cl_index[i]))
print 'Data preparation finished.'
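# prepare_data: run the two preparation steps above for a given marketplace
# and threshold, optionally with a fixed random seed and distractor step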
def prepare_data(setname, threshold, step=1, random_seed=None):
if random_seed is None:
tar_path = prepare_training_validation(setname, threshold, step=step)
print('----')
prepare_train_val_label(setname, tar_path)
print('---------------')
else:
tar_path = prepare_training_validation(setname, threshold, step=step, random_seed=random_seed)
print('----')
prepare_train_val_label(setname, tar_path, random_seed=random_seed)
print('---------------')
prepare_data('Agora', 10)
prepare_data('Agora', 20)
prepare_data('Agora', 40)
prepare_data('Evolution', 10)
prepare_data('Evolution', 20)
prepare_data('Evolution', 40)
prepare_data('SilkRoad2', 10)
prepare_data('SilkRoad2', 20)
prepare_data('SilkRoad2', 40)
prepare_data('Agora', 10, step=2)
prepare_data('Agora', 20, step=2)
prepare_data('Agora', 40, step=2)
prepare_data('Evolution', 10, step=2)
prepare_data('Evolution', 20, step=2)
prepare_data('Evolution', 40, step=2)
prepare_data('SilkRoad2', 10, step=2)
prepare_data('SilkRoad2', 20, step=2)
prepare_data('SilkRoad2', 40, step=2)
prepare_data('Agora_dedup', 10)
prepare_data('Agora_dedup', 20)
prepare_data('Agora_dedup', 40, random_seed=2017121605)
prepare_data('Evolution_dedup', 10, random_seed=2017121605)
prepare_data('Evolution_dedup', 20)
prepare_data('Evolution_dedup', 40)
prepare_data('SilkRoad2_dedup', 10)
prepare_data('SilkRoad2_dedup', 20)
prepare_data('SilkRoad2_dedup', 40)
prepare_data('Agora_dedup', 10, step=2)
prepare_data('Agora_dedup', 20, step=2)
prepare_data('Agora_dedup', 40, step=2)
prepare_data('Evolution_dedup', 10, step=2)
prepare_data('Evolution_dedup', 20, step=2)
prepare_data('Evolution_dedup', 40, step=2)
prepare_data('SilkRoad2_dedup', 10, step=2)
prepare_data('SilkRoad2_dedup', 20, step=2)
prepare_data('SilkRoad2_dedup', 40, step=2)
```
# WeatherPy
----
#### Note
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
### Observations:
The weather does get warmer the closer you get to the equator, and colder the further away you get.
Amongst my graphs, the "Northern Hemisphere - Max Temp vs. Latitude Linear Regression" has the closest to 1 r-squared value which means most of the data fits the regression model.
Humidity, Cloudiness, and Wind Speed don't appear to be affected by latitude.
```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import json
import requests
import time
from scipy.stats import linregress
# Import API key
from api_keys import weather_api_key
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "../output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
```
## Generate Cities List
```
# List for holding lat_lngs and cities
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(low=-90.000, high=90.000, size=1500)
lngs = np.random.uniform(low=-180.000, high=180.000, size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
    # If the city is unique, then add it to our cities list
if city not in cities:
cities.append(city)
# Print the city count to confirm sufficient count
print(len(cities))
```
### Perform API Calls
* Perform a weather check on each city using a series of successive API calls.
* Include a print log of each city as it's being processed (with the city number and city name).
```
# empty arrays to be appended
city = []
cloudiness = []
country = []
date = []
humidity = []
lat = []
lng = []
max_temp = []
wind_speed = []
# base url
url = "http://api.openweathermap.org/data/2.5/weather?"
units = "imperial"
# start of the print
print("Beginning Data Retrieval")
print("-----------------------------")
# start counter
count = 0
for citi in cities:
# Build query URL
query_url = f"{url}appid={weather_api_key}&q={citi}&units={units}"
# Get weather data
weather_json = requests.get(query_url).json()
# increase count
count += 1
try:
#print city name
name = weather_json["name"]
print(f"Processing Record {count} of {len(cities)}: {name}")
#append arrays
city.append(weather_json["name"])
cloudiness.append(weather_json["clouds"]["all"])
country.append(weather_json["sys"]["country"])
date.append(weather_json["dt"])
humidity.append(weather_json["main"]["humidity"])
max_temp.append(weather_json["main"]["temp_max"])
wind_speed.append(weather_json["wind"]["speed"])
lat.append(weather_json["coord"]["lat"])
lng.append(weather_json["coord"]["lon"])
except:
print("City not found. Skipping...")
print("-----------------------------")
print("Data Retrieval Complete")
print("-----------------------------")
```
### Convert Raw Data to DataFrame
* Export the city data into a .csv.
* Display the DataFrame
```
# to convert timestamp to regular date
from datetime import datetime
converted_date = []
for dt in date:
converted_date.append(datetime.fromtimestamp(dt))
# build a DataFrame from the collected results
df = pd.DataFrame({
"City": city,
"Country": country,
"Date": converted_date,
"Latitude": lat,
"Longitude": lng,
"Cloudiness": cloudiness,
"Humidity": humidity,
"Max Temperature": max_temp,
"Wind Speed": wind_speed
})
# save data frame as csv
df.to_csv("../output_data/cities.csv", encoding='utf-8', index=False)
# view number of items per column
df.count()
# print data frame
df
```
## Inspect the data and remove the cities where the humidity > 100%.
----
Skip this step if there are no cities that have humidity > 100%.
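For completeness, a minimal sketch of what that cleanup would look like; if no city exceeds 100% humidity, it simply leaves the data unchanged:
```
# drop any cities reporting a humidity above 100%
humid_outliers = df[df["Humidity"] > 100].index
clean_df = df.drop(humid_outliers)
clean_df.count()
```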
## Plotting the Data
* Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.
* Save the plotted figures as .pngs.
## Latitude vs. Temperature Plot
```
# create scatter plot
plt.scatter(df["Latitude"], df["Max Temperature"])
# add labels and title
plt.title(f"City Latitude vs. Max Temperature {converted_date[0]}")
plt.xlabel("Latitude")
plt.ylabel("Max Temperature (F)")
# add grid lines
plt.grid()
# show and save pic
plt.savefig("../output_data/1LatvTemp.png")
plt.show()
```
# Graph Explanation :
This scatterplot shows the relationship between the max temperature (F) in each city based on its latitude. Based on the results, it seems the closer you get to the equator the hotter it gets, and the further away, the colder it gets.
## Latitude vs. Humidity Plot
```
# create scatter plot
plt.scatter(df["Latitude"], df["Humidity"])
# add labels and title
plt.title(f"City Latitude vs. Humidity {converted_date[0]}")
plt.xlabel("Latitude")
plt.ylabel("Humidity (%)")
# add grid lines
plt.grid()
# show and save pic
plt.savefig("../output_data/2LatvHumid.png")
plt.show()
```
# Graph Explanation :
This scatterplot shows the relationship between the humidity (%) in each city based on its latitude. Based on the results, it does not seem that latitude affects humidity since the data points are all over the place.
## Latitude vs. Cloudiness Plot
```
# create scatter plot
plt.scatter(df["Latitude"], df["Cloudiness"])
# add labels and title
plt.title(f"City Latitude vs. Cloudiness {converted_date[0]}")
plt.xlabel("Latitude")
plt.ylabel("Cloudiness (%)")
# add grid lines
plt.grid()
# show and save pic
plt.savefig("../output_data/3LatvCloud.png")
plt.show()
```
# Graph Explanation :
This scatterplot shows the relationship between the cloudiness (%) in each city based on its latitude. Based on the results, it does not seem that latitude affects cloudiness since the data points are all over the place.
## Latitude vs. Wind Speed Plot
```
# create scatter plot
plt.scatter(df["Latitude"], df["Wind Speed"])
# add labels and title
plt.title(f"City Latitude vs. Wind Speed {converted_date[0]}")
plt.xlabel("Latitude")
plt.ylabel("Wind Speed (mph)")
# add grid lines
plt.grid()
# show and save pic
plt.savefig("../output_data/4LatvWind.png")
plt.show()
```
# Graph Explanation :
This scatterplot shows the relationship between the wind speed (mph) in each city based on its latitude. Based on the results, it does not seem that latitude affects wind speed since the data points are all over the place.
## Linear Regression
```
# x axis for north and south
nx_values = []
sx_values = []
# y axis for temp
ny_values = []
sy_values = []
# y axis for humidity
nhy_values = []
shy_values = []
# y axis for cloudiness
ncy_values = []
scy_values = []
# y axis for wind speed
nwy_values = []
swy_values = []
# create index
indexes = range(0, len(df["City"]))
# append arrays
for index in indexes:
if df["Latitude"][index] >= 0:
nx_values.append(df["Latitude"][index])
ny_values.append(df["Max Temperature"][index])
nhy_values.append(df["Humidity"][index])
ncy_values.append(df["Cloudiness"][index])
nwy_values.append(df["Wind Speed"][index])
if df["Latitude"][index] < 0:
sx_values.append(df["Latitude"][index])
sy_values.append(df["Max Temperature"][index])
shy_values.append(df["Humidity"][index])
scy_values.append(df["Cloudiness"][index])
swy_values.append(df["Wind Speed"][index])
# convert all array values from float to integer
nx_values = np.array(nx_values, dtype = "int")
sx_values = np.array(sx_values, dtype = "int")
ny_values = np.array(ny_values, dtype = "int")
sy_values = np.array(sy_values, dtype = "int")
nhy_values = np.array(nhy_values, dtype = "int")
shy_values = np.array(shy_values, dtype = "int")
ncy_values = np.array(ncy_values, dtype = "int")
scy_values = np.array(scy_values, dtype = "int")
nwy_values = np.array(nwy_values, dtype = "int")
swy_values = np.array(swy_values, dtype = "int")
print(len(nx_values))
print(len(sx_values))
```
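Side note: the same hemisphere split can be written more compactly with boolean masks on the DataFrame. A minimal sketch, for reference only (the loop above is what the rest of this notebook uses):
```
# equivalent split using boolean indexing
northern = df[df["Latitude"] >= 0]
southern = df[df["Latitude"] < 0]
print(len(northern), len(southern))
```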
# Northern Hemisphere - Max Temp vs. Latitude Linear Regression
```
(slope, intercept, rvalue, pvalue, stderr) = linregress(nx_values, ny_values)
regress_values = nx_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(nx_values, ny_values)
plt.plot(nx_values,regress_values,"r-")
plt.annotate(line_eq,(6,10),fontsize=15,color="red")
plt.title("Northern Latitude Cities vs. Max Temperature")
plt.xlabel("Latitude")
plt.ylabel("Max Temperature (F)")
print(f"The r-squared is: {rvalue}")
# show and save pic
plt.savefig("../output_data/5NLatvTemp.png")
plt.show()
```
### Northern Hemisphere: Latitude vs. Max Temp Analysis :
The linear regression line shows a downward slope: temperature decreases as latitude increases. It can be concluded that as we move away from the equator the temperature gets lower.
# Southern Hemisphere - Max Temp vs. Latitude Linear Regression
```
(slope, intercept, rvalue, pvalue, stderr) = linregress(sx_values, sy_values)
regress_values = sx_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(sx_values, sy_values)
plt.plot(sx_values,regress_values,"r-")
plt.annotate(line_eq,(-30,50),fontsize=15,color="red")
plt.title("Southern Latitude Cities vs. Max Temperature")
plt.xlabel("Latitude")
plt.ylabel("Max Temperature (F)")
print(f"The r-squared is: {rvalue}")
# show and save pic
plt.savefig("../output_data/6SLatvTemp.png")
plt.show()
```
### Southern Hemisphere: Latitude vs. Max Temp Analysis :
The linear regression line shows an upward slope: temperature increases as latitude decreases. It can be concluded that as we move closer to the equator the temperature gets higher.
### Graph Explanation :
These scatterplots show the relationship between the max temperature (F) in each northern and southern city based on its latitude. Based on the results, it seems the closer you get to the equator the hotter it gets, and the further away, the colder it gets. The Northern graph has a higher r-squared value because it has more data points (391 vs. 171) from the original sample.
# Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression
```
(slope, intercept, rvalue, pvalue, stderr) = linregress(nx_values, nhy_values)
regress_values = nx_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(nx_values, nhy_values)
plt.plot(nx_values,regress_values,"r-")
plt.annotate(line_eq,(45,10),fontsize=15,color="red")
plt.title("Northern Latitude Cities vs. Humidity")
plt.xlabel("Latitude")
plt.ylabel("Humidity (%)")
print(f"The r-squared is: {rvalue}")
# show and save pic
plt.savefig("../output_data/7NLatvHumid.png")
plt.show()
```
### Northern Hemisphere: Latitude vs. Humidity Plot
The regression line shows a slight upward trend; however, it cannot be used to draw a conclusion.
# Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression
```
(slope, intercept, rvalue, pvalue, stderr) = linregress(sx_values, shy_values)
regress_values = sx_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(sx_values, shy_values)
plt.plot(sx_values,regress_values,"r-")
plt.annotate(line_eq,(-50,55),fontsize=15,color="red")
plt.title("Southern Latitude Cities vs. Humidity")
plt.xlabel("Latitude")
plt.ylabel("Humidity (%)")
print(f"The r-squared is: {rvalue}")
# show and save pic
plt.savefig("../output_data/8SLatvHumid.png")
plt.show()
```
### Southern Hemisphere: Latitude vs. Humidity Plot
The regression line shows a slight downward trend; however, it cannot be used to draw a conclusion.
### Graph Explanation :
These scatterplots show the relationship between the humidity (%) in each northern and southern city based on its latitude. Based on the results, it does not seem that latitude affects humidity, since the data points are all over the place.
# Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
```
(slope, intercept, rvalue, pvalue, stderr) = linregress(nx_values, ncy_values)
regress_values = nx_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(nx_values, ncy_values)
plt.plot(nx_values,regress_values,"r-")
plt.annotate(line_eq,(45,55),fontsize=15,color="red")
plt.title("Northern Latitude Cities vs. Cloudiness")
plt.xlabel("Latitude")
plt.ylabel("Cloudiness (%)")
print(f"The r-squared is: {rvalue}")
# show and save pic
plt.savefig("../output_data/9NLatvCloud.png")
plt.show()
```
### Northern Hemisphere - Cloudiness (%) vs. Latitude Analysis
The regression line shows a slight upward trend; however, it cannot be used to draw a conclusion.
# Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
```
(slope, intercept, rvalue, pvalue, stderr) = linregress(sx_values, scy_values)
regress_values = sx_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(sx_values, scy_values)
plt.plot(sx_values,regress_values,"r-")
plt.annotate(line_eq,(-45,30),fontsize=15,color="red")
plt.title("Southern Latitude Cities vs. Cloudiness")
plt.xlabel("Latitude")
plt.ylabel("Cloudiness (%)")
print(f"The r-squared is: {rvalue}")
# show and save pic
plt.savefig("../output_data/10SLatvCloud.png")
plt.show()
```
### Southern Hemisphere - Cloudiness (%) vs. Latitude Analysis
The regression line shows a slight downward trend; however, it cannot be used to draw a conclusion.
### Graph Explanation :
These scatterplots show the relationship between the cloudiness (%) in each northern and southern city based on its latitude. Based on the results, it does not seem that latitude affects cloudiness, since the data points are all over the place.
# Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
```
(slope, intercept, rvalue, pvalue, stderr) = linregress(nx_values, nwy_values)
regress_values = nx_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(nx_values, nwy_values)
plt.plot(nx_values,regress_values,"r-")
plt.annotate(line_eq,(30,25),fontsize=15,color="red")
plt.title("Northern Latitude Cities vs. Wind Speed")
plt.xlabel("Latitude")
plt.ylabel("Wind Speed (mph)")
print(f"The r-squared is: {rvalue}")
# show and save pic
plt.savefig("../output_data/11NLatvWind.png")
plt.show()
```
### Northern Hemisphere - Wind Speed vs. Latitude Analysis
The regression line shows a slight upward trend; however, it cannot be used to draw a conclusion.
# Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
```
(slope, intercept, rvalue, pvalue, stderr) = linregress(sx_values, swy_values)
regress_values = sx_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(sx_values, swy_values)
plt.plot(sx_values,regress_values,"r-")
plt.annotate(line_eq,(-30,20),fontsize=15,color="red")
plt.title("Southern Latitude Cities vs. Wind Speed")
plt.xlabel("Latitude")
plt.ylabel("Wind Speed (mph)")
print(f"The r-squared is: {rvalue}")
# show and save pic
plt.savefig("../output_data/12sLatvWind.png")
plt.show()
```
### Southern Hemisphere - Wind Speed vs. Latitude Analysis
The regression line shows a slight downward trend; however, it cannot be used to draw a concrete conclusion.
### Graph Explanation :
These scatterplots show the relationship between the wind speed (mph) in each northern and southern city based on its latitude.
Based on the results, it does not seem that latitude affects wind speed, since the data points are all over the place.
|
github_jupyter
|
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import json
import requests
import time
from scipy.stats import linregress
# Import API key
from api_keys import weather_api_key
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "../output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
# List for holding lat_lngs and cities
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(low=-90.000, high=90.000, size=1500)
lngs = np.random.uniform(low=-180.000, high=180.000, size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
# If the city is unique, then add it to a our cities list
if city not in cities:
cities.append(city)
# Print the city count to confirm sufficient count
print(len(cities))
# empty arrays to be appended
city = []
cloudiness = []
country = []
date = []
humidity = []
lat = []
lng = []
max_temp = []
wind_speed = []
# base url
url = "http://api.openweathermap.org/data/2.5/weather?"
units = "imperial"
# start of the print
print("Beginning Data Retrieval")
print("-----------------------------")
# start counter
count = 0
for citi in cities:
# Build query URL
query_url = f"{url}appid={weather_api_key}&q={citi}&units={units}"
# Get weather data
weather_json = requests.get(query_url).json()
# increase count
count += 1
try:
#print city name
name = weather_json["name"]
print(f"Processing Record {count} of {len(cities)}: {name}")
#append arrays
city.append(weather_json["name"])
cloudiness.append(weather_json["clouds"]["all"])
country.append(weather_json["sys"]["country"])
date.append(weather_json["dt"])
humidity.append(weather_json["main"]["humidity"])
max_temp.append(weather_json["main"]["temp_max"])
wind_speed.append(weather_json["wind"]["speed"])
lat.append(weather_json["coord"]["lat"])
lng.append(weather_json["coord"]["lon"])
except:
print("City not found. Skipping...")
print("-----------------------------")
print("Data Retrieval Complete")
print("-----------------------------")
# to convert timestamp to regular date
from datetime import datetime
converted_date = []
for dt in date:
converted_date.append(datetime.fromtimestamp(dt))
# read csv file
df = pd.DataFrame({
"City": city,
"Country": country,
"Date": converted_date,
"Latitude": lat,
"Longitude": lng,
"Cloudiness": cloudiness,
"Humidity": humidity,
"Max Temperature": max_temp,
"Wind Speed": wind_speed
})
# save data frame as csv
df.to_csv("../output_data/cities.csv", encoding='utf-8', index=False)
# view number of items per column
df.count()
# print data frame
df
# create scatter plot
plt.scatter(df["Latitude"], df["Max Temperature"])
# add labels and title
plt.title(f"City Latitude vs. Max Temperature {converted_date[0]}")
plt.xlabel("Latitude")
plt.ylabel("Max Temperature (F)")
# add grid lines
plt.grid()
# show and save pic
plt.savefig("../output_data/1LatvTemp.png")
plt.show()
# create scatter plot
plt.scatter(df["Latitude"], df["Humidity"])
# add labels and title
plt.title(f"City Latitude vs. Humidity {converted_date[0]}")
plt.xlabel("Latitude")
plt.ylabel("Humidity (%)")
# add grid lines
plt.grid()
# show and save pic
plt.savefig("../output_data/2LatvHumid.png")
plt.show()
# create scatter plot
plt.scatter(df["Latitude"], df["Cloudiness"])
# add labels and title
plt.title(f"City Latitude vs. Cloudiness {converted_date[0]}")
plt.xlabel("Latitude")
plt.ylabel("Cloudiness (%)")
# add grid lines
plt.grid()
# show and save pic
plt.savefig("../output_data/3LatvCloud.png")
plt.show()
# create scatter plot
plt.scatter(df["Latitude"], df["Wind Speed"])
# add labels and title
plt.title(f"City Latitude vs. Wind Speed {converted_date[0]}")
plt.xlabel("Latitude")
plt.ylabel("Wind Speed (mph)")
# add grid lines
plt.grid()
# show and save pic
plt.savefig("../output_data/4LatvWind.png")
plt.show()
# x axis values for the northern and southern hemispheres
nx_values = []
sx_values = []
# y axis for temp
ny_values = []
sy_values = []
# y axis for humidity
nhy_values = []
shy_values = []
# y axis for cloudiness
ncy_values = []
scy_values = []
# y axis for wind speed
nwy_values = []
swy_values = []
# create index
indexes = range(0, len(df["City"]))
# append arrays
for index in indexes:
if df["Latitude"][index] >= 0:
nx_values.append(df["Latitude"][index])
ny_values.append(df["Max Temperature"][index])
nhy_values.append(df["Humidity"][index])
ncy_values.append(df["Cloudiness"][index])
nwy_values.append(df["Wind Speed"][index])
if df["Latitude"][index] < 0:
sx_values.append(df["Latitude"][index])
sy_values.append(df["Max Temperature"][index])
shy_values.append(df["Humidity"][index])
scy_values.append(df["Cloudiness"][index])
swy_values.append(df["Wind Speed"][index])
# convert all array values from float to integer
nx_values = np.array(nx_values, dtype = "int")
sx_values = np.array(sx_values, dtype = "int")
ny_values = np.array(ny_values, dtype = "int")
sy_values = np.array(sy_values, dtype = "int")
nhy_values = np.array(nhy_values, dtype = "int")
shy_values = np.array(shy_values, dtype = "int")
ncy_values = np.array(ncy_values, dtype = "int")
scy_values = np.array(scy_values, dtype = "int")
nwy_values = np.array(nwy_values, dtype = "int")
swy_values = np.array(swy_values, dtype = "int")
print(len(nx_values))
print(len(sx_values))
(slope, intercept, rvalue, pvalue, stderr) = linregress(nx_values, ny_values)
regress_values = nx_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(nx_values, ny_values)
plt.plot(nx_values,regress_values,"r-")
plt.annotate(line_eq,(6,10),fontsize=15,color="red")
plt.title("Northern Latitude Cities vs. Max Temperature")
plt.xlabel("Latitude")
plt.ylabel("Max Temperature (F)")
print(f"The r-squared is: {rvalue}")
# show and save pic
plt.savefig("../output_data/5NLatvTemp.png")
plt.show()
(slope, intercept, rvalue, pvalue, stderr) = linregress(sx_values, sy_values)
regress_values = sx_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(sx_values, sy_values)
plt.plot(sx_values,regress_values,"r-")
plt.annotate(line_eq,(-30,50),fontsize=15,color="red")
plt.title("Southern Latitude Cities vs. Max Temperature")
plt.xlabel("Latitude")
plt.ylabel("Max Temperature (F)")
print(f"The r-squared is: {rvalue}")
# show and save pic
plt.savefig("../output_data/6SLatvTemp.png")
plt.show()
(slope, intercept, rvalue, pvalue, stderr) = linregress(nx_values, nhy_values)
regress_values = nx_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(nx_values, nhy_values)
plt.plot(nx_values,regress_values,"r-")
plt.annotate(line_eq,(45,10),fontsize=15,color="red")
plt.title("Northern Latitude Cities vs. Humidity")
plt.xlabel("Latitude")
plt.ylabel("Humidity (%)")
print(f"The r-squared is: {rvalue}")
# show and save pic
plt.savefig("../output_data/7NLatvHumid.png")
plt.show()
(slope, intercept, rvalue, pvalue, stderr) = linregress(sx_values, shy_values)
regress_values = sx_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(sx_values, shy_values)
plt.plot(sx_values,regress_values,"r-")
plt.annotate(line_eq,(-50,55),fontsize=15,color="red")
plt.title("Southern Latitude Cities vs. Humidity")
plt.xlabel("Latitude")
plt.ylabel("Humidity (%)")
print(f"The r-squared is: {rvalue}")
# show and save pic
plt.savefig("../output_data/8SLatvHumid.png")
plt.show()
(slope, intercept, rvalue, pvalue, stderr) = linregress(nx_values, ncy_values)
regress_values = nx_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(nx_values, ncy_values)
plt.plot(nx_values,regress_values,"r-")
plt.annotate(line_eq,(45,55),fontsize=15,color="red")
plt.title("Northern Latitude Cities vs. Cloudiness")
plt.xlabel("Latitude")
plt.ylabel("Cloudiness (%)")
print(f"The r-squared is: {rvalue}")
# show and save pic
plt.savefig("../output_data/9NLatvCloud.png")
plt.show()
(slope, intercept, rvalue, pvalue, stderr) = linregress(sx_values, scy_values)
regress_values = sx_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(sx_values, scy_values)
plt.plot(sx_values,regress_values,"r-")
plt.annotate(line_eq,(-45,30),fontsize=15,color="red")
plt.title("Southern Latitude Cities vs. Cloudiness")
plt.xlabel("Latitude")
plt.ylabel("Cloudiness (%)")
print(f"The r-squared is: {rvalue}")
# show and save pic
plt.savefig("../output_data/10SLatvCloud.png")
plt.show()
(slope, intercept, rvalue, pvalue, stderr) = linregress(nx_values, nwy_values)
regress_values = nx_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(nx_values, nwy_values)
plt.plot(nx_values,regress_values,"r-")
plt.annotate(line_eq,(30,25),fontsize=15,color="red")
plt.title("Northern Latitude Cities vs. Wind Speed")
plt.xlabel("Latitude")
plt.ylabel("Wind Speed (mph)")
print(f"The r-squared is: {rvalue}")
# show and save pic
plt.savefig("../output_data/11NLatvWind.png")
plt.show()
(slope, intercept, rvalue, pvalue, stderr) = linregress(sx_values, swy_values)
regress_values = sx_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(sx_values, swy_values)
plt.plot(sx_values,regress_values,"r-")
plt.annotate(line_eq,(-30,20),fontsize=15,color="red")
plt.title("Southern Latitude Cities vs. Wind Speed")
plt.xlabel("Latitude")
plt.ylabel("Wind Speed (mph)")
print(f"The r-squared is: {rvalue}")
# show and save pic
plt.savefig("../output_data/12sLatvWind.png")
plt.show()
| 0.273186 | 0.85446 |
## Apple Health Processor
-----
## Dependencies and Libraries
```
from datetime import date, datetime, timedelta as td
import pytz
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
```
------
## Functions for Processing Dates and Timezones
```
# functions to convert UTC to Shanghai time zone and extract date/time elements
convert_tz = lambda x: x.to_pydatetime().replace(tzinfo=pytz.utc).astimezone(pytz.timezone('Asia/Shanghai'))
get_year = lambda x: convert_tz(x).year
get_month = lambda x: '{}-{:02}'.format(convert_tz(x).year, convert_tz(x).month) #inefficient
get_date = lambda x: '{}-{:02}-{:02}'.format(convert_tz(x).year, convert_tz(x).month, convert_tz(x).day) #inefficient
get_day = lambda x: convert_tz(x).day
get_hour = lambda x: convert_tz(x).hour
get_day_of_week = lambda x: convert_tz(x).weekday()
```
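The per-row lambdas above are flagged as inefficient. As a sketch of a vectorized alternative (assuming the `startDate` column parses as tz-naive UTC timestamps, which is what the lambdas also assume), the same fields can be derived with pandas' datetime accessor:
```
# Vectorized alternative to the per-row lambdas (sketch; assumes a parsed,
# tz-naive UTC `startDate` column, as in the sections below).
local = pd.to_datetime(steps['startDate']).dt.tz_localize('UTC').dt.tz_convert('Asia/Shanghai')
steps['year'] = local.dt.year
steps['month'] = local.dt.strftime('%Y-%m')
steps['date'] = local.dt.strftime('%Y-%m-%d')
steps['day'] = local.dt.day
steps['hour'] = local.dt.hour
steps['dow'] = local.dt.weekday
```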
---
## Steps
```
steps = pd.read_csv("data/StepCount.csv")
steps.tail()
# parse out date and time elements as Shanghai time
steps['startDate'] = pd.to_datetime(steps['startDate'])
steps['year'] = steps['startDate'].map(get_year)
steps['month'] = steps['startDate'].map(get_month)
steps['date'] = steps['startDate'].map(get_date)
steps['day'] = steps['startDate'].map(get_day)
steps['hour'] = steps['startDate'].map(get_hour)
steps['dow'] = steps['startDate'].map(get_day_of_week)
steps.head()
steps.columns
steps_by_date = steps.groupby(['date'])['value'].sum().reset_index(name='Steps')
steps_by_date.tail()
# steps_by_date.tail(10)
steps_by_date.to_csv("data/steps_per_day.csv", index=False)
```
-----
### Use Only Watch Steps, Remove Phone Steps
```
steps_device_by_year = steps.groupby(['year', 'sourceName'])['value'].sum().reset_index(name='Steps')
steps_device_by_year
steps.sourceName.unique()
# drop phone steps
steps = steps[steps.sourceName == 'Mark’s Apple\xa0Watch']
# steps.head()
```
## Rolling Average
```
steps_by_date['RollingMeanSteps'] = steps_by_date.Steps.rolling(window=10, center=True).mean()
steps_by_date.plot(x='date', y='RollingMeanSteps', title= 'Daily step counts rolling mean over 10 days', figsize=[10, 6])
```
## Steps by Day of Week
```
steps_by_date['date'] = pd.to_datetime(steps_by_date['date'])
steps_by_date['dow'] = steps_by_date['date'].dt.weekday
data = steps_by_date.groupby(['dow'])['Steps'].mean()
fig, ax = plt.subplots(figsize=[10, 6])
ax = data.plot(kind='bar', x='day_of_week')
n_groups = len(data)
index = np.arange(n_groups)
opacity = 0.75
#fig, ax = plt.subplots(figsize=[10, 6])
ax.yaxis.grid(True)
plt.suptitle('Average Steps by Day of the Week', fontsize=16)
dow_labels = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
plt.xticks(index, dow_labels, rotation=45)
plt.xlabel('Day of Week', fontsize=12, color='red')
```
------
## Monthly Steps
```
total_steps_by_month = steps.groupby(['month'])['value'].sum().reset_index(name='Steps')
total_steps_by_month
# a bit of a hackish solution, could use improvement
dataset = total_steps_by_month
chart_title = 'Number of Steps per month'
n_groups = len(dataset)
index = np.arange(n_groups)
ax = dataset.plot(kind='line', figsize=[12, 5], linewidth=4, alpha=1, marker='o', color='#6684c1',
markeredgecolor='#6684c1', markerfacecolor='w', markersize=8, markeredgewidth=2)
# ax.set_xlim((year_counts.index[0], year_counts.index[-1]))
ax.yaxis.grid(True)
ax.xaxis.grid(True)
# ax.set_ylim(0, 1000)
ax.set_xticks(index)
ax.set_ylabel('Step Count')
# ax.set_xlabel('')
plt.xticks(index, dataset.month, rotation=90)
ax.set_title(chart_title)
plt.show()
```
----
## Steps Per Year
```
total_steps_by_years = steps.groupby(['year'])['value'].sum().reset_index(name='Steps')
total_steps_by_years
dataset = total_steps_by_years
n_groups = len(dataset)
opacity = 0.5
fig, ax = plt.subplots(figsize=[10, 6])
ax.yaxis.grid(True)
index = np.arange(n_groups)
bar_width = 0.4
data = plt.bar(index, dataset.Steps, bar_width,
alpha=opacity,
color='c',
label='Steps')
data[-1].set_color('r')
plt.ylabel('Steps')
plt.title('Total Steps Per Year')
plt.xticks(index, dataset.year, rotation=45)
plt.legend()
plt.tight_layout()
plt.show()
```
-----
## Steps by Hour of Day
```
hour_steps = steps.groupby(['hour'])['value'].sum().reset_index(name='Steps')
# hour_steps
ax = hour_steps.Steps.plot(kind='line', figsize=[10, 5], linewidth=4, alpha=1, marker='o', color='#6684c1',
markeredgecolor='#6684c1', markerfacecolor='w', markersize=8, markeredgewidth=2)
xlabels = hour_steps.index.map(lambda x: '{:02}:00'.format(x))
ax.set_xticks(range(len(xlabels)))
ax.set_xticklabels(xlabels, rotation=45, rotation_mode='anchor', ha='right')
# ax.set_xlim((hour_steps.index[0], hour_steps.index[-1]))
ax.yaxis.grid(True)
# ax.set_ylim((0, 1300))
ax.set_ylabel('Steps')
ax.set_xlabel('')
ax.set_title('Steps by hour of the day')
plt.show()
```
-----
```
weight = pd.read_csv("data/BodyMass.csv")
# weight.columns
# parse out date and time elements as Shanghai time
weight['startDate'] = pd.to_datetime(weight['startDate'])
weight['year'] = weight['startDate'].map(get_year)
weight['month'] = weight['startDate'].map(get_month)
weight['date'] = weight['startDate'].map(get_date)
weight.tail()
month_weight = weight.groupby(['month'])['value'].mean().reset_index(name='Weight')
# month_weight
# a bit of a hackish solution, could use improvement
dataset = month_weight
chart_title = 'Monthly Weight'
n_groups = len(dataset)
index = np.arange(n_groups)
ax = dataset.plot(kind='line', figsize=[12, 5], linewidth=4, alpha=1, marker='o', color='#6684c1',
markeredgecolor='#6684c1', markerfacecolor='w', markersize=8, markeredgewidth=2)
# ax.set_xlim((year_counts.index[0], year_counts.index[-1]))
ax.yaxis.grid(True)
ax.xaxis.grid(True)
# ax.set_ylim(0, 1000)
ax.set_xticks(index)
ax.set_ylabel('Weight (lbs)')
plt.xticks(index, dataset.month, rotation=90)
ax.set_title(chart_title)
plt.show()
# convert to kg
month_weight['kg'] = round(month_weight['Weight'] / 2.205, 2)
month_weight.columns
# a bit of a hackish solution, could use improvement
dataset = month_weight[['month', 'kg']]
chart_title = 'Monthly Weight'
n_groups = len(dataset)
index = np.arange(n_groups)
ax = dataset.plot(kind='line', figsize=[12, 5], linewidth=4, alpha=1, marker='o', color='#6684c1',
markeredgecolor='#6684c1', markerfacecolor='w', markersize=8, markeredgewidth=2)
# ax.set_xlim((year_counts.index[0], year_counts.index[-1]))
ax.yaxis.grid(True)
ax.xaxis.grid(True)
# ax.set_ylim(0, 1000)
ax.set_xticks(index)
ax.set_ylabel('Weight (kg)')
plt.xticks(index, dataset.month, rotation=90)
ax.set_title(chart_title)
plt.show()
```
## TODO: Heart Rate
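As a starting point for this TODO, a sketch that mirrors the step and weight processing above; the file name `data/HeartRate.csv` and its columns are assumptions based on the other Apple Health exports used in this notebook:
```
# Sketch for the heart-rate TODO (file name and columns are assumptions,
# mirroring the StepCount/BodyMass exports above).
heart = pd.read_csv("data/HeartRate.csv")
heart['startDate'] = pd.to_datetime(heart['startDate'])
heart['date'] = heart['startDate'].map(get_date)
heart['hour'] = heart['startDate'].map(get_hour)

# average heart rate per day and per hour of the day
hr_by_date = heart.groupby('date')['value'].mean().reset_index(name='HeartRate')
hr_by_hour = heart.groupby('hour')['value'].mean().reset_index(name='HeartRate')
hr_by_date.plot(x='date', y='HeartRate', title='Average Heart Rate per Day', figsize=[10, 6])
```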
------
# Sleep
```
sleep_raw = pd.read_csv("data/SleepAnalysis.csv")
sleep_raw.tail()
# parse out date and time elements as Shanghai time
sleep_raw['startDate'] = pd.to_datetime(sleep_raw['startDate'])
sleep_raw['year'] = sleep_raw['startDate'].map(get_year)
sleep_raw['month'] = sleep_raw['startDate'].map(get_month)
sleep_raw['date'] = sleep_raw['startDate'].map(get_date)
sleep_raw['day'] = sleep_raw['startDate'].map(get_day)
sleep_raw['hour'] = sleep_raw['startDate'].map(get_hour)
sleep_raw['dow'] = sleep_raw['startDate'].map(get_day_of_week)
```
|
github_jupyter
|
from datetime import date, datetime, timedelta as td
import pytz
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# functions to convert UTC to Shanghai time zone and extract date/time elements
convert_tz = lambda x: x.to_pydatetime().replace(tzinfo=pytz.utc).astimezone(pytz.timezone('Asia/Shanghai'))
get_year = lambda x: convert_tz(x).year
get_month = lambda x: '{}-{:02}'.format(convert_tz(x).year, convert_tz(x).month) #inefficient
get_date = lambda x: '{}-{:02}-{:02}'.format(convert_tz(x).year, convert_tz(x).month, convert_tz(x).day) #inefficient
get_day = lambda x: convert_tz(x).day
get_hour = lambda x: convert_tz(x).hour
get_day_of_week = lambda x: convert_tz(x).weekday()
steps = pd.read_csv("data/StepCount.csv")
steps.tail()
# parse out date and time elements as Shanghai time
steps['startDate'] = pd.to_datetime(steps['startDate'])
steps['year'] = steps['startDate'].map(get_year)
steps['month'] = steps['startDate'].map(get_month)
steps['date'] = steps['startDate'].map(get_date)
steps['day'] = steps['startDate'].map(get_day)
steps['hour'] = steps['startDate'].map(get_hour)
steps['dow'] = steps['startDate'].map(get_day_of_week)
steps.head()
steps.columns
steps_by_date = steps.groupby(['date'])['value'].sum().reset_index(name='Steps')
steps_by_date.tail()
# steps_by_date.tail(10)
steps_by_date.to_csv("data/steps_per_day.csv", index=False)
steps_device_by_year = steps.groupby(['year', 'sourceName'])['value'].sum().reset_index(name='Steps')
steps_device_by_year
steps.sourceName.unique()
# drop phone steps
steps = steps[steps.sourceName == 'Mark’s Apple\xa0Watch']
# steps.head()
steps_by_date['RollingMeanSteps'] = steps_by_date.Steps.rolling(window=10, center=True).mean()
steps_by_date.plot(x='date', y='RollingMeanSteps', title= 'Daily step counts rolling mean over 10 days', figsize=[10, 6])
steps_by_date['date'] = pd.to_datetime(steps_by_date['date'])
steps_by_date['dow'] = steps_by_date['date'].dt.weekday
data = steps_by_date.groupby(['dow'])['Steps'].mean()
fig, ax = plt.subplots(figsize=[10, 6])
ax = data.plot(kind='bar', x='day_of_week')
n_groups = len(data)
index = np.arange(n_groups)
opacity = 0.75
#fig, ax = plt.subplots(figsize=[10, 6])
ax.yaxis.grid(True)
plt.suptitle('Average Steps by Day of the Week', fontsize=16)
dow_labels = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
plt.xticks(index, dow_labels, rotation=45)
plt.xlabel('Day of Week', fontsize=12, color='red')
total_steps_by_month = steps.groupby(['month'])['value'].sum().reset_index(name='Steps')
total_steps_by_month
# a bit of a hackish solution, could use improvement
dataset = total_steps_by_month
chart_title = 'Number of Steps per month'
n_groups = len(dataset)
index = np.arange(n_groups)
ax = dataset.plot(kind='line', figsize=[12, 5], linewidth=4, alpha=1, marker='o', color='#6684c1',
markeredgecolor='#6684c1', markerfacecolor='w', markersize=8, markeredgewidth=2)
# ax.set_xlim((year_counts.index[0], year_counts.index[-1]))
ax.yaxis.grid(True)
ax.xaxis.grid(True)
# ax.set_ylim(0, 1000)
ax.set_xticks(index)
ax.set_ylabel('Step Count')
# ax.set_xlabel('')
plt.xticks(index, dataset.month, rotation=90)
ax.set_title(chart_title)
plt.show()
total_steps_by_years = steps.groupby(['year'])['value'].sum().reset_index(name='Steps')
total_steps_by_years
dataset = total_steps_by_years
n_groups = len(dataset)
opacity = 0.5
fig, ax = plt.subplots(figsize=[10, 6])
ax.yaxis.grid(True)
index = np.arange(n_groups)
bar_width = 0.4
data = plt.bar(index, dataset.Steps, bar_width,
alpha=opacity,
color='c',
label='Steps')
data[-1].set_color('r')
plt.ylabel('Steps')
plt.title('Total Steps Per Year')
plt.xticks(index, dataset.year, rotation=45)
plt.legend()
plt.tight_layout()
plt.show()
hour_steps = steps.groupby(['hour'])['value'].sum().reset_index(name='Steps')
# hour_steps
ax = hour_steps.Steps.plot(kind='line', figsize=[10, 5], linewidth=4, alpha=1, marker='o', color='#6684c1',
markeredgecolor='#6684c1', markerfacecolor='w', markersize=8, markeredgewidth=2)
xlabels = hour_steps.index.map(lambda x: '{:02}:00'.format(x))
ax.set_xticks(range(len(xlabels)))
ax.set_xticklabels(xlabels, rotation=45, rotation_mode='anchor', ha='right')
# ax.set_xlim((hour_steps.index[0], hour_steps.index[-1]))
ax.yaxis.grid(True)
# ax.set_ylim((0, 1300))
ax.set_ylabel('Steps')
ax.set_xlabel('')
ax.set_title('Steps by hour of the day')
plt.show()
weight = pd.read_csv("data/BodyMass.csv")
# weight.columns
# parse out date and time elements as Shanghai time
weight['startDate'] = pd.to_datetime(weight['startDate'])
weight['year'] = weight['startDate'].map(get_year)
weight['month'] = weight['startDate'].map(get_month)
weight['date'] = weight['startDate'].map(get_date)
weight.tail()
month_weight = weight.groupby(['month'])['value'].mean().reset_index(name='Weight')
# month_weight
# a bit of a hackish solution, could use improvement
dataset = month_weight
chart_title = 'Monthly Weight'
n_groups = len(dataset)
index = np.arange(n_groups)
ax = dataset.plot(kind='line', figsize=[12, 5], linewidth=4, alpha=1, marker='o', color='#6684c1',
markeredgecolor='#6684c1', markerfacecolor='w', markersize=8, markeredgewidth=2)
# ax.set_xlim((year_counts.index[0], year_counts.index[-1]))
ax.yaxis.grid(True)
ax.xaxis.grid(True)
# ax.set_ylim(0, 1000)
ax.set_xticks(index)
ax.set_ylabel('Weight (lbs)')
plt.xticks(index, dataset.month, rotation=90)
ax.set_title(chart_title)
plt.show()
# convert to kg
month_weight['kg'] = round(month_weight['Weight'] / 2.205, 2)
month_weight.columns
# a bit of a hackish solution, could use improvement
dataset = month_weight[['month', 'kg']]
chart_title = 'Monthly Weight'
n_groups = len(dataset)
index = np.arange(n_groups)
ax = dataset.plot(kind='line', figsize=[12, 5], linewidth=4, alpha=1, marker='o', color='#6684c1',
markeredgecolor='#6684c1', markerfacecolor='w', markersize=8, markeredgewidth=2)
# ax.set_xlim((year_counts.index[0], year_counts.index[-1]))
ax.yaxis.grid(True)
ax.xaxis.grid(True)
# ax.set_ylim(0, 1000)
ax.set_xticks(index)
ax.set_ylabel('Weight (kg)')
plt.xticks(index, dataset.month, rotation=90)
ax.set_title(chart_title)
plt.show()
sleep_raw = pd.read_csv("data/SleepAnalysis.csv")
sleep_raw.tail()
# parse out date and time elements as Shanghai time
sleep_raw['startDate'] = pd.to_datetime(sleep_raw['startDate'])
sleep_raw['year'] = sleep_raw['startDate'].map(get_year)
sleep_raw['month'] = sleep_raw['startDate'].map(get_month)
sleep_raw['date'] = sleep_raw['startDate'].map(get_date)
sleep_raw['day'] = sleep_raw['startDate'].map(get_day)
sleep_raw['hour'] = sleep_raw['startDate'].map(get_hour)
sleep_raw['dow'] = sleep_raw['startDate'].map(get_day_of_week)
| 0.630002 | 0.878053 |
# Use Keras to recognize hand-written digits with Watson Machine Learning REST API
This notebook contains steps and code to demonstrate support of Keras Deep Learning experiments in Watson Machine Learning Service. It introduces commands for getting data, training experiments, persisting pipelines, publishing models, deploying models and scoring.
Some familiarity with cURL is helpful. This notebook uses cURL examples.
## Learning goals
The learning goals of this notebook are:
- Working with Watson Machine Learning experiments to train Deep Learning models.
- Downloading computed models to local storage.
- Online deployment and scoring of trained model.
## Contents
This notebook contains the following parts:
1. [Setup](#setup)
2. [Experiment definition](#experiment_definition)
3. [Model definition](#model_definition)
4. [Experiment Run](#run)
5. [Historical runs](#runs)
6. [Deploy and Score](#deploy_and_score)
7. [Cleaning](#cleaning)
8. [Summary and next steps](#summary)
<a id="setup"></a>
## 1. Set up the environment
Before you use the sample code in this notebook, you must perform the following setup tasks:
- Create a <a href="https://console.ng.bluemix.net/catalog/services/ibm-watson-machine-learning/" target="_blank" rel="noopener no referrer">Watson Machine Learning (WML) Service</a> instance (a free plan is offered and information about how to create the instance can be found <a href="https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.html?context=analytics" target="_blank" rel="noopener no referrer">here</a>).
- Create a <a href="https://console.bluemix.net/catalog/infrastructure/cloud-object-storage" target="_blank" rel="noopener no referrer">Cloud Object Storage (COS)</a> instance (a lite plan is offered and information about how to order storage can be found <a href="https://console.bluemix.net/docs/services/cloud-object-storage/basics/order-storage.html#order-storage" target="_blank" rel="noopener no referrer">here</a>). <br/>**Note: When using Watson Studio, you already have a COS instance associated with the project you are running the notebook in.**
You can find your COS credentials in COS instance dashboard under the **Service credentials** tab.
Go to the **Endpoint** tab in the COS instance's dashboard to get the endpoint information.
Authenticate the Watson Machine Learning service on IBM Cloud.
Your Cloud API key can be generated by going to the [**Users** section of the Cloud console](https://cloud.ibm.com/iam#/users). From that page, click your name, scroll down to the **API Keys** section, and click **Create an IBM Cloud API key**. Give your key a name and click **Create**, then copy the created key and paste it below.
**NOTE:** You can also get service specific apikey by going to the [**Service IDs** section of the Cloud Console](https://cloud.ibm.com/iam/serviceids). From that page, click **Create**, then copy the created key and paste it below.
```
%env API_KEY=...
%env WML_ENDPOINT_URL=...
%env WML_INSTANCE_CRN="fill out only if you want to create a new space"
%env WML_INSTANCE_NAME=...
%env COS_CRN="fill out only if you want to create a new space"
%env COS_ENDPOINT=...
%env COS_BUCKET=...
%env COS_ACCESS_KEY_ID=...
%env COS_SECRET_ACCESS_KEY=...
%env COS_API_KEY=...
%env SPACE_ID="fill out only if you have space already created"
%env DATAPLATFORM_URL=https://api.dataplatform.cloud.ibm.com
%env AUTH_ENDPOINT=https://iam.cloud.ibm.com/oidc/token
```
<a id="wml_token"></a>
### Getting WML authorization token for further cURL calls
<a href="https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-curl#curl-token" target="_blank" rel="noopener no referrer">Example of cURL call to get WML token</a>
```
%%bash --out token
curl -sk -X POST \
--header "Content-Type: application/x-www-form-urlencoded" \
--header "Accept: application/json" \
--data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey" \
--data-urlencode "apikey=$API_KEY" \
"$AUTH_ENDPOINT" \
| cut -d '"' -f 4
%env TOKEN=$token
```
<a id="space_creation"></a>
### Space creation
**Tip:** If you do not have a `space` already created, please convert the three cells below to `code` and run them.
First of all, you need to create a `space` that will be used in all of your further cURL calls.
If you do not have a `space` already created, the cURL call below creates one.
<a href="https://cpd-spaces-api.eu-gb.cf.appdomain.cloud/#/Spaces/spaces_create"
target="_blank" rel="noopener no referrer">Space creation</a>
Space creation is asynchronous, so you need to check the space creation status after the creation call.
Make sure that your newly created space is `active`.
<a href="https://cpd-spaces-api.eu-gb.cf.appdomain.cloud/#/Spaces/spaces_get"
target="_blank" rel="noopener no referrer">Get space information</a>
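For reference, a hedged Python sketch of those converted cells, based on the environment variables defined above; the endpoint, payload fields, and response structure are assumptions that should be checked against the linked Spaces API documentation:
```
# Sketch only: create a deployment space and poll until it is active.
# Payload fields and response field names are assumptions -- verify against the Spaces API docs.
import os, time, requests

headers = {"Authorization": f"Bearer {os.environ['TOKEN'].strip()}", "Content-Type": "application/json"}
payload = {
    "name": "curl_dl_space",
    "storage": {"type": "bmcos_object_storage", "resource_crn": os.environ["COS_CRN"]},
    "compute": [{"name": os.environ["WML_INSTANCE_NAME"], "crn": os.environ["WML_INSTANCE_CRN"]}],
}
resp = requests.post(f"{os.environ['DATAPLATFORM_URL']}/v2/spaces", headers=headers, json=payload)
space_id = resp.json()["metadata"]["id"]

# Space creation is asynchronous: poll until the space reports an "active" state.
while True:
    space = requests.get(f"{os.environ['DATAPLATFORM_URL']}/v2/spaces/{space_id}", headers=headers).json()
    if space["entity"]["status"]["state"] == "active":
        break
    time.sleep(5)
print(space_id)
```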
<a id="experiment_definition"></a>
## 2. Experiment / optimizer configuration
<a id="training_connection"></a>
### Training data connection
Define the connection information to the COS bucket and the training data npz file. This example uses the MNIST dataset.
The dataset can be downloaded from [here](https://s3.amazonaws.com/img-datasets/mnist.npz). You can also download it to the local filesystem by running the cell below.
**Action**: Upload the training data to the COS bucket and enter the location information in the following cURL examples.
```
%%bash
wget -q https://s3.amazonaws.com/img-datasets/mnist.npz \
-O mnist.npz
```
<a id="cos_token"></a>
### Get COS token
Retrieve COS token for further authentication calls.
<a href="https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-curl#curl-token"
target="_blank" rel="noopener no referrer">Retrieve COS authentication token</a>
```
%%bash --out cos_token
curl -s -X "POST" "$AUTH_ENDPOINT" \
-H 'Accept: application/json' \
-H 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode "apikey=$COS_API_KEY" \
--data-urlencode "response_type=cloud_iam" \
--data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey" \
| cut -d '"' -f 4
%env COS_TOKEN=$cos_token
```
<a id="cos_upload"></a>
### Upload file to COS
Upload your local dataset into your COS bucket
<a href="https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-curl#curl-put-object"
target="_blank" rel="noopener no referrer">Upload file to COS</a>
```
%%bash
curl -sk -X PUT \
--header "Authorization: Bearer $COS_TOKEN" \
--header "Content-Type: application/octet-stream" \
--data-binary "@mnist.npz" \
"$COS_ENDPOINT/$COS_BUCKET/mnist.npz"
```
The response should be empty when the upload finishes successfully.
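To double-check the upload, a small sketch that issues a HEAD request for the object; a 200 status indicates the object exists (this assumes the COS endpoint accepts the same bearer-token authorization as the PUT above):
```
# Sketch: verify that mnist.npz is now present in the bucket.
import os, requests

url = f"{os.environ['COS_ENDPOINT']}/{os.environ['COS_BUCKET']}/mnist.npz"
resp = requests.head(url, headers={"Authorization": f"Bearer {os.environ['COS_TOKEN'].strip()}"})
print(resp.status_code, resp.headers.get("Content-Length"))  # expect 200 and the file size
```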
<a id="model_definition"></a>
## 3. Model definition
This section provides samples of how to store a model definition via cURL calls.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Model%20Definitions/model_definitions_create"
target="_blank" rel="noopener no referrer">Store a model definition for Deep Learning experiment</a>
```
%%bash --out model_definition_payload
MODEL_DEFINITION_PAYLOAD='{"name": "mlp-model-definition", "space_id": "'"$SPACE_ID"'", "description": "mlp-model-definition", "tags": ["DL", "MNIST"], "version": "2.0", "platform": {"name": "python", "versions": ["3.7"]}, "command": "python3 mnist_mlp.py --trainImagesFile ${DATA_DIR}/train-images-idx3-ubyte.gz --trainLabelsFile ${DATA_DIR}/train-labels-idx1-ubyte.gz --testImagesFile ${DATA_DIR}/t10k-images-idx3-ubyte.gz --testLabelsFile ${DATA_DIR}/t10k-labels-idx1-ubyte.gz --learningRate 0.001 --trainingIters 6000"}'
echo $MODEL_DEFINITION_PAYLOAD | python -m json.tool
%env MODEL_DEFINITION_PAYLOAD=$model_definition_payload
%%bash --out model_definition_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$MODEL_DEFINITION_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/model_definitions?version=2020-08-01"| grep '"id": ' | awk -F '"' '{ print $4 }'
%env MODEL_DEFINITION_ID=$model_definition_id
```
<a id="model_preparation"></a>
### Model preparation
Download the files with the Keras code. You can either download them via the link below or run the cell below the link.
<a href="https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/definitions/keras/mnist/MNIST.zip"
target="_blank" rel="noopener no referrer">Download MNIST.zip</a>
```
%%bash
wget https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/definitions/keras/mnist/MNIST.zip \
-O MNIST.zip
```
**Tip**: Convert the cell below to `code` and run it to see the model definition's code.
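Since that cell is not shown here, the archive contents can also be inspected directly; a minimal sketch (the script name comes from the model definition command above, and its exact path inside the archive is an assumption):
```
# Sketch: list the archive contents and print the start of the training script
# referenced in the model definition command (mnist_mlp.py).
import zipfile

with zipfile.ZipFile("MNIST.zip") as zf:
    print(zf.namelist())
    script = next(n for n in zf.namelist() if n.endswith("mnist_mlp.py"))
    print(zf.read(script).decode("utf-8")[:800])  # first 800 characters
```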
<a id="def_upload"></a>
### Upload model for the model definition
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Model%20Definitions/model_definitions_upload_model"
target="_blank" rel="noopener no referrer">Upload model for the model definition</a>
```
%%bash
curl -sk -X PUT \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data-binary "@MNIST.zip" \
"$WML_ENDPOINT_URL/ml/v4/model_definitions/$MODEL_DEFINITION_ID/model?version=2020-08-01&space_id=$SPACE_ID" \
| python -m json.tool
```
<a id="run"></a>
## 4. Experiment run
This section provides samples of how to trigger a Deep Learning experiment via cURL calls.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Trainings/trainings_create"
target="_blank" rel="noopener no referrer">Schedule a training job for Deep Learning experiment</a>
```
%%bash --out training_payload
TRAINING_PAYLOAD='{"training_data_references": [{"name": "training_input_data", "type": "s3", "connection": {"endpoint_url": "'"$COS_ENDPOINT"'", "access_key_id": "'"$COS_ACCESS_KEY_ID"'", "secret_access_key": "'"$COS_SECRET_ACCESS_KEY"'"}, "location": {"bucket": "'"$COS_BUCKET"'"}, "schema": {"id": "idmlp_schema", "fields": [{"name": "text", "type": "string"}]}}], "results_reference": {"name": "MNIST results", "connection": {"endpoint_url": "'"$COS_ENDPOINT"'", "access_key_id": "'"$COS_ACCESS_KEY_ID"'", "secret_access_key": "'"$COS_SECRET_ACCESS_KEY"'"}, "location": {"bucket": "'"$COS_BUCKET"'"}, "type": "s3"}, "tags": [{"value": "tags_mnist", "description": "dome MNIST"}], "name": "MNIST mlp", "description": "test training modeldef MNIST", "model_definition": {"id": "'"$MODEL_DEFINITION_ID"'", "command": "python3 mnist_mlp.py --trainImagesFile ${DATA_DIR}/train-images-idx3-ubyte.gz --trainLabelsFile ${DATA_DIR}/train-labels-idx1-ubyte.gz --testImagesFile ${DATA_DIR}/t10k-images-idx3-ubyte.gz --testLabelsFile ${DATA_DIR}/t10k-labels-idx1-ubyte.gz --learningRate 0.001 --trainingIters 6000", "hardware_spec": {"name": "K80", "nodes": 1}, "software_spec": {"name": "tensorflow_2.1-py3.7"}, "parameters": {"name": "MNIST mlp", "description": "Simple MNIST mlp model"}}, "space_id": "'"$SPACE_ID"'"}'
echo $TRAINING_PAYLOAD | python -m json.tool
%env TRAINING_PAYLOAD=$training_payload
%%bash --out training_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$TRAINING_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/trainings?version=2020-08-01" | awk -F'"id":' '{print $2}' | cut -c2-37
%env TRAINING_ID=$training_id
```
<a id="training_details"></a>
### Get training details
Training is an asynchronous endpoint. To monitor the training status and details,
you need to use a GET method and specify which training you want to monitor by its training ID.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Trainings/trainings_get"
target="_blank" rel="noopener no referrer">Get information about training job</a>
### Get training status
```
%%bash
STATUS=$(curl -sk -X GET\
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01")
STATUS=${STATUS#*state\":\"}
STATUS=${STATUS%%\"*}
echo $STATUS
```
Please make sure that training has completed before you go to the next sections.
Monitor the `state` of your training by re-running the cell above a few times.
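Alternatively, a small polling sketch using the same GET endpoint can wait until the run reaches a terminal state (the JSON field path is an assumption inferred from the `state` extraction in the cell above):
```
# Sketch: poll the training run until it reaches a terminal state.
import os, time, requests

url = f"{os.environ['WML_ENDPOINT_URL']}/ml/v4/trainings/{os.environ['TRAINING_ID']}"
params = {"space_id": os.environ["SPACE_ID"], "version": "2020-08-01"}
headers = {"Authorization": f"Bearer {os.environ['TOKEN'].strip()}"}

while True:
    state = requests.get(url, params=params, headers=headers).json()["entity"]["status"]["state"]
    print(state)
    if state in ("completed", "failed", "canceled"):
        break
    time.sleep(30)
```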
<a id="download_model"></a>
### Get selected model
Get a Keras saved model location in COS from the Deep Learning training job.
```
%%bash --out model_name
PATH=$(curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01")
PATH=${PATH#*logs\":\"}
MODEL_NAME=${PATH%%\"*}
echo $MODEL_NAME
%env MODEL_NAME=$model_name
```
<a id="runs"></a>
## 5. Historical runs
In this section you will see cURL examples showing how to get information about historical training runs.
The output should be similar to the output from training creation, but you should see more training entries.
Listing trainings:
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Trainings/trainings_list"
target="_blank" rel="noopener no referrer">Get list of historical training jobs information</a>
```
%%bash
HISTORICAL_TRAINING_LIMIT_TO_GET=2
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings?space_id=$SPACE_ID&version=2020-08-01&limit=$HISTORICAL_TRAINING_LIMIT_TO_GET" \
| python -m json.tool
```
<a id="training_cancel"></a>
### Cancel training run
**Tip:** If you want to cancel your training, please convert the cell below to `code`, specify the training ID, and run it.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Trainings/trainings_delete"
target="_blank" rel="noopener no referrer">Canceling training</a>
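Since that cell is not shown here, a hedged Python sketch of the cancel call against the same trainings endpoint (the expected status code is an assumption):
```
# Sketch: cancel a running training (DELETE without removing metadata).
import os, requests

resp = requests.delete(
    f"{os.environ['WML_ENDPOINT_URL']}/ml/v4/trainings/{os.environ['TRAINING_ID']}",
    params={"space_id": os.environ["SPACE_ID"], "version": "2020-08-01"},
    headers={"Authorization": f"Bearer {os.environ['TOKEN'].strip()}"},
)
print(resp.status_code)  # 204 is expected on success (assumption)
```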
---
<a id="deploy_and_score"></a>
## 6. Deploy and Score
In this section you will learn how to deploy and score a pipeline model as a web service using your WML instance.
Before creating a deployment, you need to store your model in the WML repository.
Please see the cURL call example below for how to do it. Remember that you need to
specify where your chosen model is stored in COS.
<a id="model_store"></a>
### Store Deep Learning model
Store information about your model in the WML repository.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Models/models_create"
target="_blank" rel="noopener no referrer">Model storing</a>
```
%%bash --out model_payload
MODEL_PAYLOAD='{"space_id": "'"$SPACE_ID"'", "name": "MNIST mlp", "description": "This is description", "type": "tensorflow_2.1", "software_spec": {"name": "default_py3.7"}, "content_location": { "type": "s3", "contents": [ { "content_format": "native", "file_name": "'"$MODEL_NAME.zip"'", "location": "'"$TRAINING_ID/assets/$TRAINING_ID/resources/wml_model/$MODEL_NAME.zip"'"}],"connection": {"endpoint_url": "'"$COS_ENDPOINT"'", "access_key_id": "'"$COS_ACCESS_KEY_ID"'", "secret_access_key": "'"$COS_SECRET_ACCESS_KEY"'"}, "location": {"bucket": "'"$COS_BUCKET"'"}}}'
echo $MODEL_PAYLOAD | python -m json.tool
%env MODEL_PAYLOAD=$model_payload
%%bash --out model_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$MODEL_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/models?version=2020-08-01" | grep '"id": ' | awk -F '"' '{ print $4 }' | sed -n 2p
%env MODEL_ID=$model_id
```
<a id="model_content_download"></a>
### Download model content
If you want to download your saved model, please make the following call.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Models/models_filtered_download"
target="_blank" rel="noopener no referrer">Download model content</a>
```
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--output "mnist_cnn.h5.tar.gz" \
"$WML_ENDPOINT_URL/ml/v4/models/$MODEL_ID/download?space_id=$SPACE_ID&version=2020-08-01"
!ls -l mnist_cnn.h5.tar.gz
```
<a id="deployment_creation"></a>
### Deployment creation
The creation of a Deep Learning online deployment is presented below.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Deployments/deployments_create"
target="_blank" rel="noopener no referrer">Create deployment</a>
```
%%bash --out deployment_payload
DEPLOYMENT_PAYLOAD='{"space_id": "'"$SPACE_ID"'","name": "MNIST deployment", "description": "This is description","online": {},"hardware_spec": {"name": "S"},"asset": {"id": "'"$MODEL_ID"'"}}'
echo $DEPLOYMENT_PAYLOAD | python -m json.tool
%env DEPLOYMENT_PAYLOAD=$deployment_payload
%%bash --out deployment_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$DEPLOYMENT_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/deployments?version=2020-08-01" | grep '"id": ' | awk -F '"' '{ print $4 }' | sed -n 3p
%env DEPLOYMENT_ID=$deployment_id
```
<a id="deployment_details"></a>
### Get deployment details
As the deployment API is asynchronous, please make sure your deployment is in the `ready` state before going on to the next steps.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Deployments/deployments_get"
target="_blank" rel="noopener no referrer">Get deployment details</a>
```
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/deployments/$DEPLOYMENT_ID?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
```
<a id="input_score"></a>
### Prepare scoring input data
**Hint:** You may need to install numpy using the following command: `!pip install numpy`
```
import numpy as np
mnist_dataset = np.load('mnist.npz')
test_mnist = mnist_dataset['x_test']
image_1 = (test_mnist[0].ravel() / 255).tolist()
image_2 = (test_mnist[1].ravel() / 255).tolist()
%matplotlib inline
import matplotlib.pyplot as plt
for i, image in enumerate([test_mnist[0], test_mnist[1]]):
plt.subplot(2, 2, i + 1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
```
<a id="webservice_score"></a>
### Scoring of a webservice
If you want to make a `score` call on your deployment, please follow the method below:
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Deployment%20Jobs/deployment_jobs_create"
target="_blank" rel="noopener no referrer">Create deployment job</a>
```
%%bash -s "$image_1" "$image_2"
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data '{"space_id": "$SPACE_ID","input_data": [{"values": ['"$1"', '"$2"']}]}' \
"$WML_ENDPOINT_URL/ml/v4/deployments/$DEPLOYMENT_ID/predictions?version=2020-08-01" \
| python -m json.tool
```
<a id="deployments_list"></a>
### Listing all deployments
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Deployments/deployments_list"
target="_blank" rel="noopener no referrer">List deployments details</a>
```
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/deployments?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
```
<a id="cleaning"></a>
## 7. Cleaning section
The section below is useful when you want to clean up all of your previous work within this notebook.
Just convert the cells below to `code` and run them.
<a id="training_delete"></a>
### Delete training run
**Tip:** You can completely delete a training run with its metadata.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Trainings/trainings_delete"
target="_blank" rel="noopener no referrer">Deleting training</a>
<a id="deployment_delete"></a>
### Deleting deployment
**Tip:** You can delete existing deployment by calling DELETE method.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Deployments/deployments_delete"
target="_blank" rel="noopener no referrer">Delete deployment</a>
<a id="model_delete"></a>
### Delete model from repository
**Tip:** If you want to completely remove your stored model and model metadata, just use a DELETE method.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Models/models_delete"
target="_blank" rel="noopener no referrer">Delete model from repository</a>
<a id="def_delete"></a>
### Delete model definition
**Tip:** If you want to completely remove your model definition, just use a DELETE method.
<a href="https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Model%20Definitions/model_definitions_delete"
target="_blank" rel="noopener no referrer">Delete model definition</a>
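Since those cells are not shown here, a combined Python sketch of the cleanup calls; the query parameters mirror the GET calls used earlier, and the `hard_delete` flag is an assumption to verify against the linked API:
```
# Sketch: clean up the resources created in this notebook via DELETE calls.
import os, requests

base = os.environ["WML_ENDPOINT_URL"]
headers = {"Authorization": f"Bearer {os.environ['TOKEN'].strip()}"}
params = {"space_id": os.environ["SPACE_ID"], "version": "2020-08-01"}

# training run (removing metadata via hard_delete is an assumption -- check the linked API)
requests.delete(f"{base}/ml/v4/trainings/{os.environ['TRAINING_ID']}", headers=headers, params={**params, "hard_delete": "true"})
# deployment
requests.delete(f"{base}/ml/v4/deployments/{os.environ['DEPLOYMENT_ID']}", headers=headers, params=params)
# model
requests.delete(f"{base}/ml/v4/models/{os.environ['MODEL_ID']}", headers=headers, params=params)
# model definition
requests.delete(f"{base}/ml/v4/model_definitions/{os.environ['MODEL_DEFINITION_ID']}", headers=headers, params=params)
```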
<a id="summary"></a>
## 8. Summary and next steps
You successfully completed this notebook!
You learned how to use `cURL` calls to store, deploy and score a Keras Deep Learning model in WML.
### Authors
**Jan Sołtysik**, Intern in Watson Machine Learning at IBM
Copyright © 2020 IBM. This notebook and its source code are released under the terms of the MIT License.
|
github_jupyter
|
%env API_KEY=...
%env WML_ENDPOINT_URL=...
%env WML_INSTANCE_CRN="fill out only if you want to create a new space"
%env WML_INSTANCE_NAME=...
%env COS_CRN="fill out only if you want to create a new space"
%env COS_ENDPOINT=...
%env COS_BUCKET=...
%env COS_ACCESS_KEY_ID=...
%env COS_SECRET_ACCESS_KEY=...
%env COS_API_KEY=...
%env SPACE_ID="fill out only if you have space already created"
%env DATAPLATFORM_URL=https://api.dataplatform.cloud.ibm.com
%env AUTH_ENDPOINT=https://iam.cloud.ibm.com/oidc/token
%%bash --out token
curl -sk -X POST \
--header "Content-Type: application/x-www-form-urlencoded" \
--header "Accept: application/json" \
--data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey" \
--data-urlencode "apikey=$API_KEY" \
"$AUTH_ENDPOINT" \
| cut -d '"' -f 4
%env TOKEN=$token
%%bash
wget -q https://s3.amazonaws.com/img-datasets/mnist.npz \
-O mnist.npz
%%bash --out cos_token
curl -s -X "POST" "$AUTH_ENDPOINT" \
-H 'Accept: application/json' \
-H 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode "apikey=$COS_API_KEY" \
--data-urlencode "response_type=cloud_iam" \
--data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey" \
| cut -d '"' -f 4
%env COS_TOKEN=$cos_token
%%bash
curl -sk -X PUT \
--header "Authorization: Bearer $COS_TOKEN" \
--header "Content-Type: application/octet-stream" \
--data-binary "@mnist.npz" \
"$COS_ENDPOINT/$COS_BUCKET/mnist.npz"
%%bash --out model_definition_payload
MODEL_DEFINITION_PAYLOAD='{"name": "mlp-model-definition", "space_id": "'"$SPACE_ID"'", "description": "mlp-model-definition", "tags": ["DL", "MNIST"], "version": "2.0", "platform": {"name": "python", "versions": ["3.7"]}, "command": "python3 mnist_mlp.py --trainImagesFile ${DATA_DIR}/train-images-idx3-ubyte.gz --trainLabelsFile ${DATA_DIR}/train-labels-idx1-ubyte.gz --testImagesFile ${DATA_DIR}/t10k-images-idx3-ubyte.gz --testLabelsFile ${DATA_DIR}/t10k-labels-idx1-ubyte.gz --learningRate 0.001 --trainingIters 6000"}'
echo $MODEL_DEFINITION_PAYLOAD | python -m json.tool
%env MODEL_DEFINITION_PAYLOAD=$model_definition_payload
%%bash --out model_definition_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$MODEL_DEFINITION_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/model_definitions?version=2020-08-01"| grep '"id": ' | awk -F '"' '{ print $4 }'
%env MODEL_DEFINITION_ID=$model_definition_id
%%bash
wget https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/definitions/keras/mnist/MNIST.zip \
-O MNIST.zip
%%bash
curl -sk -X PUT \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data-binary "@MNIST.zip" \
"$WML_ENDPOINT_URL/ml/v4/model_definitions/$MODEL_DEFINITION_ID/model?version=2020-08-01&space_id=$SPACE_ID" \
| python -m json.tool
%%bash --out training_payload
TRAINING_PAYLOAD='{"training_data_references": [{"name": "training_input_data", "type": "s3", "connection": {"endpoint_url": "'"$COS_ENDPOINT"'", "access_key_id": "'"$COS_ACCESS_KEY_ID"'", "secret_access_key": "'"$COS_SECRET_ACCESS_KEY"'"}, "location": {"bucket": "'"$COS_BUCKET"'"}, "schema": {"id": "idmlp_schema", "fields": [{"name": "text", "type": "string"}]}}], "results_reference": {"name": "MNIST results", "connection": {"endpoint_url": "'"$COS_ENDPOINT"'", "access_key_id": "'"$COS_ACCESS_KEY_ID"'", "secret_access_key": "'"$COS_SECRET_ACCESS_KEY"'"}, "location": {"bucket": "'"$COS_BUCKET"'"}, "type": "s3"}, "tags": [{"value": "tags_mnist", "description": "dome MNIST"}], "name": "MNIST mlp", "description": "test training modeldef MNIST", "model_definition": {"id": "'"$MODEL_DEFINITION_ID"'", "command": "python3 mnist_mlp.py --trainImagesFile ${DATA_DIR}/train-images-idx3-ubyte.gz --trainLabelsFile ${DATA_DIR}/train-labels-idx1-ubyte.gz --testImagesFile ${DATA_DIR}/t10k-images-idx3-ubyte.gz --testLabelsFile ${DATA_DIR}/t10k-labels-idx1-ubyte.gz --learningRate 0.001 --trainingIters 6000", "hardware_spec": {"name": "K80", "nodes": 1}, "software_spec": {"name": "tensorflow_2.1-py3.7"}, "parameters": {"name": "MNIST mlp", "description": "Simple MNIST mlp model"}}, "space_id": "'"$SPACE_ID"'"}'
echo $TRAINING_PAYLOAD | python -m json.tool
%env TRAINING_PAYLOAD=$training_payload
%%bash --out training_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$TRAINING_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/trainings?version=2020-08-01" | awk -F'"id":' '{print $2}' | cut -c2-37
%env TRAINING_ID=$training_id
%%bash
STATUS=$(curl -sk -X GET\
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01")
STATUS=${STATUS#*state\":\"}
STATUS=${STATUS%%\"*}
echo $STATUS
%%bash --out model_name
PATH=$(curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01")
PATH=${PATH#*logs\":\"}
MODEL_NAME=${PATH%%\"*}
echo $MODEL_NAME
%env MODEL_NAME=$model_name
%%bash
HISTORICAL_TRAINING_LIMIT_TO_GET=2
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
"$WML_ENDPOINT_URL/ml/v4/trainings?space_id=$SPACE_ID&version=2020-08-01&limit=$HISTORICAL_TRAINING_LIMIT_TO_GET" \
| python -m json.tool
%%bash --out model_payload
MODEL_PAYLOAD='{"space_id": "'"$SPACE_ID"'", "name": "MNIST mlp", "description": "This is description", "type": "tensorflow_2.1", "software_spec": {"name": "default_py3.7"}, "content_location": { "type": "s3", "contents": [ { "content_format": "native", "file_name": "'"$MODEL_NAME.zip"'", "location": "'"$TRAINING_ID/assets/$TRAINING_ID/resources/wml_model/$MODEL_NAME.zip"'"}],"connection": {"endpoint_url": "'"$COS_ENDPOINT"'", "access_key_id": "'"$COS_ACCESS_KEY_ID"'", "secret_access_key": "'"$COS_SECRET_ACCESS_KEY"'"}, "location": {"bucket": "'"$COS_BUCKET"'"}}}'
echo $MODEL_PAYLOAD | python -m json.tool
%env MODEL_PAYLOAD=$model_payload
%%bash --out model_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$MODEL_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/models?version=2020-08-01" | grep '"id": ' | awk -F '"' '{ print $4 }' | sed -n 2p
%env MODEL_ID=$model_id
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--output "mnist_cnn.h5.tar.gz" \
"$WML_ENDPOINT_URL/ml/v4/models/$MODEL_ID/download?space_id=$SPACE_ID&version=2020-08-01"
!ls -l mnist_cnn.h5.tar.gz
%%bash --out deployment_payload
DEPLOYMENT_PAYLOAD='{"space_id": "'"$SPACE_ID"'","name": "MNIST deployment", "description": "This is description","online": {},"hardware_spec": {"name": "S"},"asset": {"id": "'"$MODEL_ID"'"}}'
echo $DEPLOYMENT_PAYLOAD | python -m json.tool
%env DEPLOYMENT_PAYLOAD=$deployment_payload
%%bash --out deployment_id
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data "$DEPLOYMENT_PAYLOAD" \
"$WML_ENDPOINT_URL/ml/v4/deployments?version=2020-08-01" | grep '"id": ' | awk -F '"' '{ print $4 }' | sed -n 3p
%env DEPLOYMENT_ID=$deployment_id
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/deployments/$DEPLOYMENT_ID?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
import numpy as np
mnist_dataset = np.load('mnist.npz')
test_mnist = mnist_dataset['x_test']
image_1 = (test_mnist[0].ravel() / 255).tolist()
image_2 = (test_mnist[1].ravel() / 255).tolist()
%matplotlib inline
import matplotlib.pyplot as plt
for i, image in enumerate([test_mnist[0], test_mnist[1]]):
plt.subplot(2, 2, i + 1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
%%bash -s "$image_1" "$image_2"
curl -sk -X POST \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--data '{"space_id": "$SPACE_ID","input_data": [{"values": ['"$1"', '"$2"']}]}' \
"$WML_ENDPOINT_URL/ml/v4/deployments/$DEPLOYMENT_ID/predictions?version=2020-08-01" \
| python -m json.tool
%%bash
curl -sk -X GET \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/json" \
"$WML_ENDPOINT_URL/ml/v4/deployments?space_id=$SPACE_ID&version=2020-08-01" \
| python -m json.tool
| 0.293202 | 0.934574 |
# Stellargraph example: Simplified Graph Convolutions (SGC) on the CORA citation dataset
This notebook demonstrates the use of `StellarGraph`'s GCN class [[2]](#refs) for training the simplified graph convolution (SGC) model introduced in [[1]](#refs).
We show how to use `StellarGraph` to perform node attribute inference on the Cora citation network using SGC, by creating a single-layer GCN model with softmax activation.
SGC simplifies GCN in the following ways:
- It removes the non-linearities in the graph convolutional layers.
- It smooths the node input features using powers of the normalized adjacency matrix with self loops (see [[2]](#refs)).
- It uses a single softmax layer such that GCN is simplified to logistic regression on smoothed node features.
For a graph with $N$ nodes, $F$-dimensional node features, and $C$ classes, SGC reduces GCN to the following logistic regression classifier on smoothed features,
$\hat{\boldsymbol{Y}}_{SGC} = \mathtt{softmax}(\boldsymbol{S}^K \boldsymbol{X}\; \boldsymbol{\Theta})$
where $\hat{\boldsymbol{Y}}_{SGC} \in \mathbb{R}^{N\times C}$ are the class predictions; $\boldsymbol{S}^K \in \mathbb{R}^{N\times N}$ is the normalised graph adjacency matrix with self loops raised to the K-th power; $\boldsymbol{X}\in \mathbb{R}^{N\times F}$ are the node input features; and $\boldsymbol{\Theta} \in \mathbb{R}^{F\times C}$ are the classifier's parameters to be learned.
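To make the smoothing term concrete, here is a small numpy sketch of computing $\boldsymbol{S}^K \boldsymbol{X}$ on a toy graph (the 4-node adjacency matrix and random features are purely illustrative):
```
# Toy illustration of SGC feature smoothing: S^K X, with S the symmetrically
# normalised adjacency matrix with self loops (the 4-node graph is illustrative only).
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.rand(4, 3)          # 4 nodes, 3-dimensional features
K = 2                             # smoothing power

A_tilde = A + np.eye(4)           # add self loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
S = D_inv_sqrt @ A_tilde @ D_inv_sqrt
X_smoothed = np.linalg.matrix_power(S, K) @ X   # conceptually what FullBatchNodeGenerator(method="sgc", k=2) applies

print(X_smoothed.shape)           # (4, 3): same shape, features mixed over 2-hop neighbourhoods
```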
<a name="refs"></a>
**References**
[1] Simplifying Graph Convolutional Networks. F. Wu, T. Zhang, A. H. de Souza Jr., C. Fifty, T. Yu, and K. Q. Weinberger, arXiv: 1902.07153. [link](https://arxiv.org/abs/1902.07153)
[2] Semi-Supervised Classification with Graph Convolutional Networks. T. N. Kipf and M. Welling, ICLR 2016. [link](https://arxiv.org/abs/1609.02907)
```
import networkx as nx
import pandas as pd
import os
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import pandas as pd
import numpy as np
import stellargraph as sg
from stellargraph.mapper import FullBatchNodeGenerator
from stellargraph.layer import GCN
from keras import layers, optimizers, losses, metrics, Model, regularizers
from sklearn import preprocessing, feature_extraction, model_selection
```
### Loading the CORA network
**Downloading the CORA dataset:**
The dataset used in this demo can be downloaded from [here](https://linqs-data.soe.ucsc.edu/public/lbc/cora.tgz).
The following is the description of the dataset:
> The Cora dataset consists of 2708 scientific publications classified into one of seven classes.
> The citation network consists of 5429 links. Each publication in the dataset is described by a
> 0/1-valued word vector indicating the absence/presence of the corresponding word from the dictionary.
> The dictionary consists of 1433 unique words. The README file in the dataset provides more details.
Download and unzip the cora.tgz file to a location on your computer and set the `data_dir` variable to
point to the location of the dataset (the directory containing "cora.cites" and "cora.content").
```
data_dir = os.path.expanduser("~/data/cora")
```
Load the graph from edgelist (in the order `cited-paper` <- `citing-paper`)
```
edgelist = pd.read_csv(os.path.join(data_dir, "cora.cites"), sep='\t', header=None, names=["target", "source"])
edgelist["label"] = "cites"
Gnx = nx.from_pandas_edgelist(edgelist, edge_attr="label")
nx.set_node_attributes(Gnx, "paper", "label")
```
Load the features and subject for the nodes
```
feature_names = ["w_{}".format(ii) for ii in range(1433)]
column_names = feature_names + ["subject"]
node_data = pd.read_csv(os.path.join(data_dir, "cora.content"), sep='\t', header=None, names=column_names)
```
We aim to train a graph-ML model that will predict the "subject" attribute on the nodes. These subjects are one of 7 categories:
```
set(node_data["subject"])
```
### Splitting the data
For machine learning we want to take a subset of the nodes for training, and use the rest for validation and testing. We'll use scikit-learn again to do this.
Here we're taking 140 node labels for training, 500 for validation, and the rest for testing.
```
train_data, test_data = model_selection.train_test_split(node_data, train_size=140, test_size=None, stratify=node_data['subject'])
val_data, test_data = model_selection.train_test_split(test_data, train_size=500, test_size=None, stratify=test_data['subject'])
```
Note that using stratified sampling gives the following counts:
```
from collections import Counter
Counter(train_data['subject'])
```
The training set has class imbalance that might need to be compensated, e.g., via using a weighted cross-entropy loss in model training, with class weights inversely proportional to class support. However, we will ignore the class imbalance in this example, for simplicity.
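If one did want to compensate for the imbalance, a minimal sketch of computing class weights inversely proportional to class support is shown below; how these weights are wired into a weighted loss is not shown here:
```
# Sketch: class weights inversely proportional to class support, as mentioned above.
from sklearn.utils.class_weight import compute_class_weight
import numpy as np

classes = np.unique(train_data["subject"])
weights = compute_class_weight("balanced", classes=classes, y=train_data["subject"])
class_weights = dict(zip(classes, weights))
print(class_weights)
```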
### Converting to numeric arrays
For our categorical target, we will use one-hot vectors that will be fed into a softmax Keras layer during training. To do this conversion we use scikit-learn's `DictVectorizer`:
```
target_encoding = feature_extraction.DictVectorizer(sparse=False)
train_targets = target_encoding.fit_transform(train_data[["subject"]].to_dict('records'))
val_targets = target_encoding.transform(val_data[["subject"]].to_dict('records'))
test_targets = target_encoding.transform(test_data[["subject"]].to_dict('records'))
```
We now do the same for the node attributes we want to use to predict the subject. These are the feature vectors that the Keras model will use as input. The CORA dataset contains attributes 'w_x' that correspond to words found in that publication. If a word is present in a publication, the relevant attribute is set to one; otherwise it is zero.
```
node_features = node_data[feature_names]
```
## Create the StellarGraph object
We have the graph in networkx format and the node features and targets in a Pandas Dataframe. We are going to use these to create a StellarGraph object that is suitable for machine learning on graphs.
```
G = sg.StellarGraph(Gnx, node_features=node_features)
print(G.info())
```
## Prepare node generator
To feed data from the graph to the Keras model we need a generator. Since SGC is a full-batch model, we use the `FullBatchNodeGenerator` class to feed node features and graph adjacency matrix to the model.
For SGC, we need to tell the generator to smooth the node features by some power of the normalised adjacency matrix with self loops before multiplying by the model parameters.
We achieve this by specifying `model='sgc'` and `k=2`, in this example, to use the SGC method and take the square of the adjacency matrix. For the setting `k=2` we are considering a 2-hop neighbourhood that is equivalent to a 2-layer GCN. We can set `k` larger to consider larger node neighbourhoods but this carries an associated computational penalty.
```
generator = FullBatchNodeGenerator(G, method="sgc", k=2)
```
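For intuition, the pre-processing the generator performs can be sketched directly with `scipy.sparse`. This is an illustration only: it assumes every paper in `node_data` appears in the citation graph (true for Cora) and that the library's internal normalisation matches this standard form.
```
import scipy.sparse as sps

# normalised adjacency with self-loops: S = D_tilde^{-1/2} (A + I) D_tilde^{-1/2}
A = nx.to_scipy_sparse_matrix(Gnx, nodelist=list(node_data.index), format="csr")
A_tilde = A + sps.identity(A.shape[0], format="csr")
deg = np.asarray(A_tilde.sum(axis=1)).flatten()
D_inv_sqrt = sps.diags(1.0 / np.sqrt(deg))
S = D_inv_sqrt @ A_tilde @ D_inv_sqrt

# smooth the features by the k-th power of S (here k=2, matching the generator)
X_smoothed = node_features.values.astype(float)
for _ in range(2):
    X_smoothed = S @ X_smoothed
X_smoothed.shape
```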
For training we map only the training nodes returned from our splitter and the target values.
```
train_gen = generator.flow(train_data.index, train_targets)
```
## Creating the SGC model in Keras
Now we can specify our machine learning model; we need a few more parameters for this:
* the `layer_sizes` is a list of hidden feature sizes of each layer in the model. For SGC, we use a single hidden layer with output dimensionality equal to the number of classes.
* `activations` is the activation function for the output layer. For SGC the output layer is the classification layer and for multi-class classification it should be a `softmax` activation.
* Arguments such as `bias` and `dropout` are internal parameters of the model, execute `?GCN` for details.
**Note:** The SGC model is a single-layer GCN model with `softmax` activation and the full-batch generator we created above, which smooths the node features based on the graph structure. So, our SGC model is declared as a `stellargraph.layer.GCN` model.
```
sgc = GCN(
layer_sizes=[train_targets.shape[1]],
generator=generator,
bias=True,
dropout=0.5,
activations=["softmax"],
kernel_regularizer=regularizers.l2(5e-4),
)
# Expose the input and output tensors of the SGC model for node prediction,
# via GCN.node_model() method:
x_inp, predictions = sgc.node_model()
```
### Training the model
Now let's create the actual Keras model, with input tensors `x_inp` and the output tensor `predictions` from the final dense layer:
```
model = Model(inputs=x_inp, outputs=predictions)
model.compile(
optimizer=optimizers.Adam(lr=0.2),
loss=losses.categorical_crossentropy,
metrics=["acc"],
)
```
Train the model, keeping track of its loss and accuracy on the training set, and its generalisation performance on the validation set (we need to create another generator over the validation data for this)
```
val_gen = generator.flow(val_data.index, val_targets)
```
Create callbacks for early stopping (if validation accuracy stops improving) and best model checkpoint saving:
```
from keras.callbacks import EarlyStopping, ModelCheckpoint
if not os.path.isdir("logs"):
os.makedirs("logs")
es_callback = EarlyStopping(monitor="val_acc", patience=50) # patience is the number of epochs to wait before early stopping in case of no further improvement
mc_callback = ModelCheckpoint(
"logs/best_model.h5",
monitor="val_acc",
save_best_only=True,
save_weights_only=True,
)
```
Train the model
```
history = model.fit_generator(
train_gen,
epochs=50,
validation_data=val_gen,
verbose=0,
shuffle=False, # this should be False, since shuffling data means shuffling the whole graph
callbacks=[es_callback, mc_callback],
)
```
Plot the training history:
```
import matplotlib.pyplot as plt
%matplotlib inline
def remove_prefix(text, prefix):
return text[text.startswith(prefix) and len(prefix):]
def plot_history(history):
metrics = sorted(set([remove_prefix(m, "val_") for m in list(history.history.keys())]))
for m in metrics:
# summarize history for metric m
plt.plot(history.history[m])
plt.plot(history.history['val_' + m])
plt.title(m)
plt.ylabel(m)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='best')
plt.show()
plot_history(history)
```
Reload the saved weights of the best model found during the training (according to validation accuracy)
```
model.load_weights("logs/best_model.h5")
```
Evaluate the best model on the test set
```
test_gen = generator.flow(test_data.index, test_targets)
test_metrics = model.evaluate_generator(test_gen)
print("\nTest Set Metrics:")
for name, val in zip(model.metrics_names, test_metrics):
print("\t{}: {:0.4f}".format(name, val))
```
### Making predictions with the model
Now let's get the predictions for all nodes:
Note that the `predict` or `predict_generator` function operates differently from the `GraphSAGE` or `HinSAGE` models: even if you give it a set of nodes, it will still return predictions for **all** nodes in the graph, in a fixed order defined by the order of nodes in `X` and `A` (which is defined by the order of `G.nodes()`).
```
all_nodes = node_data.index
all_gen = generator.flow(all_nodes)
all_predictions = model.predict_generator(all_gen)
```
Note that for full-batch methods the batch size is 1 and the predictions have shape $(1, N_{nodes}, N_{classes})$, so we remove the batch dimension to obtain predictions of shape $(N_{nodes}, N_{classes})$.
```
all_predictions = all_predictions.squeeze()
```
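Because the rows of `all_predictions` follow the order of `G.nodes()` (as noted above), picking out the predictions for a particular subset such as the test nodes requires an explicit positional lookup. A small sketch, relying only on that ordering assumption:
```
# map each node id to its row in the prediction array (order given by G.nodes())
position = {node: i for i, node in enumerate(G.nodes())}
test_positions = [position[n] for n in test_data.index]
test_node_predictions = all_predictions[test_positions]
test_node_predictions.shape
```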
These predictions will be the output of the softmax layer, so to get the final categories we'll use the `inverse_transform` method of our target encoding to turn these values back into the original categories:
```
node_predictions = target_encoding.inverse_transform(all_predictions)
results = pd.DataFrame(node_predictions, index=all_nodes).idxmax(axis=1)
```
Let's have a look at a few:
```
df = pd.DataFrame({"Predicted": results, "True": node_data['subject']})
df.head(20)
```
## Node representations
Evaluate node representations as activations of the output layer, project them to 2D using either a TSNE or PCA transform, and visualise them, coloring nodes by their true subject label. We expect to see nice clusters of papers in the node representation space, with papers of the same subject belonging to the same cluster.
```
X = all_predictions
y = np.argmax(target_encoding.transform(node_data[["subject"]].to_dict('records')), axis=1)
if X.shape[1] > 2:
transform = TSNE #PCA
trans = transform(n_components=2)
emb_transformed = pd.DataFrame(trans.fit_transform(X), index=list(G.nodes()))
emb_transformed['label'] = y
else:
emb_transformed = pd.DataFrame(X, index=list(G.nodes()))
emb_transformed = emb_transformed.rename(columns = {'0':0, '1':1})
emb_transformed['label'] = y
alpha = 0.7
fig, ax = plt.subplots(figsize=(7,7))
ax.scatter(emb_transformed[0], emb_transformed[1], c=emb_transformed['label'].astype("category"),
cmap="jet", alpha=alpha)
ax.set(aspect="equal", xlabel="$X_1$", ylabel="$X_2$")
plt.title('{} visualization of SGC activations for cora dataset'.format(transform.__name__))
plt.show()
```
---
## Dependencies
```
import json, warnings, shutil
from tweet_utility_scripts import *
from tweet_utility_preprocess_roberta_scripts_aux import *
from transformers import TFRobertaModel, RobertaConfig
from tokenizers import ByteLevelBPETokenizer
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers, metrics, losses, layers
from scripts_step_lr_schedulers import *
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
```
# Load data
```
# Unzip files
!tar -xf /kaggle/input/tweet-dataset-5fold-roberta-64/fold_1.tar.gz
!tar -xf /kaggle/input/tweet-dataset-5fold-roberta-64/fold_2.tar.gz
!tar -xf /kaggle/input/tweet-dataset-5fold-roberta-64/fold_3.tar.gz
!tar -xf /kaggle/input/tweet-dataset-5fold-roberta-64/fold_4.tar.gz
!tar -xf /kaggle/input/tweet-dataset-5fold-roberta-64/fold_5.tar.gz
database_base_path = '/kaggle/input/tweet-dataset-5fold-roberta-64/'
k_fold = pd.read_csv(database_base_path + '5-fold.csv')
print(f'Training set samples: {len(k_fold)}')
display(k_fold.head())
```
# Model parameters
```
vocab_path = database_base_path + 'vocab.json'
merges_path = database_base_path + 'merges.txt'
base_path = '/kaggle/input/qa-transformers/roberta/'
config = {
'MAX_LEN': 64,
'BATCH_SIZE': 32,
'EPOCHS': 3,
'LEARNING_RATE': 3e-5,
'ES_PATIENCE': 3,
'N_FOLDS': 5,
'question_size': 4,
'base_model_path': base_path + 'roberta-base-tf_model.h5',
'config_path': base_path + 'roberta-base-config.json'
}
with open('config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
```
# Tokenizer
```
tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path,
lowercase=True, add_prefix_space=True)
tokenizer.save('./')
```
## Learning rate schedule
```
# lr_start = 1e-4
# lr_min = 1e-6
# lr_max = config['LEARNING_RATE']
# warmup_steps=0
# hold_max_steps=0
# num_cycles=0.5
# train_size = len(k_fold[k_fold['fold_1'] == 'train'])
# step_size = train_size // config['BATCH_SIZE']
# total_steps = config['EPOCHS'] * step_size
# rng = [i for i in range(0, total_steps, config['BATCH_SIZE'])]
# y = [cosine_schedule_with_warmup(tf.cast(x, tf.float32), total_steps=total_steps, warmup_steps=warmup_steps,
# hold_max_steps=hold_max_steps, lr_start=lr_start, lr_max=lr_max,
# lr_min=lr_min, num_cycles=num_cycles) for x in rng]
# sns.set(style='whitegrid')
# fig, ax = plt.subplots(figsize=(20, 6))
# plt.plot(rng, y)
# print('Learning rate schedule: {:.3g} to {:.3g} to {:.3g}'.format(y[0], max(y), y[-1]))
```
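The commented-out cell above plots `cosine_schedule_with_warmup` from the project's own `scripts_step_lr_schedulers` module. As a rough generic sketch of what such a schedule computes (an assumption, not the project's actual implementation, and ignoring `hold_max_steps`):
```
import math

def cosine_warmup_lr(step, total_steps, warmup_steps, lr_start, lr_max, lr_min, num_cycles=0.5):
    # linear warmup from lr_start to lr_max, then cosine decay from lr_max to lr_min
    if warmup_steps > 0 and step < warmup_steps:
        return lr_start + (lr_max - lr_start) * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(2.0 * math.pi * num_cycles * progress))
```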
# Model
```
module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name='base_model')
last_hidden_state, _ = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
x = layers.Dropout(.1)(last_hidden_state)
x_start = layers.Dropout(0.1)(x)
x_start = layers.Conv1D(768, 2, padding='same')(x_start)
x_start = layers.LeakyReLU()(x_start)
x_start = layers.Conv1D(64, 2, padding='same')(x_start)
x_start = layers.Dense(1)(x_start)
x_start = layers.Flatten()(x_start)
y_start = layers.Activation('softmax', name='y_start')(x_start)
x_end = layers.Dropout(0.1)(x)
x_end = layers.Conv1D(768, 2, padding='same')(x_end)
x_end = layers.LeakyReLU()(x_end)
x_end = layers.Conv1D(64, 2, padding='same')(x_end)
x_end = layers.Dense(1)(x_end)
x_end = layers.Flatten()(x_end)
y_end = layers.Activation('softmax', name='y_end')(x_end)
model = Model(inputs=[input_ids, attention_mask], outputs=[y_start, y_end])
return model
```
# Train
```
AUTO = tf.data.experimental.AUTOTUNE
strategy = tf.distribute.get_strategy()
k_fold_best = k_fold.copy()
history_list = []
for n_fold in range(config['N_FOLDS']):
n_fold +=1
print('\nFOLD: %d' % (n_fold))
# Load data
base_data_path = 'fold_%d/' % (n_fold)
x_train = np.load(base_data_path + 'x_train.npy')
y_train = np.load(base_data_path + 'y_train.npy')
x_valid = np.load(base_data_path + 'x_valid.npy')
y_valid = np.load(base_data_path + 'y_valid.npy')
step_size = x_train.shape[1] // config['BATCH_SIZE']
valid_step_size = x_valid.shape[1] // config['BATCH_SIZE']
# Train model
model_path = 'model_fold_%d.h5' % (n_fold)
model = model_fn(config['MAX_LEN'])
optimizer = optimizers.Adam(learning_rate=config['LEARNING_RATE'])
model.compile(optimizer, loss={'y_start': losses.CategoricalCrossentropy(label_smoothing=0.1),
'y_end': losses.CategoricalCrossentropy(label_smoothing=0.1)})
es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'],
restore_best_weights=False, verbose=1)
checkpoint = ModelCheckpoint(model_path, monitor='val_loss', mode='min',
save_best_only=True, save_weights_only=True)
history = model.fit(get_training_dataset(x_train, y_train, config['BATCH_SIZE'], AUTO, seed=SEED),
validation_data=(get_validation_dataset(x_valid, y_valid, config['BATCH_SIZE'], AUTO, repeated=False, seed=SEED)),
epochs=config['EPOCHS'],
steps_per_epoch=step_size,
callbacks=[checkpoint, es],
verbose=2).history
history_list.append(history)
model.save_weights('last_' + model_path)
# Make predictions (last model)
predict_eval_df(k_fold, model, x_train, x_valid, get_test_dataset, decode, n_fold, tokenizer, config, config['question_size'])
# Make predictions (best model)
model.load_weights(model_path)
predict_eval_df(k_fold_best, model, x_train, x_valid, get_test_dataset, decode, n_fold, tokenizer, config, config['question_size'])
### Delete data dir
shutil.rmtree(base_data_path)
```
# Model loss graph
```
for n_fold in range(config['N_FOLDS']):
print('Fold: %d' % (n_fold+1))
plot_metrics(history_list[n_fold])
```
# Model evaluation (best model)
```
display(evaluate_model_kfold(k_fold_best, config['N_FOLDS']).style.applymap(color_map))
```
# Model evaluation (last model)
```
display(evaluate_model_kfold(k_fold, config['N_FOLDS']).style.applymap(color_map))
```
# Visualize predictions
```
display(k_fold[[c for c in k_fold.columns if not (c.startswith('textID') or
c.startswith('text_len') or
c.startswith('selected_text_len') or
c.startswith('text_wordCnt') or
c.startswith('selected_text_wordCnt') or
c.startswith('fold_') or
c.startswith('start_fold_') or
c.startswith('end_fold_'))]].head(15))
```
---
# Watershed Distance Transform for 3D Data
---
Implementation of papers:
[Deep Watershed Transform for Instance Segmentation](http://openaccess.thecvf.com/content_cvpr_2017/papers/Bai_Deep_Watershed_Transform_CVPR_2017_paper.pdf)
[Learn to segment single cells with deep distance estimator and deep cell detector](https://arxiv.org/abs/1803.10829)
```
import os
import errno
import numpy as np
import deepcell
```
### Load the Training Data
```
# Download the data (saves to ~/.keras/datasets)
filename = 'mousebrain.npz'
(X_train, y_train), (X_test, y_test) = deepcell.datasets.mousebrain.load_data(filename)
print('X.shape: {}\ny.shape: {}'.format(X_train.shape, y_train.shape))
```
### Set up filepath constants
```
# the path to the data file is currently required for `train_model_()` functions
# change DATA_DIR if you are not using `deepcell.datasets`
DATA_DIR = os.path.expanduser(os.path.join('~', '.keras', 'datasets'))
# DATA_FILE should be a npz file, preferably from `make_training_data`
DATA_FILE = os.path.join(DATA_DIR, filename)
# confirm the data file is available
assert os.path.isfile(DATA_FILE)
# Set up other required filepaths
# If the data file is in a subdirectory, mirror it in MODEL_DIR and LOG_DIR
PREFIX = os.path.relpath(os.path.dirname(DATA_FILE), DATA_DIR)
ROOT_DIR = '/data' # TODO: Change this! Usually a mounted volume
MODEL_DIR = os.path.abspath(os.path.join(ROOT_DIR, 'models', PREFIX))
LOG_DIR = os.path.abspath(os.path.join(ROOT_DIR, 'logs', PREFIX))
# create directories if they do not exist
for d in (MODEL_DIR, LOG_DIR):
try:
os.makedirs(d)
except OSError as exc: # Guard against race condition
if exc.errno != errno.EEXIST:
raise
```
### Set up training parameters
```
from tensorflow.keras.optimizers import SGD
from deepcell.utils.train_utils import rate_scheduler
fgbg_model_name = 'sample_fgbg_3d_model'
sample_model_name = 'sample_watershed_3d_model'
n_epoch = 1 # Number of training epochs
test_size = .10 # % of data saved as test
norm_method = 'std' # data normalization
receptive_field = 61 # should be adjusted for the scale of the data
optimizer = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
lr_sched = rate_scheduler(lr=0.01, decay=0.99)
# Transformation settings
transform = 'watershed'
distance_bins = 4 # number of distance classes
erosion_width = 0 # erode edges
# 3D Settings
frames_per_batch = 3
norm_method = 'whole_image' # data normalization - `whole_image` for 3d conv
# Sample mode settings
batch_size = 64 # number of images per batch (should be 2 ^ n)
win = (receptive_field - 1) // 2 # sample window size
win_z = (frames_per_batch - 1) // 2 # z window size
balance_classes = True # sample each class equally
max_class_samples = 1e7 # max number of samples per class.
```
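The `transform='watershed'` setting turns each ground-truth label mask into classes that bin every pixel's interior distance to its cell edge into `distance_bins` levels. A rough sketch of that idea, using `scipy` rather than DeepCell's exact implementation and ignoring `erosion_width`:
```
from scipy.ndimage import distance_transform_edt

def watershed_distance_classes(label_mask, bins=4):
    # per-cell Euclidean distance to the cell edge
    distance = np.zeros(label_mask.shape, dtype=float)
    for cell_id in np.unique(label_mask):
        if cell_id == 0:  # skip background
            continue
        cell = label_mask == cell_id
        distance[cell] = distance_transform_edt(cell)[cell]
    # normalise and bin into integer classes 0 .. bins-1
    if distance.max() > 0:
        distance = distance / distance.max()
    return np.digitize(distance, np.linspace(0, 1, bins + 1)[1:-1])
```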
### First, create a foreground/background separation model
#### Instantiate the fgbg model
```
from deepcell import model_zoo
fgbg_model = model_zoo.bn_feature_net_3D(
receptive_field=receptive_field,
n_features=2,
norm_method=norm_method,
n_frames=frames_per_batch,
n_channels=X_train.shape[-1])
```
#### Train the fgbg model
```
from deepcell.training import train_model_sample
fgbg_model = train_model_sample(
model=fgbg_model,
dataset=DATA_FILE, # full path to npz file
model_name=fgbg_model_name,
window_size=(win, win, win_z),
optimizer=optimizer,
batch_size=batch_size,
balance_classes=balance_classes,
max_class_samples=max_class_samples,
transform='fgbg',
n_epoch=n_epoch,
model_dir=MODEL_DIR,
lr_sched=lr_sched,
rotation_range=180,
flip=True,
shear=False,
zoom_range=(0.8, 1.2))
```
### Next, create a model for the watershed energy transform
#### Instantiate the deepcell transform model
```
from deepcell import model_zoo
watershed_model = model_zoo.bn_feature_net_3D(
receptive_field=receptive_field,
n_features=distance_bins,
norm_method=norm_method,
n_frames=frames_per_batch,
n_channels=X_train.shape[-1])
```
#### Train the watershed transform model
```
from deepcell.training import train_model_sample
watershed_model = train_model_sample(
model=watershed_model,
dataset=DATA_FILE, # full path to npz file
model_name=sample_model_name,
window_size=(win, win, win_z),
transform='watershed',
distance_bins=distance_bins,
erosion_width=erosion_width,
optimizer=optimizer,
batch_size=batch_size,
balance_classes=balance_classes,
max_class_samples=max_class_samples,
n_epoch=n_epoch,
model_dir=MODEL_DIR,
expt='sample_watershed',
lr_sched=lr_sched,
rotation_range=180,
flip=True,
shear=False,
zoom_range=(0.8, 1.2))
```
### Run the model
The model was trained on small samples of data of shape `(receptive_field, receptive_field)`.
In order to process full-sized images, the trained weights will be saved and loaded into a new model with `dilated=True` and the proper `input_shape`.
#### Save weights of trained models
```
fgbg_weights_file = os.path.join(MODEL_DIR, '{}.h5'.format(fgbg_model_name))
fgbg_model.save_weights(fgbg_weights_file)
watershed_weights_file = os.path.join(MODEL_DIR, '{}.h5'.format(sample_model_name))
watershed_model.save_weights(watershed_weights_file)
```
#### Initialize dilated models and load the weights
```
from deepcell import model_zoo
run_fgbg_model = model_zoo.bn_feature_net_3D(
receptive_field=receptive_field,
dilated=True,
n_features=2,
n_frames=frames_per_batch,
input_shape=tuple(X_test.shape[1:]))
run_fgbg_model.load_weights(fgbg_weights_file)
run_watershed_model = model_zoo.bn_feature_net_3D(
receptive_field=receptive_field,
dilated=True,
n_features=distance_bins,
n_frames=frames_per_batch,
input_shape=tuple(X_test.shape[1:]))
run_watershed_model.load_weights(watershed_weights_file)
```
#### Make predictions on test data
```
test_images = run_watershed_model.predict(X_test[:4])
test_images_fgbg = run_fgbg_model.predict(X_test[:4])
print('watershed transform shape:', test_images.shape)
print('segmentation mask shape:', test_images_fgbg.shape)
```
#### Watershed post-processing
```
argmax_images = []
for i in range(test_images.shape[0]):
max_image = np.argmax(test_images[i], axis=-1)
argmax_images.append(max_image)
argmax_images = np.array(argmax_images)
argmax_images = np.expand_dims(argmax_images, axis=-1)
print('watershed argmax shape:', argmax_images.shape)
# threshold the foreground/background
# and remove background from the watershed transform
threshold = 0.8
fg_thresh = test_images_fgbg[..., 1] > threshold
fg_thresh = np.expand_dims(fg_thresh, axis=-1)
argmax_images_post_fgbg = argmax_images * fg_thresh
# Apply watershed method with the distance transform as seed
from skimage.measure import label
from skimage.morphology import watershed
from skimage.feature import peak_local_max
watershed_images = []
for i in range(argmax_images_post_fgbg.shape[0]):
image = fg_thresh[i, ..., 0]
distance = argmax_images_post_fgbg[i, ..., 0]
local_maxi = peak_local_max(test_images[i, ..., -1],
min_distance=15,
exclude_border=False,
indices=False,
labels=image)
markers = label(local_maxi)
segments = watershed(-distance, markers, mask=image)
watershed_images.append(segments)
watershed_images = np.array(watershed_images)
watershed_images = np.expand_dims(watershed_images, axis=-1)
# Plot the results
import matplotlib.pyplot as plt
index = np.random.randint(low=0, high=watershed_images.shape[0])
frame = np.random.randint(low=0, high=watershed_images.shape[1])
print('Image:', index)
print('Frame:', frame)
fig, axes = plt.subplots(ncols=3, nrows=2, figsize=(15, 15), sharex=True, sharey=True)
ax = axes.ravel()
ax[0].imshow(X_test[index, frame, ..., 0])
ax[0].set_title('Source Image')
ax[1].imshow(test_images_fgbg[index, frame, ..., 1])
ax[1].set_title('Segmentation Prediction')
ax[2].imshow(fg_thresh[index, frame, ..., 0], cmap='jet')
ax[2].set_title('Thresholded Segmentation')
ax[3].imshow(argmax_images[index, frame, ..., 0], cmap='jet')
ax[3].set_title('Watershed Transform')
ax[4].imshow(argmax_images_post_fgbg[index, frame, ..., 0], cmap='jet')
ax[4].set_title('Watershed Transform w/o Background')
ax[5].imshow(watershed_images[index, frame, ..., 0], cmap='jet')
ax[5].set_title('Watershed Segmentation')
fig.tight_layout()
plt.show()
from deepcell.utils.plot_utils import get_js_video
from IPython.display import HTML
HTML(get_js_video(watershed_images, batch=0, channel=0))
```
---
# 911 Calls Capstone Project
For this capstone project we will be analyzing some 911 call data from [Kaggle](https://www.kaggle.com/mchirico/montcoalert). The data contains the following fields:
* lat : String variable, Latitude
* lng: String variable, Longitude
* desc: String variable, Description of the Emergency Call
* zip: String variable, Zipcode
* title: String variable, Title
* timeStamp: String variable, YYYY-MM-DD HH:MM:SS
* twp: String variable, Township
* addr: String variable, Address
* e: String variable, Dummy variable (always 1)
Just go along with this notebook and try to complete the instructions or answer the questions in bold using your Python and Data Science skills!
## Data and Setup
____
** Import numpy and pandas **
```
import numpy as np
import pandas as pd
```
** Import visualization libraries and set %matplotlib inline. **
```
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
```
** Read in the csv file as a dataframe called df **
```
df = pd.read_csv('911.csv')
```
** Check the info() of the df **
```
df.head()
df.info()
```
** Check the head of df **
## Basic Questions
** What are the top 5 zipcodes for 911 calls? **
```
df['zip'].value_counts().head()
```
** What are the top 5 townships (twp) for 911 calls? **
```
df['twp'].value_counts().head(5)
```
** Take a look at the 'title' column, how many unique title codes are there? **
```
df['title'].value_counts().count()
df['title'].nunique()
```
## Creating new features
** In the titles column there are "Reasons/Departments" specified before the title code. These are EMS, Fire, and Traffic. Use .apply() with a custom lambda expression to create a new column called "Reason" that contains this string value.**
**For example, if the title column value is EMS: BACK PAINS/INJURY , the Reason column value would be EMS. **
```
df['reason'] = df['title'].apply(lambda x: x.split(':')[0])
```
** What is the most common Reason for a 911 call based off of this new column? **
```
df['reason'].value_counts()
df.head(4)
```
** Now use seaborn to create a countplot of 911 calls by Reason. **
```
sns.set_style('whitegrid')
sns.countplot(x='reason', data=df)
```
___
** Now let us begin to focus on time information. What is the data type of the objects in the timeStamp column? **
```
type(df['timeStamp'].iloc[0])
```
** You should have seen that these timestamps are still strings. Use [pd.to_datetime](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html) to convert the column from strings to DateTime objects. **
```
df['timeStamp'] = pd.to_datetime(df['timeStamp'])
time = df['timeStamp'].iloc[0]
time.dayofweek
```
** You can now grab specific attributes from a Datetime object by calling them. For example:**
time = df['timeStamp'].iloc[0]
time.hour
**You can use Jupyter's tab method to explore the various attributes you can call. Now that the timeStamp column contains actual DateTime objects, use .apply() to create 3 new columns called Hour, Month, and Day of Week. You will create these columns based off of the timeStamp column; reference the solutions if you get stuck on this step.**
```
# df['Hour'] = df['timeStamp'].iloc[0].hour
df['Hour'] = df['timeStamp'].apply(lambda time: time.hour)
df['Month'] = df['timeStamp'].apply(lambda time: time.month)
df['Day_Of_Week'] = df['timeStamp'].apply(lambda time: time.dayofweek)
dmap = {0:'Mon',1:'Tue',2:'Wed',3:'Thu',4:'Fri',5:'Sat',6:'Sun'}
df['Day_Of_Week'] = df['Day_Of_Week'].map(dmap)
```
** Notice how the Day of Week is an integer 0-6. Use the .map() with this dictionary to map the actual string names to the day of the week: **
dmap = {0:'Mon',1:'Tue',2:'Wed',3:'Thu',4:'Fri',5:'Sat',6:'Sun'}
```
sns.countplot(x='Day_Of_Week', data=df, hue='reason')
# To relocate the legend
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0., frameon=False)
# 'frameon' removes the border from the legend
```
** Now use seaborn to create a countplot of the Day of Week column with the hue based off of the Reason column. **
**Now do the same for Month:**
```
sns.countplot(x='Month', data=df, hue='reason')
# Relocate the legend
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0., frameon=False)
```
**Did you notice something strange about the Plot?**
_____
** You should have noticed it was missing some months. Let's see if we can fill in this information by plotting it in another way, such as a simple line plot that fills in the missing months. To do this, we'll need to do some work with pandas... **
```
df.head(3)
```
** Now create a groupby object called byMonth, where you group the DataFrame by the month column and use the count() method for aggregation. Use the head() method on this returned DataFrame. **
```
byMonth = df.groupby('Month').count()
byMonth.head()
```
** Now create a simple plot off of the dataframe indicating the count of calls per month. **
```
byMonth['Day_Of_Week'].plot()
```
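If some months are absent from the data, one optional way (not part of the original exercise) to make the gaps explicit is to reindex over all 12 months before plotting:
```
# months with no calls show up as gaps in the line
byMonth.reindex(range(1, 13))['twp'].plot()
```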
** Now see if you can use seaborn's lmplot() to create a linear fit on the number of calls per month. Keep in mind you may need to reset the index to a column. **
```
# resetting the index
reset_index = byMonth.reset_index()
sns.lmplot(x='Month', y='twp', data=reset_index)
```
**Create a new column called 'Date' that contains the date from the timeStamp column. You'll need to use apply along with the .date() method. **
```
df.head(2)
df['date'] = df['timeStamp'].apply(lambda x:x.date())
df['date'].head(2)
byDate = df.groupby('date').count()
byDate.head(2)
```
** Now groupby this Date column with the count() aggregate and create a plot of counts of 911 calls.**
```
# sns.countplot(x='twp', data=byDate)
byDate['twp'].plot()
plt.tight_layout()
```
** Now recreate this plot but create 3 separate plots with each plot representing a Reason for the 911 call**
```
df['reason'].value_counts()
df[df['reason']=='EMS'].groupby('date').count()['twp'].plot()
plt.title('EMS')
plt.tight_layout()
df[df['reason']=='Traffic'].groupby('date').count()['twp'].plot()
plt.title('Traffic')
plt.tight_layout()
df[df['reason']=='Fire'].groupby('date').count()['twp'].plot()
plt.title('Fire')
plt.tight_layout()
```
____
** Now let's move on to creating heatmaps with seaborn and our data. We'll first need to restructure the dataframe so that the columns become the Hours and the Index becomes the Day of the Week. There are lots of ways to do this, but I would recommend trying to combine groupby with an [unstack](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html) method. Reference the solutions if you get stuck on this!**
```
dayHour = df.groupby(by=['Day_Of_Week', 'Hour']).count()['reason'].unstack()
dayHour
```
** Now create a HeatMap using this new DataFrame. **
```
plt.figure(figsize=(12,6))
sns.heatmap(dayHour, cmap='viridis')
```
** Now create a clustermap using this DataFrame. **
```
plt.figure(figsize=(12,4))
sns.clustermap(dayHour, cmap='mako')
```
** Now repeat these same plots and operations, for a DataFrame that shows the Month as the column. **
```
dayMonth = df.groupby(by=['Day_Of_Week', 'Month']).count()['reason'].unstack()
dayMonth
plt.figure(figsize=(12, 5))
sns.heatmap(dayMonth, cmap='YlGnBu')
myCustomColors = ['#ffc0cb', '#292E49', '#7303c0', '#03001e']
sns.clustermap(dayMonth, cmap=myCustomColors)
```
**Continue exploring the Data however you see fit!**
# Great Job!
---
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
%matplotlib inline
sns.set_style('whitegrid')
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict,cross_val_score
from sklearn.metrics import roc_auc_score,roc_curve
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.metrics import mean_squared_error
from sklearn.svm import SVC
```
# Implemented Methods
```
def getCost(y_test,prediction):
'''
evaluate the total cost without modified threshold
'''
tn, fp, fn, tp = confusion_matrix(y_test,prediction).ravel()
confusionData = [[tn,fp],[fn,tp]]
print("Confusion Matrix\n")
    print(pd.DataFrame(confusionData, columns=['Pred Neg', 'Pred Pos'], index=['Actual Neg', 'Actual Pos']))
cost = 10*fp+500*fn
values = {'Score':[cost],'Number of Type 1 faults':[fp],'Number of Type 2 faults':[fn]}
print("\n\nCost\n")
print(pd.DataFrame(values))
def getCostWithThreshold(X_test,y_test,prediction,threshold,model):
"""
evaluate the total cost with modified threshold
model = model instance
"""
THRESHOLD = threshold #optimal one chosen from the roc curve
thresholdPrediction = np.where(model.predict_proba(X_test)[:,1] > THRESHOLD, 1,0)
tn, fp, fn, tp = confusion_matrix(y_test,thresholdPrediction).ravel()
cost = 10*fp+500*fn
values = {'Score':[cost],'Number of Type 1 faults':[fp],'Number of Type 2 faults':[fn]}
    return pd.DataFrame(values)
def aucForThreshold(X_test,y_test,model):
"""
return roc auc curve for determining the optimal threshold
model = desired model's instance
"""
from sklearn.metrics import roc_auc_score,roc_curve
logit_roc_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:,1])
fpr, tpr, thresholds = roc_curve(y_test,model.predict_proba(X_test)[:,1])
plt.figure()
plt.plot(fpr, tpr, label='Logistic Regression (area = %0.2f)' % logit_roc_auc)
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="upper center")
plt.savefig('Log_ROC')
# create the axis of thresholds (scores)
ax2 = plt.gca().twinx()
ax2.plot(fpr, thresholds, markeredgecolor='g',linestyle='dashed', color='g',label = 'Threshold')
ax2.set_ylabel('Threshold',color='g')
ax2.set_ylim([thresholds[-1],thresholds[0]])
ax2.set_xlim([fpr[0],fpr[-1]])
plt.legend(loc="lower right")
plt.savefig('roc_and_threshold.png')
plt.show()
def evaluationScored(y_test,prediction):
acc = metrics.accuracy_score(y_test, prediction)
r2 = metrics.r2_score(y_test, prediction)
f1 = metrics.f1_score(y_test, prediction)
mse = metrics.mean_squared_error(y_test, prediction)
values = {'Accuracy Score':[acc],'R2':[r2],'F1':[f1],'MSE':[mse]}
print("\n\nScores")
print (pd.DataFrame(values))
from sklearn.metrics import make_scorer
def my_scorer(y_true,y_pred):
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
cost = 10*fp+500*fn
return cost
my_func = make_scorer(my_scorer, greater_is_better=False)
def predictionWithStandardScaling(testingData, model):
    """
    Scale the input testing data using StandardScaler, then predict with the given model
    """
    from sklearn.preprocessing import StandardScaler  # not imported at the top of the notebook
    scalerTesting = StandardScaler()
    scalerTesting.fit(testingData)
    testingData_scaled = scalerTesting.transform(testingData)
    return model.predict(testingData_scaled)
```
# Preprocessing
# Training Data
```
training_data = pd.read_csv("../Data/aps_failure_training_set.csv",na_values="na")
training_data.head()
plt.figure(figsize=(20,12))
sns.heatmap(training_data.isnull(),yticklabels=False,cbar=False,cmap = 'viridis')
missing = training_data.isna().sum().to_frame().sort_values(by=0, ascending = False)
missing.plot.bar(figsize=(50,10))
```
# Missing value handling
We are going to use different approaches for the missing values:
1. Removing the columns having 80% (or more) missing values (own intuition)
2. Keeping all the features
3. Later, we will try to implement some feature engineering

For the rest of the missing values, we are replacing them with their column mean() for now (ref.). A sketch of the first approach is given below; the second approach is implemented in the cells that follow.
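A quick sketch of the first approach (dropping mostly-empty columns); the 80% threshold is our own choice and this reduced frame is not used in the cells below:
```
# First approach (sketch): drop columns with more than 80% missing values
missing_fraction = training_data.isna().mean()
cols_to_drop = missing_fraction[missing_fraction > 0.8].index
reduced_training_data = training_data.drop(columns=cols_to_drop)
print('Dropped {} of {} columns'.format(len(cols_to_drop), training_data.shape[1]))
```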
**Second Approach**
```
sample_training_data = training_data
sample_training_data.fillna(sample_training_data.mean(),inplace=True)
#after replacing with mean()
plt.figure(figsize=(20,12))
sns.heatmap(sample_training_data.isnull(),yticklabels=False,cbar=False,cmap='viridis')
#as all the other values are numerical except Class column so we can replace them with 1 and 0
sample_training_data = sample_training_data.replace('neg',0)
sample_training_data = sample_training_data.replace('pos',1)
sample_training_data.head()
```
# Testing Data preprocessing
```
testing_data = pd.read_csv("../Data/aps_failure_test_set.csv",na_values="na")
testing_data.head()
sample_testing_data = testing_data
sample_testing_data.fillna(sample_testing_data.mean(),inplace=True)
#after replacing with mean()
plt.figure(figsize=(20,12))
sns.heatmap(sample_testing_data.isnull(),yticklabels=False,cbar=False,cmap='viridis')
#as all the other values are numerical except Class column so we can replace them with 1 and 0
sample_testing_data = sample_testing_data.replace('neg',0)
sample_testing_data = sample_testing_data.replace('pos',1)
sample_testing_data.head()
```
# Model Implementation
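The notebook stops at this heading, so here is a minimal sketch of what a first model could look like using the helpers defined above. It assumes the label column is named `class` (as in the public APS failure dataset) and that the mean imputation above left no missing values; the choice of model, scaling, and threshold tuning would normally follow.
```
# Minimal sketch (assumptions: label column is 'class'; no scaling or threshold tuning yet)
X_train = sample_training_data.drop('class', axis=1)
y_train = sample_training_data['class']
X_test = sample_testing_data.drop('class', axis=1)
y_test = sample_testing_data['class']

logmodel = LogisticRegression(class_weight='balanced', max_iter=1000)
logmodel.fit(X_train, y_train)
prediction = logmodel.predict(X_test)

getCost(y_test, prediction)           # challenge cost: 10*FP + 500*FN
evaluationScored(y_test, prediction)  # accuracy / R2 / F1 / MSE
```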
---
# Solving the Inverse Scattering problem using PDE constrained optimization
We want to solve it using what is often referred to as the near-field map.
We start by loading all the necessary libraries.
```
# to import the library without installing it
import context
import numpy as np
import scipy as sp
import matplotlib
import matplotlib.pyplot as plt
from functools import partial
from jax import jit
import jax
import jax.numpy as jnp
import time
import jax.scipy.optimize
# this is the package to solve the Lippmann-Schwinger equation
import jax_ls
```
We define the size of the domain; in this case we consider the domain of interest
$$\Omega = [-0.5, 0.5] \times [-0.5, 0.5]$$
along with the number of degrees of freedom in each direction and the frequency.
```
# size of the domain in x and y
ax = 1.0
ay = 1.0
# number of discretization points per dimension
n = 2**6
m = n
# we choose to have 4 points per wavelength
omega = 2*jnp.pi*(n//8)
# grid spacing
hx = 1/(n-1)
sampling_radious = 1.0
n_angles = n
```
We store all the information in a special tuple, which contains all the parameters necessary
```
# initialize the parameters
params_nf = jax_ls.init_params_near_field(ax, ay, n, m,\
sampling_radious,\
n_angles, omega)
```
We define and sample the perturbation that we want to reconstruct. In this case it is just a sum of three Gaussian bumps.
```
# definition of the perturbation by the lense
@jit
def perturbation(x,y):
return 1.0*jnp.exp(-500*(jnp.square(x+0.1) + jnp.square(y+0.2)))\
+ 1.0*jnp.exp(-500*(jnp.square(x-0.1) + jnp.square(y-0.1)))\
+ 1.0*jnp.exp(-500*(jnp.square(x-0.15) + jnp.square(y+0.3)))
# we sample the perturbation
nu = perturbation(params_nf.ls_params.X, params_nf.ls_params.Y)
nu_vect = jnp.reshape(nu, (-1,))
```
Let's take a quick look at the perturbation that we want to reconstruct
```
plt.figure(figsize=(8,5))
plt.imshow(jnp.real(nu_vect).reshape((n,n)))
plt.xticks([]); plt.yticks([]);
plt.title('Perturbation to reconstruct', color='black')
plt.colorbar()
```
### Generating the data
We define the near field map, and we produce our data.
The data itself is represented by a forward operator defined as
$$\mathcal{F}[\nu]$$
acting on $\nu$, which corresponds to a compactly supported perturbation of an otherwise constant background medium. In particular, the forward map corresponds to the impulse response of the perturbation $\nu$ to a probing wave.
More precisely, the scattered field satisfies
$$\Delta u + \omega^2 (1 + \nu) u = -\omega^2 \nu u_i$$
where $u_i$ is the probing wave impinging on the perturbation $\nu$ and $u$ is the scattered wave, which also needs to satisfy the Sommerfeld radiation condition at infinity.
In this context there are several ways to choose the probing wave; the most commonly used alternatives are either
- a plane wave $u_i(\mathbf{x}) = e^{i\omega \mathbf{s} \cdot \mathbf{x}}$, where $\mathbf{s}$ is the incoming direction, or
- a point source $u_i(\mathbf{x}) = \frac{i}{4} H^{(1)}_0(\omega | \mathbf{x}-\mathbf{s}|)$, where $\mathbf{s}$ is the location of the source.
In this case we choose the latter, and we let the incident wave be indexed by $\mathbf{s}$, which for simplicity lies on $\mathbb{S}$; a small sketch of such a point source is given below.
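Purely as an illustration (the pipeline above only takes $\nu$ as input, so this is not needed to run the notebook), such a point source can be sampled on the computational grid with SciPy's Hankel function; the source location `(1.0, 0.0)` below is an arbitrary choice on the sampling circle:
```
# illustrative sketch: sample u_i(x) = (i/4) H_0^(1)(omega |x - s|) on the grid
from scipy.special import hankel1

def incident_point_source(X, Y, sx, sy, omega):
    # distance from every grid point to the source located at (sx, sy)
    r = np.sqrt((np.asarray(X) - sx)**2 + (np.asarray(Y) - sy)**2)
    return 0.25j * hankel1(0, omega * r)

u_inc = incident_point_source(params_nf.ls_params.X, params_nf.ls_params.Y, 1.0, 0.0, omega)
```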
For each incoming wave $u_i^{\mathbf{s}}$ we solve the Helmholtz equation,
$$\left \{ \begin{array}{l} \Delta u^{\mathbf{s}}+ \omega^2 (1 + \nu) u^{\mathbf{s}} = -\omega^2 \nu u_i^{\mathbf{s}} \\
\partial_r u^{\mathbf{s}} - i\omega u^{\mathbf{s}} = \mathcal{O}(r^{-1/2})
\end{array} \right .
$$
The solution is then sampled on a circle around $\nu$, i.e. $u^{\mathbf{s}}(\mathbf{r})$ for $\mathbf{r} \in \mathbb{S}$.
Therefore the near-field map can be indexed by $\mathbf{r}, \mathbf{s} \in \mathbb{S}$ such that
$$ \left( \mathcal{F}[\nu] \right)_{\mathbf{r},\mathbf{s}} = u^{\mathbf{s}}(\mathbf{r})$$
<img src="images/near_field_sketch.png" width=500 height=500 />
```
# jitting the near field map (vectorized) with the custom vjp
near_field_vjp = jit(partial(jax_ls.near_field_map_vect_vjp, params_nf))
# reference wavefield (i.e. data)
data_near_field = near_field_vjp(nu_vect)
```
We plot the near field, which constitutes the data we want to fit:
```
plt.figure(figsize=(16,5))
plt.subplot(1, 2, 1)
plt.imshow(jnp.real(data_near_field).reshape((n,n)))
plt.xticks([]); plt.yticks([]);
plt.title('Real part of near field', color='black')
plt.colorbar()
plt.subplot(1, 2, 2)
plt.imshow(jnp.imag(data_near_field).reshape((n,n)))
plt.xticks([]); plt.yticks([]);
plt.title('Imag part of near field', color='black')
plt.colorbar()
```
### Defining the minimization problem
Now we define the loss with respect to an arbitrary perturbation
$$ \ell(\nu) = \frac{1}{2}\| \mathcal{F}[\nu] - D \|^2_{L^2}$$
In this case $D$ is the data generated above.
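For reference, the misfit written out by hand would look roughly like the sketch below; the notebook itself relies on the library routine `jax_ls.near_field_l2_loss`, which also carries the custom adjoint rule.
```
# explicit form of the L2 misfit, for illustration only
def l2_loss_explicit(nu):
    residual = near_field_vjp(nu) - data_near_field
    return 0.5 * jnp.sum(jnp.abs(residual)**2)
```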
```
# jitting the near field map (vectorized) with the custom vjp
loss_vjp = jit(partial(jax_ls.near_field_l2_loss, params_nf, data_near_field.reshape((m,n))))
```
We have defined the gradient through the `custom_vjp` interface; it can be computed efficiently using adjoint-state methods.
```
nabla_loss = jit(jax.grad(loss_vjp))
grad_loss_0 = nabla_loss(jnp.zeros(*nu_vect.shape))
plt.figure(figsize=(16,5))
plt.subplot(1, 2, 1)
plt.imshow(jnp.real(grad_loss_0).reshape((n,n)))
plt.xticks([]); plt.yticks([]);
plt.title('Real part of the gradient for the constant medium', color='black')
plt.colorbar()
plt.subplot(1, 2, 2)
plt.imshow(jnp.imag(grad_loss_0).reshape((n,n)))
plt.xticks([]); plt.yticks([]);
plt.title('Imag part of the gradient for the constant medium', color='black')
plt.colorbar()
```
We run the PDE-constrained optimization starting from a zero initial guess.
We start by computing the loss at this initial guess.
```
# initial guess
nu_0 = jnp.zeros(*nu_vect.shape)
plt.figure(figsize=(8,5))
plt.imshow(jnp.real(nu_0).reshape((n,n)))
plt.xticks([]); plt.yticks([]);
plt.title('Initial guess', color='black')
plt.colorbar()
# initial loss (this also triggers the compilation)
print("initial loss with zero initial guess %e"%(loss_vjp(nu_0)))
```
We run the optimization algorithm (in this case only BFGS is implemented in JAX), and we time it (it takes around 40 s on an RTX A6000).
```
%%time
opt_result = jax.scipy.optimize.minimize(loss_vjp, x0=nu_0, method="bfgs")
opt_nu = opt_result.x
# printing the number of evaluations
print("Number of function evaluations %d"%(opt_result.nfev))
print("Number of gradient evaluations %d"%(opt_result.njev))
```
We check the final loss; it should be around $10^{-6}$.
```
print("Final loss with zero initial guess %e"%(loss_vjp(opt_nu)))
```
We check the error of the reconstruction compared to the ground truth:
```
print("Relative Error in the reconstruction %e"%(jnp.linalg.norm(nu_vect - opt_nu)/jnp.linalg.norm(nu_vect)))
# plotting the reconstruction, the reference, and the error
plt.figure(figsize=(24,5))
plt.subplot(1, 3, 1)
plt.imshow(jnp.real(opt_nu).reshape((n,n)))
plt.xticks([]); plt.yticks([]);
plt.title('reconstructed media', color='black')
plt.colorbar()
plt.subplot(1, 3, 2)
plt.imshow(jnp.real(nu_vect).reshape((n,n)))
plt.xticks([]); plt.yticks([]);
plt.title('reference media', color='black')
plt.colorbar()
plt.subplot(1, 3, 3)
plt.imshow(jnp.abs(nu_vect-opt_nu).reshape((n,n)))
plt.xticks([]); plt.yticks([]);
plt.title('error', color='black')
plt.colorbar()
```
# Online reinforcement learning with Ray AIR
In this example, we'll train a reinforcement learning agent using online training.
Online training means that data is sampled from the environment while we are running the algorithm. In contrast, offline training uses data that was stored on disk beforehand.
Let's start with installing our dependencies:
```
!pip install -qU "ray[rllib]" gym
```
Now we can run some imports:
```
import argparse
import gym
import os
import numpy as np
import ray
from ray.air import Checkpoint
from ray.air.config import RunConfig
from ray.air.predictors.integrations.rl.rl_predictor import RLPredictor
from ray.air.train.integrations.rl.rl_trainer import RLTrainer
from ray.air.result import Result
from ray.rllib.agents.marwil import BCTrainer
from ray.tune.tuner import Tuner
```
Here we define the training function. It will create an `RLTrainer` using the `PPO` algorithm and kick off training on the `CartPole-v0` environment:
```
def train_rl_ppo_online(num_workers: int, use_gpu: bool = False) -> Result:
print("Starting online training")
trainer = RLTrainer(
run_config=RunConfig(stop={"training_iteration": 5}),
scaling_config={
"num_workers": num_workers,
"use_gpu": use_gpu,
},
algorithm="PPO",
config={
"env": "CartPole-v0",
"framework": "tf",
},
)
# Todo (krfricke/xwjiang): Enable checkpoint config in RunConfig
# result = trainer.fit()
tuner = Tuner(
trainer,
_tuner_kwargs={"checkpoint_at_end": True},
)
result = tuner.fit()[0]
return result
```
Once we have trained our RL policy, we want to evaluate it in a fresh environment. For this, we also define a utility function:
```
def evaluate_using_checkpoint(checkpoint: Checkpoint, num_episodes) -> list:
predictor = RLPredictor.from_checkpoint(checkpoint)
env = gym.make("CartPole-v0")
rewards = []
for i in range(num_episodes):
obs = env.reset()
reward = 0.0
done = False
while not done:
action = predictor.predict([obs])
obs, r, done, _ = env.step(action[0])
reward += r
rewards.append(reward)
return rewards
```
Let's put it all together. First, we run training:
```
result = train_rl_ppo_online(num_workers=2, use_gpu=False)
```
And then, using the obtained checkpoint, we evaluate the policy on a fresh environment:
```
num_eval_episodes = 3
rewards = evaluate_using_checkpoint(result.checkpoint, num_episodes=num_eval_episodes)
print(f"Average reward over {num_eval_episodes} episodes: " f"{np.mean(rewards)}")
```
### _This is the topic modelling Python program of workgroup No. 4, for the final project of the Digital Humanities Lab course at the Universiteit van Amsterdam.
```
# Universiteit van Amsterdam
# Digital Humanities Lab, WG04
# Final project - Topic modelling program
# project.ipynb
```
### _Importing all the necessary tools and libraries
```
import random
import pandas as pd
from nltk.stem.porter import *
from nltk.corpus import stopwords
from gensim import corpora, models
from nltk.tokenize import word_tokenize
import os, fitz, string, nltk, PyPDF2, gensim
from gensim.parsing.preprocessing import STOPWORDS
from nltk.stem import WordNetLemmatizer, SnowballStemmer
from functions import readPDF, processPDF, lemmatizeAndStem, toDataFrame, listOfWords, wordsPerArticle, tfidfCorpus
```
### _Reading all the PDF files that can be found in the given folder, preprocessing, tokenizing, lemmatizing and stemming them, and then turning them into a DataFrame for further usage.
```
# Read the files and convert them from PDF to strings/dicts
articles = readPDF('open access articles')
# Basic text processing: tokenizing and filtering
processedArticles = processPDF(articles)
# Build the DataFrame used for the gensim/LDA steps
dataFrame = toDataFrame(processedArticles)
```
### _A dataframe containing the PDF file names and their contents, already processed.
#### _For further information on how they are processed, please check the functions.py file and the processPDF function, or use "pydoc functions.functions"
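The helpers themselves live in functions.py and are not shown in this notebook. Purely to illustrate the kind of step `lemmatizeAndStem` performs (the project's actual implementation may differ), a typical lemmatize-and-stem routine looks like this:
```
# illustrative sketch only; see functions.py for the project's actual implementation
stemmer = SnowballStemmer('english')
lemmatizer = WordNetLemmatizer()

def lemmatize_and_stem(token):
    # lemmatize as a verb first, then stem the result
    return stemmer.stem(lemmatizer.lemmatize(token, pos='v'))

def preprocess(text):
    # tokenize, drop stopwords and very short tokens, then lemmatize and stem
    return [lemmatize_and_stem(tok) for tok in gensim.utils.simple_preprocess(text)
            if tok not in STOPWORDS and len(tok) > 3]
```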
```
display(dataFrame)
# note: these assignments shadow the imported helper functions of the same name
listOfWords = listOfWords(dataFrame)
wordsPerArticle = wordsPerArticle(dataFrame)
tfidfCorpus = tfidfCorpus(listOfWords, dataFrame)
```
### _Generating a random number of topics with a simple LDA model, using the list of words per article.
```
lda_model = gensim.models.LdaMulticore(wordsPerArticle, num_topics=random.randrange(10,26), id2word=listOfWords, passes=random.randrange(5,11), workers=3)
for index, topic in lda_model.print_topics(-1):
print('Topic: {} \nWords: {}'.format(index, topic))
```
### _Generating a random number of topics with a TF-IDF LDA model, using the TF-IDF model's corpus of words
```
lda_model_tfidf = gensim.models.LdaMulticore(tfidfCorpus, num_topics=random.randrange(10,26), id2word=listOfWords, passes=random.randrange(5,11), workers=3)
for index, topic in lda_model_tfidf.print_topics(-1):
print('Topic: {} Word: {}'.format(index, topic))
```
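The number of topics and passes are drawn at random above. If one wanted to choose them less arbitrarily, a topic-coherence score could be compared across a few candidate settings; the following is a sketch (not part of the original program) using gensim's `CoherenceModel` with the corpus-based `u_mass` measure:
```
# sketch: compare u_mass coherence across candidate topic counts (illustrative only)
from gensim.models import CoherenceModel

for k in (10, 15, 20, 25):
    candidate = gensim.models.LdaMulticore(wordsPerArticle, num_topics=k, id2word=listOfWords, passes=5, workers=3)
    coherence = CoherenceModel(model=candidate, corpus=wordsPerArticle, dictionary=listOfWords, coherence='u_mass').get_coherence()
    print('num_topics={}: u_mass coherence = {:.3f}'.format(k, coherence))
```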
### _Evaluating the performance and accuracy of the simple LDA model, with a randomly chosen article's index.
```
randomArticle = random.randrange(0,15)
print("The random generated article's index: ", randomArticle)
for index, score in sorted(lda_model[wordsPerArticle[randomArticle]], key=lambda tup: -1*tup[1]):
print("\nScore: {}\t \nTopic: {}".format(score, lda_model.print_topic(index, 10)))
```
### _Evaluating the performance and accuracy of the TF-IDF LDA model, with a randomly chosen article's index.
```
randomArticle_tfidf = random.randrange(0,15)
print("The random generated article's index: ", randomArticle_tfidf)
for index, score in sorted(lda_model_tfidf[wordsPerArticle[randomArticle_tfidf]], key=lambda tup: -1*tup[1]):
print("\nScore: {}\t \nTopic: {}".format(score, lda_model_tfidf.print_topic(index, 10)))
```
## _Acknowledgements and citations
#### _To build our topic modelling tool for the final project of this course, Susan Li's article and GitHub repository were a great help; we adopted some of her ideas and methods, such as the lemmatizing and stemming of words, into our program, and followed the pipeline of her program.
#### _The article: https://towardsdatascience.com/topic-modeling-and-latent-dirichlet-allocation-in-python-9bf156893c24
#### _The repository: https://github.com/susanli2016/NLP-with-Python/blob/master/LDA_news_headlines.ipynb
#### _Every other part of the project was coded by ourselves, and we only used the documentation of the imported modules.
### _Universiteit van Amsterdam
### _Digital Humanities Lab - WG04
### _Final project
#### _This Python program for the project was coded entirely by Tamás Molnár
## GWAS Tutorial
This notebook is designed to provide a broad overview of Hail's functionality, with emphasis on the functionality to manipulate and query a genetic dataset. We walk through a genome-wide SNP association test, and demonstrate the need to control for confounding caused by population stratification.
```
import hail as hl
hl.init()
```
If the above cell ran without error, we're ready to go!
Before using Hail, we import some standard Python libraries for use throughout the notebook.
```
from hail.plot import show
from pprint import pprint
hl.plot.output_notebook()
```
### Download public 1000 Genomes data
We use a small chunk of the public 1000 Genomes dataset, created by downsampling the genotyped SNPs in the full VCF to about 20 MB. We will also integrate sample and variant metadata from separate text files.
These files are hosted by the Hail team in a public Google Storage bucket; the following cell downloads that data locally.
```
hl.utils.get_1kg('data/')
```
### Importing data from VCF
The data in a VCF file is naturally represented as a Hail [MatrixTable](https://hail.is/docs/0.2/hail.MatrixTable.html#hail.MatrixTable). By first importing the VCF file and then writing the resulting MatrixTable in Hail's native file format, all downstream operations on the VCF's data will be MUCH faster.
```
hl.import_vcf('data/1kg.vcf.bgz').write('data/1kg.mt', overwrite=True)
```
Next we read the written file, assigning the variable `mt` (for `m`atrix `t`able).
```
mt = hl.read_matrix_table('data/1kg.mt')
```
### Getting to know our data
It's important to have easy ways to slice, dice, query, and summarize a dataset. Some of this functionality is demonstrated below.
The [rows](https://hail.is/docs/0.2/hail.MatrixTable.html#hail.MatrixTable.rows) method can be used to get a table with all the row fields in our MatrixTable.
We can use `rows` along with [select](https://hail.is/docs/0.2/hail.Table.html#hail.Table.select) to pull out 5 variants. The `select` method takes either a string referring to a field name in the table, or a Hail [Expression](https://hail.is/docs/0.2/overview/expressions.html). Here, we leave the arguments blank to keep only the row key fields, `locus` and `alleles`.
Use the `show` method to display the variants.
```
mt.rows().select().show(5)
```
Alternatively:
```
mt.row_key.show(5)
```
Here is how to peek at the first few sample IDs:
```
mt.s.show(5)
```
To look at the first few genotype calls, we can use [entries](https://hail.is/docs/0.2/hail.MatrixTable.html#hail.MatrixTable.entries) along with `select` and `take`. The `take` method collects the first n rows into a list. Alternatively, we can use the `show` method, which prints the first n rows to the console in a table format.
Try changing `take` to `show` in the cell below.
```
mt.entry.take(5)
```
### Adding column fields
A Hail MatrixTable can have any number of row fields and column fields for storing data associated with each row and column. Annotations are usually a critical part of any genetic study. Column fields are where you'll store information about sample phenotypes, ancestry, sex, and covariates. Row fields can be used to store information like gene membership and functional impact for use in QC or analysis.
In this tutorial, we demonstrate how to take a text file and use it to annotate the columns in a MatrixTable.
The file provided contains the sample ID, the population and "super-population" designations, the sample sex, and two simulated phenotypes (one binary, one discrete).
This file can be imported into Hail with [import_table](https://hail.is/docs/0.2/methods/impex.html#hail.methods.import_table). This function produces a [Table](https://hail.is/docs/0.2/hail.Table.html#hail.Table) object. Think of this as a Pandas or R dataframe that isn't limited by the memory on your machine -- behind the scenes, it's distributed with Spark.
```
table = (hl.import_table('data/1kg_annotations.txt', impute=True)
.key_by('Sample'))
```
A good way to peek at the structure of a `Table` is to look at its `schema`.
```
table.describe()
```
To peek at the first few values, use the `show` method:
```
table.show(width=100)
```
Now we'll use this table to add sample annotations to our dataset, storing the annotations in column fields in our MatrixTable. First, we'll print the existing column schema:
```
print(mt.col.dtype)
```
We use the [annotate_cols](https://hail.is/docs/0.2/hail.MatrixTable.html#hail.MatrixTable.annotate_cols) method to join the table with the MatrixTable containing our dataset.
```
mt = mt.annotate_cols(pheno = table[mt.s])
mt.col.describe()
```
### Query functions and the Hail Expression Language
Hail has a number of useful query functions that can be used for gathering statistics on our dataset. These query functions take Hail Expressions as arguments.
We will start by looking at some statistics of the information in our table. The [aggregate](https://hail.is/docs/0.2/hail.Table.html#hail.Table.aggregate) method can be used to aggregate over rows of the table.
`counter` is an aggregation function that counts the number of occurrences of each unique element. We can use this to pull out the population distribution by passing in a Hail Expression for the field that we want to count by.
```
pprint(table.aggregate(hl.agg.counter(table.SuperPopulation)))
```
`stats` is an aggregation function that produces some useful statistics about numeric collections. We can use this to see the distribution of the CaffeineConsumption phenotype.
```
pprint(table.aggregate(hl.agg.stats(table.CaffeineConsumption)))
```
However, these metrics aren't perfectly representative of the samples in our dataset. Here's why:
```
table.count()
mt.count_cols()
```
Since there are fewer samples in our dataset than in the full thousand genomes cohort, we need to look at annotations on the dataset. We can use [aggregate_cols](https://hail.is/docs/0.2/hail.MatrixTable.html#hail.MatrixTable.aggregate_cols) to get the metrics for only the samples in our dataset.
```
mt.aggregate_cols(hl.agg.counter(mt.pheno.SuperPopulation))
pprint(mt.aggregate_cols(hl.agg.stats(mt.pheno.CaffeineConsumption)))
```
The functionality demonstrated in the last few cells isn't anything especially new: it's certainly not difficult to ask these questions with Pandas or R dataframes, or even Unix tools like `awk`. But Hail can use the same interfaces and query language to analyze collections that are much larger, like the set of variants.
Here we calculate the counts of each of the 12 possible unique SNPs (4 choices for the reference base * 3 choices for the alternate base).
To do this, we need to get the alternate allele of each variant and then count the occurrences of each unique ref/alt pair. This can be done with Hail's `counter` function.
```
snp_counts = mt.aggregate_rows(hl.agg.counter(hl.Struct(ref=mt.alleles[0], alt=mt.alleles[1])))
pprint(snp_counts)
```
We can list the counts in descending order using Python's Counter class.
```
from collections import Counter
counts = Counter(snp_counts)
counts.most_common()
```
It's nice to see that we can actually uncover something biological from this small dataset: we see that these frequencies come in pairs. C/T and G/A are actually the same mutation, just viewed from opposite strands. Likewise, T/A and A/T are the same mutation on opposite strands. There's a 30x difference between the frequency of C/T and A/T SNPs. Why?
The same Python, R, and Unix tools could do this work as well, but we're starting to hit a wall - the latest [gnomAD release](http://gnomad.broadinstitute.org/) publishes about 250 million variants, and that won't fit in memory on a single computer.
What about genotypes? Hail can query the collection of all genotypes in the dataset, and this is getting large even for our tiny dataset. Our 284 samples and 10,000 variants produce nearly 3 million unique genotypes. The gnomAD dataset has about **5 trillion** unique genotypes.
Hail plotting functions allow Hail fields as arguments, so we can pass in the DP field directly here. If the range and bins arguments are not set, this function will compute the range based on minimum and maximum values of the field and use the default 50 bins.
```
p = hl.plot.histogram(mt.DP, range=(0,30), bins=30, title='DP Histogram', legend='DP')
show(p)
```
### Quality Control
QC is where analysts spend most of their time with sequencing datasets. QC is an iterative process, and is different for every project: there is no "push-button" solution for QC. Each time the Broad collects a new group of samples, it finds new batch effects. However, by practicing open science and discussing the QC process and decisions with others, we can establish a set of best practices as a community.
QC is entirely based on the ability to understand the properties of a dataset. Hail attempts to make this easier by providing the [sample_qc](https://hail.is/docs/0.2/methods/genetics.html#hail.methods.sample_qc) function, which produces a set of useful metrics and stores them in a column field.
```
mt.col.describe()
mt = hl.sample_qc(mt)
mt.col.describe()
```
Plotting the QC metrics is a good place to start.
```
p = hl.plot.histogram(mt.sample_qc.call_rate, range=(.88,1), legend='Call Rate')
show(p)
p = hl.plot.histogram(mt.sample_qc.gq_stats.mean, range=(10,70), legend='Mean Sample GQ')
show(p)
```
Often, these metrics are correlated.
```
p = hl.plot.scatter(mt.sample_qc.dp_stats.mean, mt.sample_qc.call_rate, xlabel='Mean DP', ylabel='Call Rate')
show(p)
```
Removing outliers from the dataset will generally improve association results. We can make arbitrary cutoffs and use them to filter:
```
mt = mt.filter_cols((mt.sample_qc.dp_stats.mean >= 4) & (mt.sample_qc.call_rate >= 0.97))
print('After filter, %d/284 samples remain.' % mt.count_cols())
```
Next is genotype QC. It's a good idea to filter out genotypes where the reads aren't where they should be: if we find a genotype called homozygous reference with >10% alternate reads, a genotype called homozygous alternate with >10% reference reads, or a genotype called heterozygote without a ref / alt balance near 1:1, it is likely to be an error.
In a low-depth dataset like 1KG, it is hard to detect bad genotypes using this metric, since a read ratio of 1 alt to 10 reference can easily be explained by binomial sampling. However, in a high-depth dataset, a read ratio of 10:100 is a sure cause for concern!
```
ab = mt.AD[1] / hl.sum(mt.AD)
filter_condition_ab = ((mt.GT.is_hom_ref() & (ab <= 0.1)) |
(mt.GT.is_het() & (ab >= 0.25) & (ab <= 0.75)) |
(mt.GT.is_hom_var() & (ab >= 0.9)))
fraction_filtered = mt.aggregate_entries(hl.agg.fraction(~filter_condition_ab))
print(f'Filtering {fraction_filtered * 100:.2f}% entries out of downstream analysis.')
mt = mt.filter_entries(filter_condition_ab)
```
Variant QC is a bit more of the same: we can use the [variant_qc](https://hail.is/docs/0.2/methods/genetics.html?highlight=variant%20qc#hail.methods.variant_qc) function to produce a variety of useful statistics, plot them, and filter.
```
mt = hl.variant_qc(mt)
mt.row.describe()
```
These statistics actually look pretty good: we don't need to filter this dataset. Most datasets require thoughtful quality control, though. The [filter_rows](https://hail.is/docs/0.2/hail.MatrixTable.html#hail.MatrixTable.filter_rows) method can help!
### Let's do a GWAS!
First, we need to restrict to variants that are:
- common (we'll use a cutoff of 1%)
- not so far from [Hardy-Weinberg equilibrium](https://en.wikipedia.org/wiki/Hardy%E2%80%93Weinberg_principle) as to suggest sequencing error
```
mt = mt.filter_rows(mt.variant_qc.AF[1] > 0.01)
mt = mt.filter_rows(mt.variant_qc.p_value_hwe > 1e-6)
print('Samples: %d Variants: %d' % (mt.count_cols(), mt.count_rows()))
```
These filters removed about 15% of sites (we started with a bit over 10,000). This is _NOT_ representative of most sequencing datasets! We have already downsampled the full thousand genomes dataset to include more common variants than we'd expect by chance.
In Hail, the association tests accept column fields for the sample phenotype and covariates. Since we've already got our phenotype of interest (caffeine consumption) in the dataset, we are good to go:
```
gwas = hl.linear_regression_rows(y=mt.pheno.CaffeineConsumption,
x=mt.GT.n_alt_alleles(),
covariates=[1.0])
gwas.row.describe()
```
Looking at the bottom of the above printout, you can see the linear regression adds new row fields for the beta, standard error, t-statistic, and p-value.
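As a quick aside (not in the original tutorial text), the strongest associations can be listed by sorting the results table on the p-value:
```
# show the ten most significant variants
gwas.order_by(gwas.p_value).show(10)
```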
Hail makes it easy to visualize results! Let's make a [Manhattan plot](https://en.wikipedia.org/wiki/Manhattan_plot):
```
p = hl.plot.manhattan(gwas.p_value)
show(p)
```
This doesn't look like much of a skyline. Let's check whether our GWAS was well controlled using a [Q-Q (quantile-quantile) plot](https://en.wikipedia.org/wiki/Q–Q_plot).
```
p = hl.plot.qq(gwas.p_value)
show(p)
```
### Confounded!
The observed p-values drift away from the expectation immediately. Either every SNP in our dataset is causally linked to caffeine consumption (unlikely), or there's a confounder.
We didn't tell you, but sample ancestry was actually used to simulate this phenotype. This leads to a [stratified](https://en.wikipedia.org/wiki/Population_stratification) distribution of the phenotype. The solution is to include ancestry as a covariate in our regression.
The [linear_regression_rows](https://hail.is/docs/0.2/methods/stats.html#hail.methods.linear_regression_rows) function can also take column fields to use as covariates. We already annotated our samples with reported ancestry, but it is good to be skeptical of these labels due to human error. Genomes don't have that problem! Instead of using reported ancestry, we will use genetic ancestry by including computed principal components in our model.
The [pca](https://hail.is/docs/0.2/methods/stats.html#hail.methods.pca) function produces eigenvalues as a list and sample PCs as a Table, and can also produce variant loadings when asked. The [hwe_normalized_pca](https://hail.is/docs/0.2/methods/genetics.html#hail.methods.hwe_normalized_pca) function does the same, using HWE-normalized genotypes for the PCA.
```
eigenvalues, pcs, _ = hl.hwe_normalized_pca(mt.GT)
pprint(eigenvalues)
pcs.show(5, width=100)
```
Now that we've got principal components per sample, we may as well plot them! Human history exerts a strong effect in genetic datasets. Even with a 50MB sequencing dataset, we can recover the major human populations.
```
mt = mt.annotate_cols(scores = pcs[mt.s].scores)
p = hl.plot.scatter(mt.scores[0],
mt.scores[1],
label=mt.pheno.SuperPopulation,
title='PCA', xlabel='PC1', ylabel='PC2')
show(p)
```
Now we can rerun our linear regression, controlling for sample sex and the first few principal components. We'll again use the number of alternate alleles as the input variable, as before; the same could be done with the genotype dosage derived from the PL field.
```
gwas = hl.linear_regression_rows(
y=mt.pheno.CaffeineConsumption,
x=mt.GT.n_alt_alleles(),
covariates=[1.0, mt.pheno.isFemale, mt.scores[0], mt.scores[1], mt.scores[2]])
```
We'll first make a Q-Q plot to assess inflation...
```
p = hl.plot.qq(gwas.p_value)
show(p)
```
That's more like it! This shape is indicative of a well-controlled (but not especially well-powered) study. And now for the Manhattan plot:
```
p = hl.plot.manhattan(gwas.p_value)
show(p)
```
We have found a caffeine consumption locus! Now simply apply Hail's Nature paper function to publish the result.
Just kidding, that function won't land until Hail 1.0!
### Rare variant analysis
Here we'll demonstrate how one can use the expression language to group and count by any arbitrary properties in row and column fields. Hail also implements the sequence kernel association test (SKAT).
```
entries = mt.entries()
results = (entries.group_by(pop = entries.pheno.SuperPopulation, chromosome = entries.locus.contig)
.aggregate(n_het = hl.agg.count_where(entries.GT.is_het())))
results.show()
```
What if we want to group by minor allele frequency bin and hair color, and calculate the mean GQ?
```
entries = entries.annotate(maf_bin = hl.cond(entries.info.AF[0]<0.01, "< 1%",
hl.cond(entries.info.AF[0]<0.05, "1%-5%", ">5%")))
results2 = (entries.group_by(af_bin = entries.maf_bin, purple_hair = entries.pheno.PurpleHair)
.aggregate(mean_gq = hl.agg.stats(entries.GQ).mean,
mean_dp = hl.agg.stats(entries.DP).mean))
results2.show()
```
We've shown that it's easy to aggregate by a couple of arbitrary statistics. These specific examples may not provide especially useful pieces of information, but the same pattern can be used to detect effects of rare variation, as the sketch after this list illustrates:
- Count the number of heterozygous genotypes per gene by functional category (synonymous, missense, or loss-of-function) to estimate per-gene functional constraint
- Count the number of singleton loss-of-function mutations per gene in cases and controls to detect genes involved in disease
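Neither a gene annotation nor a functional-consequence annotation is present in this tutorial's dataset, so the sketch below is purely illustrative; it assumes hypothetical row fields `gene` and `consequence` have been added to `mt`, and otherwise mirrors the aggregation pattern shown above:
```
# hypothetical sketch: requires row annotations mt.gene and mt.consequence, which this dataset does not have
annotated_entries = mt.entries()
per_gene = (annotated_entries.group_by(gene=annotated_entries.gene, consequence=annotated_entries.consequence)
            .aggregate(n_het=hl.agg.count_where(annotated_entries.GT.is_het())))
per_gene.show()
```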
### Epilogue
Congrats! You've reached the end of the first tutorial. To learn more about Hail's API and functionality, take a look at the other tutorials. You can check out the [Python API](https://hail.is/docs/0.2/api.html#python-api) for documentation on additional Hail functions. If you use Hail for your own science, we'd love to hear from you on [Zulip chat](https://hail.zulipchat.com) or the [discussion forum](http://discuss.hail.is).
For reference, here's the full workflow to all tutorial endpoints combined into one cell.
```
table = hl.import_table('data/1kg_annotations.txt', impute=True).key_by('Sample')
mt = hl.read_matrix_table('data/1kg.mt')
mt = mt.annotate_cols(pheno = table[mt.s])
mt = hl.sample_qc(mt)
mt = mt.filter_cols((mt.sample_qc.dp_stats.mean >= 4) & (mt.sample_qc.call_rate >= 0.97))
ab = mt.AD[1] / hl.sum(mt.AD)
filter_condition_ab = ((mt.GT.is_hom_ref() & (ab <= 0.1)) |
(mt.GT.is_het() & (ab >= 0.25) & (ab <= 0.75)) |
(mt.GT.is_hom_var() & (ab >= 0.9)))
mt = mt.filter_entries(filter_condition_ab)
mt = hl.variant_qc(mt)
mt = mt.filter_rows(mt.variant_qc.AF[1] > 0.01)
eigenvalues, pcs, _ = hl.hwe_normalized_pca(mt.GT)
mt = mt.annotate_cols(scores = pcs[mt.s].scores)
gwas = hl.linear_regression_rows(
y=mt.pheno.CaffeineConsumption,
x=mt.GT.n_alt_alleles(),
covariates=[1.0, mt.pheno.isFemale, mt.scores[0], mt.scores[1], mt.scores[2]])
```
# Lists - `list`
Lists are a data structure (a container, a repository) for data of different types.
The information (data) is organized following a sequential scheme.
1, 2, 3, 4, 5 -> `[1, 2, 3, 4, 5]`
```
python = 'Python'
javascript = 'JavaScript'
cpp = 'C++'
java = 'Java'
php = 'PHP'
python
javascript
cpp
java
php
lenguajes = ['Python', 'JavaScript', 'C++', 'Java', 'PHP']
lenguajes
type(lenguajes)
```
The built-in function `len()` lets us obtain the number of elements in a list:
```
len(lenguajes)
```
A list can be created from the `list()` class:
```
numeros = [2, 3, 5, 7, 11]
numeros = list([2, 3, 5, 7, 11])
numeros
numeros = list((2, 3, 5, 7, 11))
numeros
type(numeros)
```
We can create an empty list and, in later statements, add elements as needed.
```
numeros = list()
len(numeros)
numeros = []
numeros
len(numeros)
numeros.append(2)
numeros
len(numeros)
numeros.append(3)
numeros.append(5)
numeros
len(numeros)
numeros.append(7)
numeros.append(11)
numeros
len(numeros)
numeros.append(17)
numeros
```
Using the `list.insert(index, item)` function we can add the element `item` at a specific position (`index`).
```
numeros.insert(5, 13)
numeros
len(numeros)
```
Using the `list.extend(iterable)` function we append several elements to the end of the list. The `iterable` argument can be a list, a tuple, a string, or any object that represents a sequence.
```
otros_primos = [19, 23, 29, 31]
otros_primos
numeros.extend(otros_primos)
numeros
len(numeros)
```
## Accessing data elements of a list
```
numeros
numeros[0]
numeros[1]
numeros[2]
numeros[-1]
numeros[-2]
numeros[-3]
```
### Accessing several elements of a list:
```
numeros[1:4]
type(numeros[1:4])
numeros[0:4]
numeros[:4]
numeros[4:-1]
numeros[4:]
lenguajes
lenguajes[0]
lenguajes[-1]
lenguajes[:2]
```
## Updating the values of a list:
```
lenguajes
lenguajes[0] = 'pyton'
lenguajes
lenguajes[0] = 'Python'
lenguajes
numeros
numeros.append(32)
numeros
numeros[-1] = 37
numeros
# numeros[20] = 101 # IndexError: the position does not exist in the list.
# numeros[-50] = 100 # IndexError: the position does not exist in the list.
```
## Membership operator - `in`
It makes it easy to check whether an element exists in a list.
```
numeros
11 in numeros
6 in numeros
numero = 11
if numero in numeros:
print(f'El valor {numero} está presente en la lista numeros.')
numero = 59
if numero in numeros:
print(f'El valor {numero} está presente en la lista numeros.')
else:
print(f'El valor {numero} no está presente en la lista numeros.')
numero = 18
if numero not in numeros:
print(f'El valor {numero} no está presente en la lista numeros.')
```
## Mutability
The contents of a list can change at runtime.
```
numeros
primos = numeros
primos
numeros.append(41)
numeros
primos
primos.append(43)
primos
numeros
```
Important note: We can create a list from the `list` class.
```
numeros_primos = list(primos)
numeros_primos
numeros_primos.append(47)
numeros_primos
primos
numeros
```
**Important note**: A copy of a list can be created using the `list` class or using slicing notation.
```
lista_primos = numeros[:]
lista_primos
lista_primos.append(47)
lista_primos
numeros
```
### Checking the contents and references of `list` objects:
```
numeros == primos
numeros_primos == lista_primos
numeros
numeros.append(47)
numeros
numeros_primos == numeros
numeros_primos is numeros
primos is numeros
```
## Functions (methods) of lists (`list`)
### The `list.remove(element)` function
It makes it easy to remove a specific element from a list.
```
lenguajes
if 'PHP' in lenguajes:
lenguajes.remove('PHP')
print('Se ha eliminado el elemento "PHP" de la lista.')
else:
print('El elemento "PHP" no se encuentra en la lista.')
lenguajes
if 'PHP' in lenguajes:
lenguajes.remove('PHP')
print('Se ha eliminado el elemento "PHP" de la lista.')
else:
print('El elemento "PHP" no se encuentra en la lista.')
lenguajes
```
### The `list.sort()` function
It sorts the list. When this operation is performed, the original list is modified in place.
```
lenguajes.sort()
lenguajes
```
### The `sorted()` function
This is a built-in Python function; it is not a method of `list` objects.
It generates and returns a new list.
```
impares = [7, 1, 13, 5, 3]
impares
sorted(impares)
impares
sorted(impares, reverse=True)
```
### The `list.reverse()` function
It reverses the contents of a list.
```
lenguajes
lenguajes.reverse()
lenguajes
```
### The `pop()` function
It removes an element at a specific position (index).
The removed (extracted) value is returned by the function.
```
lenguajes
lenguajes.append('PHP')
lenguajes
lenguajes.insert(1, 'Go')
lenguajes
len(lenguajes)
lenguaje = lenguajes.pop(-2)
lenguaje
lenguajes
len(lenguajes)
lenguaje = lenguajes.pop(1)
lenguaje
lenguajes
lenguaje = lenguajes.pop()
lenguaje
```
Question: What happens if we pass a nonexistent index to the `pop()` function?
```
try:
lenguajes.pop(20)
except:
print('No existe el índice 20.')
```
Answer: If the index does not exist in the list, an `IndexError` is raised.
```
try:
lenguajes.pop(-10)
except:
print('No existe el índice -10.')
```
### The `index(element)` function
Returns the first index of `element`.
If that element is not found, a `ValueError` is raised.
```
enteros = [3, 2, 5, 3, 7, 4, 4, 8, 2, 5]
enteros
enteros.index(3)
enteros.index(3, 1)
enteros.index(5, 4)
try:
enteros.index(10)
except ValueError as e:
print('Mensaje:', e)
```
### The `copy()` function
It generates a copy of the elements of the list.
This function is equivalent to writing `lista[:]`.
```
copia_enteros = enteros.copy()
enteros
copia_enteros
enteros == copia_enteros
copia_enteros is enteros
```
### The `count(element)` function
Counts the number of occurrences (repetitions) of an element in a list.
```
enteros
enteros.count(3)
enteros.count(7)
enteros.count(10)
```
### The `clear()` function
Removes all elements from a list, leaving it empty.
```
enteros
len(enteros)
enteros.clear()
enteros
len(enteros)
```
Alternatively, the expression `del lista[:]` can be used to empty (delete) the entire contents of a list.
```
copia_enteros
len(copia_enteros)
del copia_enteros[:]
copia_enteros
len(copia_enteros)
```
```
import pandas as pd
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
# Build a dataframe with your connections
bess_association= pd.read_csv("Association_data-only-para-text.csv",engine='python')
bess_association.item_A = bess_association.item_A.str.replace("xxspacexx"," ").str.replace("xxeosxx",":").str.replace("xyz","-")
bess_association.item_B = bess_association.item_B.str.replace("xxspacexx"," ").str.replace("xxeosxx",":").str.replace("xyz","-")
```
### Selecting top associations for a persona
```
persona = 'victoria'
bess_persona = bess_association[((bess_association.item_A.str.contains(persona)) |
(bess_association.item_B.str.contains(persona))) & (bess_association.freqAB > 5)]
stage_of_life = False
if(stage_of_life):
flag = bess_persona.item_A.str.len() < bess_persona.item_B.str.len()
bess_persona.loc[flag,['item_A','item_B']] = bess_persona.loc[flag,['item_B','item_A']]
bess_persona_groups = bess_persona.groupby('item_A').head(10).sort_values(['item_A','lift'])
bess_persona_groups['stage_of_life'] = bess_persona_groups.item_A.str.split(":").apply(lambda x: x[1]).str.split("-").apply(lambda x:x[-1])
bess_persona_groups['persona'] = bess_persona_groups.item_A.str.split(":").apply(lambda x: x[0])
#bess_persona_groups.item_A.str.split(":").apply(lambda x: x[1])
else:
bess_persona_groups = bess_persona
bess_persona.sort_values(['lift'],ascending=False).head()
```
#### Creating Data Frames for graphs
```
bess_persona
if stage_of_life:
df = pd.DataFrame({ 'from': bess_persona_groups.persona, 'to':bess_persona_groups.item_B,\
'value': bess_persona_groups.stage_of_life})
else:
df = pd.DataFrame({ 'from': bess_persona_groups.item_A, 'to':bess_persona_groups.item_B})
file_name = persona+"_SOF_graph.csv"
df.to_csv(file_name,index=False)
df = pd.read_csv(file_name)
file_name
if stage_of_life:
carac_ids = pd.concat([df['from'].drop_duplicates(), df.value])
carac = pd.DataFrame({ 'ID':carac_ids, 'myvalue':pd.Categorical(carac_ids).codes})
df['from'] = df['from'].str.capitalize()
df['to'] = df['to'].str.capitalize()
G=nx.from_pandas_edgelist(df, 'from', 'to',create_using=nx.Graph())
# Graph with Custom nodes:
fig=plt.figure(figsize=(18, 16), dpi= 80, facecolor='w', edgecolor='k')
#nx.draw(G, with_labels=True, node_size=2000, node_color="skyblue", node_shape="s", alpha=0.5, linewidths=50)
#node_color='skyblue'
#node_color= carac['myvalue'][:-1,]
nx.draw(G, with_labels=True, node_size=11000,edge_color= 'lightblue',
node_color='skyblue', width=4.0, edge_cmap=plt.cm.Set2, alpha=0.7,cmap=plt.cm.Set2,font_size = 20)
plt.show()
```
# --- Day 3: Crossed Wires ---
Specifically, two wires are connected to a central port and extend outward on a grid. You trace the path each wire takes as it leaves the central port, one wire per line of text (your puzzle input).
The wires twist and turn, but the two wires occasionally cross paths. To fix the circuit, you need to find the intersection point closest to the central port. Because the wires are on a grid, use the Manhattan distance for this measurement. While the wires do technically cross right at the central port where they both start, this point does not count, nor does a wire count as crossing with itself.
```
from pathlib import Path
def travel(points, instruction):
direction = instruction[0]
distance = int(instruction[1:])
x = points[-1][0]
y = points[-1][1]
if direction == "R":
new_points = [(x + t, y) for t in range(1, distance + 1)]
if direction == "L":
new_points = [(x - t, y) for t in range(1, distance + 1)]
if direction == "U":
new_points = [(x, y + t) for t in range(1, distance + 1)]
if direction == "D":
new_points = [(x, y - t) for t in range(1, distance + 1)]
points += new_points
return points
def get_points(start, instructions):
points = start
for instruction in instructions:
points = travel(points, instruction)
return points
def manhattan(p1, p2):
return sum([abs(a - b) for a, b in zip(p1, p2)])
def get_shortest_distance(paths):
line_one_instructions = paths[0].split(",")
line_two_instructions = paths[1].split(",")
line_one_points = get_points([(0, 0)], line_one_instructions)
line_two_points = get_points([(0, 0)], line_two_instructions)
intersections = list(set(line_one_points).intersection(set(line_two_points)))
distances = [manhattan((0, 0), point) for point in intersections]
distances_greater_than_zero = [distance for distance in distances if distance != 0]
return min(distances_greater_than_zero)
test_paths_one = [
"R75,D30,R83,U83,L12,D49,R71,U7,L72",
"U62,R66,U55,R34,D71,R55,D58,R83",
]
test_paths_two = [
"R98,U47,R26,D63,R33,U87,L62,D20,R33,U53,R51",
"U98,R91,D20,R16,D67,R40,U7,R15,U6,R7",
]
paths = Path("input").read_text().splitlines()
# Should be 159
get_shortest_distance(test_paths_one)
# Should be 135
get_shortest_distance(test_paths_two)
get_shortest_distance(paths)
```
## --- Part Two ---
It turns out that this circuit is very timing-sensitive; you actually need to minimize the signal delay.
To do this, calculate the number of steps each wire takes to reach each intersection; choose the intersection where the sum of both wires' steps is lowest. If a wire visits a position on the grid multiple times, use the steps value from the first time it visits that position when calculating the total value of a specific intersection.
The number of steps a wire takes is the total number of grid squares the wire has entered to get to that location, including the intersection being considered.
```
def get_fewest_steps(paths):
line_one_instructions = paths[0].split(",")
line_two_instructions = paths[1].split(",")
line_one_points = get_points([(0, 0)], line_one_instructions)
line_two_points = get_points([(0, 0)], line_two_instructions)
intersections = list(set(line_one_points).intersection(set(line_two_points)))
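    # Each path list starts at the origin, so a point's index equals the number of steps
    # taken to reach it, and list.index() returns the first visit to that point.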
steps_dict = {
line_one_points.index(intersection)
+ (line_two_points).index(intersection): intersection
for intersection in intersections
if intersection != (0, 0)
}
return min(steps_dict.keys())
# Should be 610
get_fewest_steps(test_paths_one)
# Should be 410
get_fewest_steps(test_paths_two)
get_fewest_steps(paths)
```
Generate initial word embeddings for headlines and descriptions.
The embedding is limited to a fixed vocabulary size (`vocab_size`), but
a vocabulary of all the words that appeared in the data is built.
```
FN = 'vocabulary-embedding'
seed=42
vocab_size = 40000
embedding_dim = 100
lower = False # dont lower case the text
```
# read tokenized headlines and descriptions
```
import cPickle as pickle
FN0 = 'tokens' # this is the name of the data file which I assume you already have
with open('data/%s.pkl'%FN0, 'rb') as fp:
heads, desc, keywords = pickle.load(fp) # keywords are not used in this project
if lower:
heads = [h.lower() for h in heads]
if lower:
desc = [h.lower() for h in desc]
i=0
heads[i]
desc[i]
keywords[i]
len(heads),len(set(heads))
len(desc),len(set(desc))
```
# build vocabulary
```
from collections import Counter
from itertools import chain
def get_vocab(lst):
vocabcount = Counter(w for txt in lst for w in txt.split())
vocab = map(lambda x: x[0], sorted(vocabcount.items(), key=lambda x: -x[1]))
return vocab, vocabcount
vocab, vocabcount = get_vocab(heads+desc)
```
most popular tokens
```
print vocab[:50]
print '...',len(vocab)
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot([vocabcount[w] for w in vocab]);
plt.gca().set_xscale("log", nonposx='clip')
plt.gca().set_yscale("log", nonposy='clip')
plt.title('word distribution in headlines and descriptions')
plt.xlabel('rank')
plt.ylabel('total appearances');
```
always nice to see [Zipf's law](https://en.wikipedia.org/wiki/Zipf%27s_law)
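If you want a number to go with the plot, the Zipf exponent can be estimated with a least-squares fit in log-log space. A rough sketch (the 10,000-rank cutoff for fitting only the head of the distribution is an arbitrary choice):
```
import numpy as np
counts = np.array([vocabcount[w] for w in vocab], dtype=float)
ranks = np.arange(1, len(counts) + 1)
n = min(10000, len(counts))  # fit only the head of the distribution
slope, intercept = np.polyfit(np.log(ranks[:n]), np.log(counts[:n]), 1)
print 'estimated Zipf exponent:', -slope
```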
# Index words
```
empty = 0 # RNN mask of no data
eos = 1 # end of sentence
start_idx = eos+1 # first real word
def get_idx(vocab, vocabcount):
word2idx = dict((word, idx+start_idx) for idx,word in enumerate(vocab))
word2idx['<empty>'] = empty
word2idx['<eos>'] = eos
idx2word = dict((idx,word) for word,idx in word2idx.iteritems())
return word2idx, idx2word
word2idx, idx2word = get_idx(vocab, vocabcount)
```
# Word Embedding
## read GloVe
```
fname = 'glove.6B.%dd.txt'%embedding_dim
import os
datadir_base = os.path.expanduser(os.path.join('~', '.keras'))
if not os.access(datadir_base, os.W_OK):
datadir_base = os.path.join('/tmp', '.keras')
datadir = os.path.join(datadir_base, 'datasets')
glove_name = os.path.join(datadir, fname)
if not os.path.exists(glove_name):
    path = 'glove.6B.zip'
    from keras.utils.data_utils import get_file  # get_file was not imported above
    path = get_file(path, origin="http://nlp.stanford.edu/data/glove.6B.zip")
!unzip {datadir}/{path}
glove_n_symbols = !wc -l {glove_name}
glove_n_symbols = int(glove_n_symbols[0].split()[0])
glove_n_symbols
import numpy as np
glove_index_dict = {}
glove_embedding_weights = np.empty((glove_n_symbols, embedding_dim))
globale_scale=.1
with open(glove_name, 'r') as fp:
i = 0
for l in fp:
l = l.strip().split()
w = l[0]
glove_index_dict[w] = i
glove_embedding_weights[i,:] = map(float,l[1:])
i += 1
glove_embedding_weights *= globale_scale
glove_embedding_weights.std()
for w,i in glove_index_dict.iteritems():
w = w.lower()
if w not in glove_index_dict:
glove_index_dict[w] = i
```
## embedding matrix
use GloVe to initialize embedding matrix
```
import numpy as np
# generate random embedding with same scale as glove
np.random.seed(seed)
shape = (vocab_size, embedding_dim)
scale = glove_embedding_weights.std()*np.sqrt(12)/2 # uniform and not normal
embedding = np.random.uniform(low=-scale, high=scale, size=shape)
print 'random-embedding/glove scale', scale, 'std', embedding.std()
# copy from glove weights of words that appear in our short vocabulary (idx2word)
c = 0
for i in range(vocab_size):
w = idx2word[i]
g = glove_index_dict.get(w, glove_index_dict.get(w.lower()))
    if g is None and w.startswith('#'): # glove has no hashtags (I think...)
w = w[1:]
g = glove_index_dict.get(w, glove_index_dict.get(w.lower()))
if g is not None:
embedding[i,:] = glove_embedding_weights[g,:]
c+=1
print 'number of tokens, in small vocab, found in glove and copied to embedding', c,c/float(vocab_size)
```
Lots of words in the full vocabulary (`word2idx`) are outside `vocab_size`.
Build an alternative mapping that sends them to their closest match in GloVe, but only if the match
is good enough (cosine similarity above `glove_thr`).
```
glove_thr = 0.5
word2glove = {}
for w in word2idx:
if w in glove_index_dict:
g = w
elif w.lower() in glove_index_dict:
g = w.lower()
elif w.startswith('#') and w[1:] in glove_index_dict:
g = w[1:]
elif w.startswith('#') and w[1:].lower() in glove_index_dict:
g = w[1:].lower()
else:
continue
word2glove[w] = g
```
For every word outside the embedding matrix, find the closest word inside the embedding matrix,
using the cosine similarity of the GloVe vectors.
Allow the last `nb_unknown_words` words inside the embedding matrix to also be considered outside.
Don't accept matches with cosine similarity below `glove_thr`.
```
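# L2-normalize each embedding row so the dot products computed below are cosine similarities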
normed_embedding = embedding/np.array([np.sqrt(np.dot(gweight,gweight)) for gweight in embedding])[:,None]
nb_unknown_words = 100
glove_match = []
for w,idx in word2idx.iteritems():
if idx >= vocab_size-nb_unknown_words and w.isalpha() and w in word2glove:
gidx = glove_index_dict[word2glove[w]]
gweight = glove_embedding_weights[gidx,:].copy()
# find row in embedding that has the highest cos score with gweight
gweight /= np.sqrt(np.dot(gweight,gweight))
score = np.dot(normed_embedding[:vocab_size-nb_unknown_words], gweight)
while True:
embedding_idx = score.argmax()
s = score[embedding_idx]
if s < glove_thr:
break
if idx2word[embedding_idx] in word2glove :
glove_match.append((w, embedding_idx, s))
break
score[embedding_idx] = -1
glove_match.sort(key = lambda x: -x[2])
print '# of glove substitutes found', len(glove_match)
```
manually check that the worst substitutions we are going to do are good enough
```
for orig, sub, score in glove_match[-10:]:
print score, orig,'=>', idx2word[sub]
```
Build a lookup table from the index of outside words to the index of inside words.
```
glove_idx2idx = dict((word2idx[w],embedding_idx) for w, embedding_idx, _ in glove_match)
```
# Data
```
Y = [[word2idx[token] for token in headline.split()] for headline in heads]
len(Y)
plt.hist(map(len,Y),bins=50);
X = [[word2idx[token] for token in d.split()] for d in desc]
len(X)
plt.hist(map(len,X),bins=50);
import cPickle as pickle
with open('data/%s.pkl'%FN,'wb') as fp:
pickle.dump((embedding, idx2word, word2idx, glove_idx2idx),fp,-1)
import cPickle as pickle
with open('data/%s.data.pkl'%FN,'wb') as fp:
pickle.dump((X,Y),fp,-1)
```
# Job Shop Scheduling
**Implementation Note:** The following cell installs Pyomo along with the GLPK and CBC solvers and specifies the solver to be used in the subsequent calculations; CBC is selected by default here, with the `glpk` and `gurobi` lines left commented out. Some of these problems can become quite large, so a faster commercial solver such as `gurobi` is preferable when available: with the open-source solvers the calculations may take much longer, and the benchmark problem at the end may not solve at all. If you do have the `gurobi` solver, uncomment its line and edit the location of the executable to match the location on your computer.
```
%%capture
!pip install -q pyomo
!apt-get install -y -qq glpk-utils
!apt-get install -y -qq coinor-cbc
from pyomo.environ import *
from pyomo.gdp import *
#solver = SolverFactory('glpk')
solver = SolverFactory('cbc', executable='/usr/bin/cbc')
#solver = SolverFactory('gurobi', executable='/usr/local/bin/gurobi.sh')
```
## Contents
* [Background](#Background)
* [Job Shop Example](#JobShopExample)
* [Task Decomposition](#TaskDecomposition)
* [Model Formulation](#ModelFormulation)
* [Pyomo Implementation](#PyomoImplementation)
* [Displaying a Solution](#DisplayingSolution)
* [Visualizing Results using Gantt Charts](#Visualization)
* [Application to Scheduling of Batch Processes](#BatchProcesses)
* [Single Product Strategies](#SingleProduct)
* [Overlapping Tasks](#OverlappingTasks)
* [Unit Cleanout](#UnitCleanout)
* [Zero-Wait Policy](#ZeroWait)
* [Benchmark Problem LA19](#Benchmark)
<a id="Background"></a>
## Background
A job shop consists of a set of distinct machines that process jobs. Each job is a series of tasks that require use of particular machines for known durations, and which must be completed in specified order. The job shop scheduling problem is to schedule the jobs on the machines to minimize the time necessary to process all jobs (i.e, the makespan) or some other metric of productivity. Job shop scheduling is one of the classic problems in Operations Research.
Data consists of two tables. The first table is a decomposition of the jobs into a series of tasks. Each task lists a job name, the name of the required machine, and the task duration. The second table lists task pairs where the first task must be completed before the second task can be started. This formulation is quite general, but can also specify situations with no feasible solutions.
<a id="JobShopExample"></a>
## Job Shop Example
The following example of a job shop is from Christelle Gueret, Christian Prins, Marc Sevaux, "Applications of Optimization with Xpress-MP," Dash Optimization, 2000.
In this example, there are three printed paper products that must pass through color printing presses in a particular order. The given data consists of a flowsheet showing the order in which each job passes through the color presses

and a table of data showing, in minutes, the amount of time each job requires on each machine.
| Machine | Color | Paper 1 | Paper 2 | Paper 3 |
| :-----: | :---: | :-----: | :-----: | :-----: |
| 1 | Blue | 45 | 20 | 12 |
| 2 | Green | - | 10 | 17 |
| 3 | Yellow| 10 | 34 | 28 |
What is the minimum amount of time (i.e, what is the makespan) for this set of jobs?
<a id="TaskDecomposition"></a>
## Task Decomposition
The first step in the analysis is to decompose the process into a series of tasks. Each task is a (job,machine) pair. Some tasks cannot start until a prerequisite task is completed.
| Task (Job,Machine) | Duration | Prerequisite Task |
| :----------------: | :------: | :---------------: |
| (Paper 1, Blue) | 45 | - |
| (Paper 1, Yellow) | 10 | (Paper 1,Blue) |
| (Paper 2, Blue) | 20 | (Paper 2, Green) |
| (Paper 2, Green) | 10 | - |
| (Paper 2, Yellow) | 34 | (Paper 2, Blue) |
| (Paper 3, Blue) | 12 | (Paper 3, Yellow) |
| (Paper 3, Green) | 17 | (Paper 3, Blue) |
| (Paper 3, Yellow) | 28 | - |
We convert this to a JSON-style representation where tasks are denoted by (Job,Machine) tuples in Python. The task data is stored in a Python dictionary indexed by (Job,Machine) tuples. Each entry is a dictionary with a duration ('dur') and the (Job,Machine) pair of any prerequisite task ('prec').
```
TASKS = {
('Paper_1','Blue') : {'dur': 45, 'prec': None},
('Paper_1','Yellow') : {'dur': 10, 'prec': ('Paper_1','Blue')},
('Paper_2','Blue') : {'dur': 20, 'prec': ('Paper_2','Green')},
('Paper_2','Green') : {'dur': 10, 'prec': None},
('Paper_2','Yellow') : {'dur': 34, 'prec': ('Paper_2','Blue')},
('Paper_3','Blue') : {'dur': 12, 'prec': ('Paper_3','Yellow')},
('Paper_3','Green') : {'dur': 17, 'prec': ('Paper_3','Blue')},
('Paper_3','Yellow') : {'dur': 28, 'prec': None},
}
```
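Before building the optimization model, two easy lower bounds on the makespan can be read directly from this dictionary: since each job's tasks here form a chain, no schedule can finish before the longest job is done, and since each machine processes one task at a time, no schedule can finish before the most heavily loaded machine has worked through all of its tasks. A small sketch:
```
job_load = {}
machine_load = {}
for (job, machine), task in TASKS.items():
    job_load[job] = job_load.get(job, 0) + task['dur']
    machine_load[machine] = machine_load.get(machine, 0) + task['dur']
print("Lower bound from the longest job:    ", max(job_load.values()))
print("Lower bound from the busiest machine:", max(machine_load.values()))
```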
<a id="ModelFormulation"></a>
## Model Formulation
Each task is represented as an ordered pair $(j,m)$ where $j$ is a job, and $m$ is a machine.
| Parameter | Description |
| :-------- | :-----------|
| $\text{dur}_{j,m}$ | Duration of task $(j,m)$ |
| $\text{prec}_{j,m}$ | A task $(k,n) = \text{Prec}_{j,m}$ that must be completed before task $(j,m)$|
| Decision Variables | Description |
| :-------- | :-----------|
| $\text{makespan}$ | Completion of all jobs |
| $\text{start}_{j,m}$ | Start time for task $(j,m)$ |
| $y_{j,k,m}$ | boolean variable for tasks $(j,m)$ and $(k,m)$ on machine $m$ where $j < k$ |
Upper and lower bounds on the start and completion of task $(j,m)$
\begin{align}
\text{start}_{j,m} & \geq 0\\
\text{start}_{j,m}+\text{Dur}_{j,m} & \leq \text{makespan}
\end{align}
Satisfying prerequisite tasks
\begin{align}
\text{start}_{k,n}+\text{Dur}_{k,n}\leq\text{start}_{j,m}\ \ \ \ \text{for } (k,n) =\text{Prec}_{j,m}
\end{align}
Disjunctive Constraints
If $M$ is big enough, then satisfying
\begin{align}
\text{start}_{j,m}+\text{Dur}_{j,m} & \leq \text{start}_{k,m}+M(1 - y_{j,k,m})\\
\text{start}_{k,m}+\text{Dur}_{k,m} & \leq \text{start}_{j,m}+My_{j,k,m}
\end{align}
avoids conflicts for use of the same machine.
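In the Pyomo implementation below, these disjunctions are written with the `pyomo.gdp` extension and then reformulated into an ordinary mixed-integer model by a transformation. The code uses the hull reformulation (`'gdp.chull'`, renamed `'gdp.hull'` in newer Pyomo releases); the big-M form written above corresponds to the `'gdp.bigm'` transformation, which can be swapped in as a one-line change inside `JobShop`:
```
# Drop-in alternative to the hull reformulation used in JobShop() below:
TransformationFactory('gdp.bigm').apply_to(model)
```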
<a id="PyomoImplementation"></a>
## Pyomo Implementation
The job shop scheduling problem is implemented below in Pyomo. The implementation consists of a function `JobShop(TASKS)` that accepts a dictionary of tasks and returns a list of records (one per task, easily converted to a pandas dataframe) describing an optimal schedule. The solver selected at the top of the notebook is used for the optimization.
```
def JobShop(TASKS):
model = ConcreteModel()
model.TASKS = Set(initialize=TASKS.keys(), dimen=2)
model.JOBS = Set(initialize=set([j for (j,m) in TASKS.keys()]))
model.MACHINES = Set(initialize=set([m for (j,m) in TASKS.keys()]))
model.TASKORDER = Set(initialize = model.TASKS * model.TASKS, dimen=4,
filter = lambda model,j,m,k,n: (k,n) == TASKS[(j,m)]['prec'])
model.DISJUNCTIONS = Set(initialize=model.JOBS * model.JOBS * model.MACHINES, dimen=3,
filter = lambda model,j,k,m: j < k and (j,m) in model.TASKS and (k,m) in model.TASKS)
t_max = sum([TASKS[(j,m)]['dur'] for (j,m) in TASKS.keys()])
model.makespan = Var(bounds=(0, t_max))
model.start = Var(model.TASKS, bounds=(0, t_max))
model.obj = Objective(expr = model.makespan, sense = minimize)
model.fini = Constraint(model.TASKS, rule=lambda model,j,m:
model.start[j,m] + TASKS[(j,m)]['dur'] <= model.makespan)
model.prec = Constraint(model.TASKORDER, rule=lambda model,j,m,k,n:
model.start[k,n] + TASKS[(k,n)]['dur'] <= model.start[j,m])
model.disj = Disjunction(model.DISJUNCTIONS, rule=lambda model,j,k,m:
[model.start[j,m] + TASKS[(j,m)]['dur'] <= model.start[k,m],
model.start[k,m] + TASKS[(k,m)]['dur'] <= model.start[j,m]])
TransformationFactory('gdp.chull').apply_to(model)
solver.solve(model)
results = [{'Job': j,
'Machine': m,
'Start': model.start[j, m](),
'Duration': TASKS[(j, m)]['dur'],
'Finish': model.start[j, m]() + TASKS[(j, m)]['dur']}
for j,m in model.TASKS]
return results
results = JobShop(TASKS)
results
```
## Printing Schedules
```
import pandas as pd
schedule = pd.DataFrame(results)
print('\nSchedule by Job')
print(schedule.sort_values(by=['Job','Start']).set_index(['Job', 'Machine']))
print('\nSchedule by Machine')
print(schedule.sort_values(by=['Machine','Start']).set_index(['Machine', 'Job']))
```
<a id="Visualization"></a>
## Visualizing Results with Gantt Charts
```
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib as mpl
import pandas as pd
def Visualize(results):
schedule = pd.DataFrame(results)
JOBS = list(schedule['Job'].unique())
MACHINES = list(schedule['Machine'].unique())
makespan = schedule['Finish'].max()
schedule.sort_values(by=['Job','Start'])
schedule.set_index(['Job', 'Machine'], inplace=True)
plt.figure(figsize=(12, 5 + (len(JOBS)+len(MACHINES))/4))
plt.subplot(2,1,1)
jdx = 0
for j in sorted(JOBS):
jdx += 1
mdx = 0
for m in MACHINES:
mdx += 1
c = mpl.cm.Dark2.colors[mdx%7]
if (j,m) in schedule.index:
plt.plot([schedule.loc[(j,m),'Start'],schedule.loc[(j,m),'Finish']],
[jdx,jdx],color = c,alpha=1.0,lw=25,solid_capstyle='butt')
plt.text((schedule.loc[(j,m),'Start'] + schedule.loc[(j,m),'Finish'])/2.0,jdx,
m, color='white', weight='bold',
horizontalalignment='center', verticalalignment='center')
plt.ylim(0.5,jdx+0.5)
plt.title('Job Schedule')
plt.gca().set_yticks(range(1,1+len(JOBS)))
plt.gca().set_yticklabels(sorted(JOBS))
plt.plot([makespan,makespan],plt.ylim(),'r--')
plt.text(makespan,plt.ylim()[0]-0.2,str(round(makespan,2)),
horizontalalignment='center', verticalalignment='top')
plt.xlabel('Time')
plt.ylabel('Jobs')
plt.subplot(2,1,2)
mdx = 0
for m in sorted(MACHINES):
mdx += 1
jdx = 0
for j in JOBS:
jdx += 1
c = mpl.cm.Dark2.colors[jdx%7]
if (j,m) in schedule.index:
plt.plot([schedule.loc[(j,m),'Start'],schedule.loc[(j,m),'Finish']],
[mdx,mdx],color = c,alpha=1.0,lw=25,solid_capstyle='butt')
plt.text((schedule.loc[(j,m),'Start'] + schedule.loc[(j,m),'Finish'])/2.0,mdx,
j, color='white', weight='bold',
horizontalalignment='center', verticalalignment='center')
plt.ylim(0.5,mdx+0.5)
plt.title('Machine Schedule')
plt.gca().set_yticks(range(1,1+len(MACHINES)))
plt.gca().set_yticklabels(sorted(MACHINES))
plt.plot([makespan,makespan],plt.ylim(),'r--')
plt.text(makespan,plt.ylim()[0]-0.2,str(round(makespan,2)),
horizontalalignment='center', verticalalignment='top')
plt.xlabel('Time')
plt.ylabel('Machines')
plt.tight_layout()
Visualize(results)
```
<a id="BatchProcesses"></a>
## Application to Scheduling of Batch Processes
We will now turn our attention to the application of the job shop scheduling problem to the short term scheduling of batch processes. We illustrate these techniques using an example from Dunn (2013).

| Process | Mixer | Reactor | Separator | Packaging |
| :-----: | :---: | :-----: | :-------: | :-------: |
| A | 1.0 | 5.0 | 4.0 | 1.5 |
| B | - | - | 4.5 | 1.0 |
| C | - | 3.0 | 5.0 | 1.5 |
<a id="SingleProduct"></a>
## Single Product Strategies
Before going further, we create a function to streamline the generation of the TASKS dictionary.
```
def Recipe(jobs,machines,durations):
TASKS = {}
for j in jobs:
prec = (None,None)
for m,d in zip(machines,durations):
task = (j,m)
if prec == (None,None):
TASKS.update({(j,m): {'dur': d, 'prec': None}})
else:
TASKS.update({(j,m): {'dur': d, 'prec': prec}})
prec = task
return TASKS
RecipeA = Recipe('A',['Mixer','Reactor','Separator','Packaging'],[1,5,4,1.5])
RecipeB = Recipe('B',['Separator','Packaging'],[4.5,1])
RecipeC = Recipe('C',['Separator','Reactor','Packaging'],[5,3,1.5])
Visualize(JobShop(RecipeA))
Visualize(JobShop(RecipeB))
Visualize(JobShop(RecipeC))
```
<a id="OverlappingTasks"></a>
## Overlapping Tasks
Let's now consider an optimal scheduling problem where we wish to make two batches of Product A.
```
TASKS = Recipe(['A1','A2'],['Mixer','Reactor','Separator','Packaging'],[1,5,4,1.5])
results = JobShop(TASKS)
Visualize(results)
print("Makespan =", max([task['Finish'] for task in results]))
```
Earlier we found it took 11.5 hours to produce one batch of product A. As we see here, we can produce a second batch with only 5.0 additional hours because some of the tasks overlap. The overlapping of tasks is the key to gaining efficiency in batch processing facilities.
Let's next consider production of a single batch each of products A, B, and C.
```
TASKS = RecipeA
TASKS.update(RecipeB)
TASKS.update(RecipeC)
results = JobShop(TASKS)
Visualize(results)
print("Makespan =", max([task['Finish'] for task in results]))
```
The individual production of A, B, and C required 11.5, 5.5, and 9.5 hours, respectively, for a total of 25.5 hours. As we see here, by scheduling the production simultaneously, we can get all three batches done in just 15 hours.
As we see below, each additional set of three products takes an additional 13 hours. So there is considerable efficiency gained by scheduling over longer intervals whenever possible.
```
TASKS = Recipe(['A1','A2'],['Mixer','Reactor','Separator','Packaging'],[1,5,4,1.5])
TASKS.update(Recipe(['B1','B2'],['Separator','Packaging'],[4.5,1]))
TASKS.update(Recipe(['C1','C2'],['Separator','Reactor','Packaging'],[5,3,1.5]))
results = JobShop(TASKS)
Visualize(results)
print("Makespan =", max([task['Finish'] for task in results]))
```
<a id="UnitCleanout"></a>
## Unit Cleanout
A common feature in batch unit operations is a requirement that equipment be cleaned prior to reuse.
In most cases the time needed for cleanout would be equipment and product specific. But for the purposes of illustration, we implement this policy with a single non-negative parameter $t_{clean} \geq 0$ which, if specified, requires a period no less than $t_{clean}$ between the finish of one task and the start of another on every piece of equipment.
This is implemented by modifying the usual disjunctive constraints to avoid machine conflicts, i.e.,
\begin{align}
\text{start}_{j,m}+\text{Dur}_{j,m} & \leq \text{start}_{k,m}+M(1 - y_{j,k,m})\\
\text{start}_{k,m}+\text{Dur}_{k,m} & \leq \text{start}_{j,m}+My_{j,k,m}
\end{align}
to read
\begin{align}
\text{start}_{j,m}+\text{Dur}_{j,m} + t_{clean} & \leq \text{start}_{k,m}+M(1 - y_{j,k,m})\\
\text{start}_{k,m}+\text{Dur}_{k,m} + t_{clean} & \leq \text{start}_{j,m}+My_{j,k,m}
\end{align}
for sufficiently large $M$.
```
def JobShop(TASKS, tclean=0):
model = ConcreteModel()
model.TASKS = Set(initialize=TASKS.keys(), dimen=2)
model.JOBS = Set(initialize=set([j for (j,m) in TASKS.keys()]))
model.MACHINES = Set(initialize=set([m for (j,m) in TASKS.keys()]))
model.TASKORDER = Set(initialize = model.TASKS * model.TASKS, dimen=4,
filter = lambda model,j,m,k,n: (k,n) == TASKS[(j,m)]['prec'])
model.DISJUNCTIONS = Set(initialize=model.JOBS * model.JOBS * model.MACHINES, dimen=3,
filter = lambda model,j,k,m: j < k and (j,m) in model.TASKS and (k,m) in model.TASKS)
t_max = sum([TASKS[(j,m)]['dur'] for (j,m) in TASKS.keys()])
model.makespan = Var(bounds=(0, t_max))
model.start = Var(model.TASKS, bounds=(0, t_max))
model.obj = Objective(expr = model.makespan, sense = minimize)
model.fini = Constraint(model.TASKS, rule=lambda model,j,m:
model.start[j,m] + TASKS[(j,m)]['dur'] <= model.makespan)
model.prec = Constraint(model.TASKORDER, rule=lambda model,j,m,k,n:
model.start[k,n] + TASKS[(k,n)]['dur'] <= model.start[j,m])
model.disj = Disjunction(model.DISJUNCTIONS, rule=lambda model,j,k,m:
[model.start[j,m] + TASKS[(j,m)]['dur'] + tclean <= model.start[k,m],
model.start[k,m] + TASKS[(k,m)]['dur'] + tclean <= model.start[j,m]])
TransformationFactory('gdp.chull').apply_to(model)
solver.solve(model)
results = [{'Job': j,
'Machine': m,
'Start': model.start[j, m](),
'Duration': TASKS[(j, m)]['dur'],
'Finish': model.start[j, m]() + TASKS[(j, m)]['dur']}
for j,m in model.TASKS]
return results
results = JobShop(TASKS, tclean=0.5)
Visualize(results)
print("Makespan =", max([task['Finish'] for task in results]))
```
<a id="ZeroWait"></a>
## Zero Wait Policy
One of the issues in the use of job shop scheduling for batch processing is the situation where it isn't possible to store intermediate materials. If there is no way to store intermediates, either in the processing equipment or in external vessels, then a **zero-wait** policy may be appropriate.
A zero-wait policy requires subsequent processing machines to be available immediately upon completion of any task. To implement this policy, the usual precedence sequencing constraint of a job shop scheduling problem, i.e.,
\begin{align*}
\text{start}_{k,n}+\text{Dur}_{k,n} \leq \text{start}_{j,m}\ \ \ \ \text{for } (k,n) =\text{Prec}_{j,m}
\end{align*}
is changed to
\begin{align*}
\text{start}_{k,n}+\text{Dur}_{k,n} = \text{start}_{j,m}\ \ \ \ \text{for } (k,n) =\text{Prec}_{j,m}\text{ and ZW is True}
\end{align*}
if the zero-wait policy is in effect.
While this could be implemented on an equipment or product specific basis, here we add an optional ZW flag to the JobShop function that, by default, is set to False.
```
def JobShop(TASKS, tclean=0, ZW=False):
model = ConcreteModel()
model.TASKS = Set(initialize=TASKS.keys(), dimen=2)
model.JOBS = Set(initialize=set([j for (j,m) in TASKS.keys()]))
model.MACHINES = Set(initialize=set([m for (j,m) in TASKS.keys()]))
model.TASKORDER = Set(initialize = model.TASKS * model.TASKS, dimen=4,
filter = lambda model,j,m,k,n: (k,n) == TASKS[(j,m)]['prec'])
model.DISJUNCTIONS = Set(initialize=model.JOBS * model.JOBS * model.MACHINES, dimen=3,
filter = lambda model,j,k,m: j < k and (j,m) in model.TASKS and (k,m) in model.TASKS)
t_max = sum([TASKS[(j,m)]['dur'] for (j,m) in TASKS.keys()])
model.makespan = Var(bounds=(0, t_max))
model.start = Var(model.TASKS, bounds=(0, t_max))
model.obj = Objective(expr = model.makespan, sense = minimize)
model.fini = Constraint(model.TASKS, rule=lambda model,j,m:
model.start[j,m] + TASKS[(j,m)]['dur'] <= model.makespan)
if ZW:
model.prec = Constraint(model.TASKORDER, rule=lambda model,j,m,k,n:
model.start[k,n] + TASKS[(k,n)]['dur'] == model.start[j,m])
else:
model.prec = Constraint(model.TASKORDER, rule=lambda model,j,m,k,n:
model.start[k,n] + TASKS[(k,n)]['dur'] <= model.start[j,m])
model.disj = Disjunction(model.DISJUNCTIONS, rule=lambda model,j,k,m:
[model.start[j,m] + TASKS[(j,m)]['dur'] + tclean <= model.start[k,m],
model.start[k,m] + TASKS[(k,m)]['dur'] + tclean <= model.start[j,m]])
TransformationFactory('gdp.chull').apply_to(model)
solver.solve(model)
results = [{'Job': j,
'Machine': m,
'Start': model.start[j, m](),
'Duration': TASKS[(j, m)]['dur'],
'Finish': model.start[j, m]() + TASKS[(j, m)]['dur']}
for j,m in model.TASKS]
return results
results = JobShop(TASKS, tclean=0.5, ZW=True)
Visualize(results)
print("Makespan =", max([task['Finish'] for task in results]))
```
<a id="Benchmark"></a>
## Benchmark Problems
The file `jobshop1.txt` (available [here](http://people.brunel.ac.uk/~mastjjb/jeb/orlib/files/jobshop1.txt)) is a well known collection of 82 benchmark problems for job shop scheduling. The data format for each example consists of a single line for each job. The data on each line is a sequence of (machine number, time) pairs showing the order in which machines process each job.
LA19 is a benchmark problem for job shop scheduling introduced by Lawrence in 1984, with a solution presented by Cook and Applegate in 1991. The following cell may take many minutes to hours to run, depending on the choice of solver and hardware.
```
data = """
2 44 3 5 5 58 4 97 0 9 7 84 8 77 9 96 1 58 6 89
4 15 7 31 1 87 8 57 0 77 3 85 2 81 5 39 9 73 6 21
9 82 6 22 4 10 3 70 1 49 0 40 8 34 2 48 7 80 5 71
1 91 2 17 7 62 5 75 8 47 4 11 3 7 6 72 9 35 0 55
6 71 1 90 3 75 0 64 2 94 8 15 4 12 7 67 9 20 5 50
7 70 5 93 8 77 2 29 4 58 6 93 3 68 1 57 9 7 0 52
6 87 1 63 4 26 5 6 2 82 3 27 7 56 8 48 9 36 0 95
0 36 5 15 8 41 9 78 3 76 6 84 4 30 7 76 2 36 1 8
5 88 2 81 3 13 6 82 4 54 7 13 8 29 9 40 1 78 0 75
9 88 4 54 6 64 7 32 0 52 2 6 8 54 5 82 3 6 1 26
"""
TASKS = {}
prec = ''
lines = data.splitlines()
job= 0
for line in lines[1:]:
j = "J{0:1d}".format(job)
nums = line.split()
prec = ''
for m,dur in zip(nums[::2],nums[1::2]):
task = (j,'M{0:s}'.format(m))
if prec:
TASKS[task] = {'dur':int(dur), 'prec':prec}
else:
TASKS[task] = {'dur':int(dur), 'prec':None}
prec = task
job += 1
Visualize(JobShop(TASKS))
```
### Recalculate Benchmark Problem with a Zero-Wait Policy
The following calculation is quite intensive and will take several minutes to finish with the `gurobi` solver.
```
Visualize(JobShop(TASKS, ZW=True))
```
<a href="https://colab.research.google.com/github/QasimMahmood98/machinelearning/blob/main/Eye.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
#Importing data
from google.colab import files
uploaded = files.upload()
df = pd.read_csv('17006903.csv')
df.head(7)
#count the number of rows and columns
df.shape
#count the number of missing values in each column
df.isna().sum()
#count how many rows are labelled 0 (good) and 1 (not good)
df['B'].value_counts()
#Visualise
sns.countplot(df['B'], label='Count')
#looking at data types
df.dtypes
#encode categorical data values
from sklearn.preprocessing import LabelEncoder
labelencoder_Y = LabelEncoder()
df.iloc[:,1] = labelencoder_Y.fit_transform(df.iloc[:,1].values)
#create a pair plot
sns.pairplot(df.iloc[:,1:6], hue="B")
#print first 5 rows of new data
df.head(5)
#correlations
df.iloc[:,1:12].corr()
#visualise
plt.figure(figsize=(10,10))
sns.heatmap(df.iloc[:,1:12].corr(), annot=True, fmt='.0%')
#splitting into independent (X) and dependent (Y) variables
X = df.iloc[:,2:15].values #feature columns
Y = df.iloc[:,1].values #diagnosis label
#split data set into 50% training and 50% testing
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split (X,Y, test_size = 0.50, random_state=0)
#scale the data
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test) #apply the scaler fitted on the training data
#function for models
def models(X_train, Y_train):
    #logistic regression model
from sklearn.linear_model import LogisticRegression
log = LogisticRegression(random_state=0)
log.fit(X_train, Y_train)
    #decision tree classifier
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(criterion = 'entropy', random_state=0)
tree.fit(X_train, Y_train)
#Random Forest Classifier
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(n_estimators = 10, criterion = 'entropy', random_state=0)
forest.fit(X_train, Y_train)
#print accuracy
print('[0]Logistic Regression Training Accuracy:', log.score(X_train, Y_train))
print('[1]Decision Tree classifier Training Accuracy:', tree.score(X_train, Y_train))
print('[2]Random Forest classifier Training Accuracy:', forest.score(X_train, Y_train))
return log, tree, forest
#getting models
model = models (X_train,Y_train)
#test data / testing model accuracy
from sklearn.metrics import confusion_matrix
for i in range(len(model)):
print('Model' ,i)
cm = confusion_matrix(Y_test, model[i].predict(X_test))
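    # With scikit-learn's default label ordering, cm[0][0] counts correctly classified
    # class-0 samples and cm[1][1] correctly classified class-1 samples; the accuracy
    # formula below only uses the diagonal, so the TP/TN naming is interchangeable here.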
TP = cm [0][0]
TN = cm [1][1]
FN = cm [1][0]
FP = cm [0][1]
print(cm)
print('Testing Accuracy = ', (TP + TN)/ (TP + TN + FN + FP))
print()
# show another metrics
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score
for i in range(len(model)):
print('Model' ,i)
print(classification_report(Y_test, model[i].predict(X_test)))
print(accuracy_score(Y_test, model[i].predict(X_test)))
print()
#printing prediction
pred = model[2].predict(X_test)
print (pred)
print()
print(Y_test)
```
# QCEngine
Full [QCEngine documentation](http://docs.qcarchive.molssi.org/projects/QCEngine) is available.
QCEngine is a quantum chemistry abstraction layer where many different quantum chemistry (or quantum-chemistry-like!) programs can be run with identical input and output abstractions that match the [MolSSI QCSchema](https://github.com/MolSSI/QCSchema).
Begin by importing `qcengine`.
```
import qcelemental as qcel
import qcengine as qcng
```
We can list all programs that QCEngine currently supports.
It should be noted that there are many programs which provide force field or machine learning potential evaluation (e.g. `rdkit` and `torchani`)
in addition to the traditional quantum chemistry programs.
```
qcng.list_all_programs()
```
We can then list all programs that QCEngine has detected on the current resource. This list will vary depending on installed packages. As a note, QCEngine does not install programs by default, and these must be installed separately.
```
qcng.list_available_programs()
```
## Single Computations
QCEngine makes the distinction between a "single" evaluation which corresponds to a single molecular geometry
and a "procedure" which involves multiple geometries or multiple molecules.
"Single" evaluations include energy, gradient, Hessian, and property quantities.
"Procedures" include geometry optimization and other complex multi-step procedures.
First, we can build a Molecule object using the QCElemental molecule builder:
```
mol = qcel.models.Molecule(geometry=[[0, 0, 0], [0, 1.5, 0], [0, 0, 1.5]],
symbols=["O", "H", "H"],
connectivity=[[0, 1, 1], [0, 2, 1]])
mol
```
We can then provide minimal input for a quantum chemistry job which specifies the molecule, driver, and model that the computation should be run under:
```
computation = {
"molecule": mol,
"driver": "energy",
"model": {"method": "B3LYP", "basis": "6-31g"}
}
ret = qcng.compute(computation, "psi4")
```
The result contains many attributes that hold relevant data.
We can access the `return_result` which contains the desired value as determined by the `driver` input field.
In this case, it is the B3LYP/6-31g energy (in Hartree):
```
ret.return_result
```
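Because the `driver` field controls which quantity is computed, the same call pattern can request a gradient instead of an energy. The following sketch simply swaps the driver and assumes Psi4 is still the installed backend:
```
grad_computation = {
    "molecule": mol,
    "driver": "gradient",
    "model": {"method": "B3LYP", "basis": "6-31g"}
}
grad = qcng.compute(grad_computation, "psi4")
grad.return_result  # Cartesian energy gradient in atomic units
```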
QCEngine automatically parses additional data about the state of the computation and pulls several other component fields. Here we can see the energy breakdown as well as the basis information:
```
ret.properties.dict()
```
Finally, QCEngine records much of the state of the computation such as the hardware it was run on, the program it was run with, and the versions of programs used:
```
ret.provenance.dict()
```
## Procedures
Since we created a pretty poor molecule to start with, we should optimize it first under a force field method to have a reasonable geometry.
Here, we will use the standalone `geomeTRIC` package coupled with the `rdkit` force field evaluator.
```
oh_bond, hoh_angle = mol.measure([[0, 1], [1, 0, 2]])
print(f"O-H Bond length (Bohr): {oh_bond}")
print(f"H-O-H Angle (degrees): {hoh_angle}")
opt_input = {
"keywords": {
"program": "rdkit"
},
"input_specification": {
"driver": "gradient",
"model": {"method": "UFF"},
},
"initial_molecule": mol
}
opt = qcng.compute_procedure(opt_input, "geometric")
opt
```
We can first check the geometry of the final molecule and see that it is something much more reasonable:
```
opt_mol = opt.final_molecule
oh_bond, hoh_angle = opt_mol.measure([[0, 1], [1, 0, 2]])
print(f"O-H Bond length (Bohr): {oh_bond}")
print(f"H-O-H Angle (degrees): {hoh_angle}")
```
We can explore additional data generated with this geometry optimization including details of every gradient evaluation performed:
```
opt.trajectory
```
If desired, we can also look at the standard output of the `geomeTRIC` program:
```
print(opt.stdout)
```
## Conclusion
These are some of the capabilities QCEngine offers; check out the full [documentation](http://docs.qcarchive.molssi.org/projects/QCEngine) for more.
If you like the project, consider starring us on [GitHub](https://github.com/MolSSI/QCEngine) or if you have any questions, join our [Slack](https://join.slack.com/t/qcdb/shared_invite/enQtNDIzNTQ2OTExODk0LWM3OTgxN2ExYTlkMTlkZjA0OTExZDlmNGRlY2M4NWJlNDlkZGQyYWUxOTJmMzc3M2VlYzZjMjgxMDRkYzFmOTE) channel.
```
# default_exp tslearner
```
# TSLearners (TSClassifier, TSRegressor, TSForecaster)
> This contains a set of time series learners with a new API that simplifies learner creation.
```
#export
from tsai.imports import *
from tsai.learner import *
from tsai.data.all import *
from tsai.models.InceptionTime import *
from tsai.models.utils import *
```
## TSClassifier API
***
**Commonly used arguments:**
* **X:** array-like of shape (n_samples, n_steps) or (n_samples, n_features, n_steps) with the input time series samples. Internally, they will be converted to torch tensors.
* **y:** array-like of shape (n_samples), (n_samples, n_outputs) or (n_samples, n_features, n_outputs) with the target. Internally, they will be converted to torch tensors. Default=None. None is used for unlabeled datasets.
* **splits:** lists of indices used to split data between train and validation. Default=None. If no splits are passed, data will be split 80:20 between train and test without shuffling.
* **tfms:** item transforms that will be applied to each sample individually. Default=`[None, TSClassification()]` which is commonly used in most single label datasets.
* **batch_tfms:** transforms applied to each batch. Default=None.
* **bs:** batch size (if `batch_size` is provided it will override `bs`). An int or a list of ints can be passed. Default=`[64, 128]`. If a list of ints, the first one will be used for training and the second for validation (the validation batch size can be larger since it doesn't require backpropagation, which consumes more memory).
* **arch:** indicates which architecture will be used. Default: InceptionTime.
* **arch_config:** keyword arguments passed to the selected architecture. Default={}.
* **pretrained:** indicates if pretrained model weights will be used. Default=False.
* **weights_path:** indicates the path to the pretrained weights in case they are used.
* **loss_func:** allows you to pass any loss function. Default=None (in which case CrossEntropyLossFlat() is applied).
* **opt_func:** allows you to pass an optimizer. Default=Adam.
* **lr:** learning rate. Default=0.001.
* **metrics:** list of metrics passed to the Learner. Default=accuracy.
* **cbs:** list of callbacks passed to the Learner. Default=None.
* **wd:** is the default weight decay used when training the model. Default=None.
**Less frequently used arguments:**
* **sel_vars:** used to select which of the features in multivariate datasets are used. Default=None means all features are used. If necessary a list-like of indices can be used (eg.`[0,3,5]`).
* **sel_steps:** used to select the steps used. Default=None means all steps are used. If necessary a list-like of indices can be used (eg. `slice(-50, None)` will select the last 50 steps from each time series).
* **inplace:** indicates whether tfms are applied during instantiation or on-the-fly. Default=True, which means that tfms will be applied during instantiation. This results in faster training.
* **shuffle_train:** indicates whether to shuffle the training set every time the dataloader is fully read/iterated or not. This doesn't have an impact on the validation set which is never shuffled. Default=True.
* **drop_last:** if True the last incomplete training batch is dropped (thus ensuring training batches of equal size). This doesn't have an impact on the validation set where samples are never dropped. Default=True.
* **num_workers:** how many subprocesses to use for data loading. 0 means that the data will be loaded in the main process. Default=0.
* **do_setup:** indicates if the Pipeline.setup method should be called during initialization. Default=True.
* **device:** Defaults to default_device(), which is CUDA by default. You can specify the device as `torch.device('cpu')`.
* **verbose:** controls the verbosity when fitting and predicting.
* **exclude_head:** indicates whether the head of the pretrained model needs to be removed or not. Default=True.
* **cut:** indicates the position where the pretrained model head needs to be cut. Defaults=-1.
* **init:** allows you to set to None (no initialization applied), set to True (in which case nn.init.kaiming_normal_ will be applied) or pass an initialization. Default=None.
* **splitter:** To do transfer learning, you need to pass a splitter to Learner. This should be a function taking the model and returning a collection of parameter groups, e.g. a list of list of parameters. Default=trainable_params. If the model has a backbone and a head, it will then be split in those 2 groups.
* **path** and **model_dir:** are used to save and/or load models. Often path will be inferred from dls, but you can override it or pass a Path object to model_dir.
* **wd_bn_bias:** controls if weight decay is applied to BatchNorm layers and bias. Default=False.
* **train_bn:** indicates whether BatchNorm layers are trained even when they belong to frozen parameter groups. Default=True.
* **moms:** the default momentums used in Learner.fit_one_cycle. Default=(0.95, 0.85, 0.95).
```
#export
defaults.cat_tfms = [None, TSClassification()]
class TSClassifier(Learner):
def __init__(self, X, y=None, splits=None, tfms=defaults.cat_tfms, inplace=True, sel_vars=None, sel_steps=None,
bs=[64, 128], batch_size=None, batch_tfms=None, shuffle_train=True, drop_last=True, num_workers=0, do_setup=True, device=None,
arch=None, arch_config={}, pretrained=False, weights_path=None, exclude_head=True, cut=-1, init=None,
loss_func=None, opt_func=Adam, lr=0.001, metrics=accuracy, cbs=None, wd=None, wd_bn_bias=False,
train_bn=True, moms=(0.95, 0.85, 0.95), path='.', model_dir='models', splitter=trainable_params, verbose=False):
#Splits
if splits is None: splits = TSSplitter()(X)
# Batch size
if batch_size is not None:
bs = batch_size
# DataLoaders
dls = get_ts_dls(X, y=y, splits=splits, sel_vars=sel_vars, sel_steps=sel_steps, tfms=tfms, inplace=inplace, path=path, bs=bs,
batch_tfms=batch_tfms, num_workers=num_workers, device=device, shuffle_train=shuffle_train, drop_last=drop_last)
if loss_func is None:
if hasattr(dls, 'loss_func'): loss_func = dls.loss_func
elif hasattr(dls, 'cat') and not dls.cat: loss_func = MSELossFlat()
elif hasattr(dls, 'train_ds') and hasattr(dls.train_ds, 'loss_func'): loss_func = dls.train_ds.loss_func
else: loss_func = CrossEntropyLossFlat()
# Model
if init is True:
init = nn.init.kaiming_normal_
if arch is None:
arch = InceptionTime
if 'xresnet' in arch.__name__.lower() and not '1d' in arch.__name__.lower():
model = build_tsimage_model(arch, dls=dls, pretrained=pretrained, init=init, device=device, verbose=verbose, **arch_config)
elif 'tabularmodel' in arch.__name__.lower():
            model = build_tabular_model(arch, dls=dls, device=device, **arch_config)
else:
model = build_ts_model(arch, dls=dls, device=device, verbose=verbose, pretrained=pretrained, weights_path=weights_path,
exclude_head=exclude_head, cut=cut, init=init, **arch_config)
setattr(model, "__name__", arch.__name__)
try:
model[0], model[1]
splitter = ts_splitter
except:
pass
super().__init__(dls, model, loss_func=loss_func, opt_func=opt_func, lr=lr, cbs=cbs, metrics=metrics, path=path, splitter=splitter,
model_dir=model_dir, wd=wd, wd_bn_bias=wd_bn_bias, train_bn=train_bn, moms=moms)
from tsai.models.InceptionTimePlus import *
X, y, splits = get_classification_data('OliveOil', split_data=False)
batch_tfms = [TSStandardize(by_sample=True)]
learn = TSClassifier(X, y, splits=splits, batch_tfms=batch_tfms, metrics=accuracy, arch=InceptionTimePlus, arch_config=dict(fc_dropout=.5))
learn.fit_one_cycle(1)
```
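Once trained, the learner exposes the standard fastai `Learner` API for inference and persistence. A minimal sketch (nothing below is specific to `tsai`; `get_preds` and `export` are inherited from fastai):
```
probas, targets = learn.get_preds(dl=learn.dls.valid)
preds = probas.argmax(dim=-1)
print((preds == targets).float().mean())  # validation accuracy
learn.export("ts_classifier.pkl")         # serialize the learner for later use
```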
## TSRegressor API
***
**Commonly used arguments:**
* **X:** array-like of shape (n_samples, n_steps) or (n_samples, n_features, n_steps) with the input time series samples. Internally, they will be converted to torch tensors.
* **y:** array-like of shape (n_samples), (n_samples, n_outputs) or (n_samples, n_features, n_outputs) with the target. Internally, they will be converted to torch tensors. Default=None. None is used for unlabeled datasets.
* **splits:** lists of indices used to split data between train and validation. Default=None. If no splits are passed, data will be split 80:20 between train and test without shuffling.
* **tfms:** item transforms that will be applied to each sample individually. Default=`[None, TSRegression()]` which is commonly used in most single label datasets.
* **batch_tfms:** transforms applied to each batch. Default=None.
* **bs:** batch size (if `batch_size` is provided it will override `bs`). An int or a list of ints can be passed. Default=`[64, 128]`. If a list of ints, the first one will be used for training and the second for validation (the validation batch size can be larger since it doesn't require backpropagation, which consumes more memory).
* **arch:** indicates which architecture will be used. Default: InceptionTime.
* **arch_config:** keyword arguments passed to the selected architecture. Default={}.
* **pretrained:** indicates if pretrained model weights will be used. Default=False.
* **weights_path:** indicates the path to the pretrained weights in case they are used.
* **loss_func:** allows you to pass any loss function. Default=None (in which case MSELossFlat() is applied).
* **opt_func:** allows you to pass an optimizer. Default=Adam.
* **lr:** learning rate. Default=0.001.
* **metrics:** list of metrics passed to the Learner. Default=None.
* **cbs:** list of callbacks passed to the Learner. Default=None.
* **wd:** is the default weight decay used when training the model. Default=None.
**Less frequently used arguments:**
* **sel_vars:** used to select which of the features in multivariate datasets are used. Default=None means all features are used. If necessary a list-like of indices can be used (eg.`[0,3,5]`).
* **sel_steps:** used to select the steps used. Default=None means all steps are used. If necessary a list-like of indices can be used (eg. `slice(-50, None)` will select the last 50 steps from each time series).
* **inplace:** indicates whether tfms are applied during instantiation or on-the-fly. Default=True, which means that tfms will be applied during instantiation. This results in faster training.
* **shuffle_train:** indicates whether to shuffle the training set every time the dataloader is fully read/iterated or not. This doesn't have an impact on the validation set which is never shuffled. Default=True.
* **drop_last:** if True the last incomplete training batch is dropped (thus ensuring training batches of equal size). This doesn't have an impact on the validation set where samples are never dropped. Default=True.
* **num_workers:** how many subprocesses to use for data loading. 0 means that the data will be loaded in the main process. Default=0.
* **do_setup:** indicates if the Pipeline.setup method should be called during initialization. Default=True.
* **device:** Defaults to default_device(), which is CUDA by default. You can specify the device as `torch.device('cpu')`.
* **verbose:** controls the verbosity when fitting and predicting.
* **exclude_head:** indicates whether the head of the pretrained model needs to be removed or not. Default=True.
* **cut:** indicates the position where the pretrained model head needs to be cut. Defaults=-1.
* **init:** allows you to set to None (no initialization applied), set to True (in which case nn.init.kaiming_normal_ will be applied) or pass an initialization. Default=None.
* **splitter:** To do transfer learning, you need to pass a splitter to Learner. This should be a function taking the model and returning a collection of parameter groups, e.g. a list of list of parameters. Default=trainable_params. If the model has a backbone and a head, it will then be split in those 2 groups.
* **path** and **model_dir:** are used to save and/or load models. Often path will be inferred from dls, but you can override it or pass a Path object to model_dir.
* **wd_bn_bias:** controls if weight decay is applied to BatchNorm layers and bias. Default=False.
* **train_bn:** indicates whether BatchNorm layers are trained even when they belong to frozen parameter groups. Default=True.
* **moms:** the default momentums used in Learner.fit_one_cycle. Default=(0.95, 0.85, 0.95).
```
#export
defaults.reg_tfms = [None, TSRegression()]
class TSRegressor(Learner):
def __init__(self, X, y=None, splits=None, tfms=defaults.reg_tfms, inplace=True, sel_vars=None, sel_steps=None,
bs=[64, 128], batch_size=None, batch_tfms=None, shuffle_train=True, drop_last=True, num_workers=0, do_setup=True, device=None,
arch=None, arch_config={}, pretrained=False, weights_path=None, exclude_head=True, cut=-1, init=None,
loss_func=None, opt_func=Adam, lr=0.001, metrics=None, cbs=None, wd=None, wd_bn_bias=False,
train_bn=True, moms=(0.95, 0.85, 0.95), path='.', model_dir='models', splitter=trainable_params, verbose=False):
#Splits
if splits is None: splits = TSSplitter()(X)
# Batch size
if batch_size is not None:
bs = batch_size
# DataLoaders
dls = get_ts_dls(X, y=y, splits=splits, sel_vars=sel_vars, sel_steps=sel_steps, tfms=tfms, inplace=inplace, path=path, bs=bs,
batch_tfms=batch_tfms, num_workers=num_workers, device=device,
shuffle_train=shuffle_train, drop_last=drop_last)
if loss_func is None:
if hasattr(dls, 'loss_func'): loss_func = dls.loss_func
elif hasattr(dls, 'cat') and not dls.cat: loss_func = MSELossFlat()
elif hasattr(dls, 'train_ds') and hasattr(dls.train_ds, 'loss_func'): loss_func = dls.train_ds.loss_func
else: loss_func = MSELossFlat()
# Model
if init is True:
init = nn.init.kaiming_normal_
if arch is None:
arch = InceptionTime
if 'xresnet' in arch.__name__.lower() and not '1d' in arch.__name__.lower():
model = build_tsimage_model(arch, dls=dls, pretrained=pretrained, init=init, device=device, verbose=verbose, **arch_config)
elif 'tabularmodel' in arch.__name__.lower():
            model = build_tabular_model(arch, dls=dls, device=device, **arch_config)
else:
model = build_ts_model(arch, dls=dls, device=device, verbose=verbose, pretrained=pretrained, weights_path=weights_path,
exclude_head=exclude_head, cut=cut, init=init, **arch_config)
setattr(model, "__name__", arch.__name__)
try:
model[0], model[1]
splitter = ts_splitter
except:
pass
super().__init__(dls, model, loss_func=loss_func, opt_func=opt_func, lr=lr, cbs=cbs, metrics=metrics, path=path, splitter=splitter,
model_dir=model_dir, wd=wd, wd_bn_bias=wd_bn_bias, train_bn=train_bn, moms=moms)
from tsai.models.TST import *
X, y, splits = get_regression_data('AppliancesEnergy', split_data=False)
batch_tfms = [TSStandardize()]
learn = TSRegressor(X, y, splits=splits, batch_tfms=batch_tfms, arch=TST, metrics=mae, bs=512)
learn.fit_one_cycle(1, 1e-4)
```
## TSForecaster API
***
**Commonly used arguments:**
* **X:** array-like of shape (n_samples, n_steps) or (n_samples, n_features, n_steps) with the input time series samples. Internally, they will be converted to torch tensors.
* **y:** array-like of shape (n_samples), (n_samples, n_outputs) or (n_samples, n_features, n_outputs) with the target. Internally, they will be converted to torch tensors. Default=None. None is used for unlabeled datasets.
* **splits:** lists of indices used to split data between train and validation. Default=None. If no splits are passed, data will be split 80:20 between train and test without shuffling.
* **tfms:** item transforms that will be applied to each sample individually. Default=`[None, TSForecasting()]`, which is commonly used in most forecasting tasks.
* **batch_tfms:** transforms applied to each batch. Default=None.
* **bs:** batch size (if `batch_size` is provided it will override `bs`). An int or a list of ints can be passed. Default=`[64, 128]`. If a list of ints, the first one will be used for training and the second for validation (the validation batch size can be larger since it doesn't require backpropagation, which consumes more memory).
* **arch:** indicates which architecture will be used. Default: InceptionTime.
* **arch_config:** keyword arguments passed to the selected architecture. Default={}.
* **pretrained:** indicates if pretrained model weights will be used. Default=False.
* **weights_path:** indicates the path to the pretrained weights in case they are used.
* **loss_func:** allows you to pass any loss function. Default=None (in which case MSELossFlat() is applied).
* **opt_func:** allows you to pass an optimizer. Default=Adam.
* **lr:** learning rate. Default=0.001.
* **metrics:** list of metrics passed to the Learner. Default=None.
* **cbs:** list of callbacks passed to the Learner. Default=None.
* **wd:** is the default weight decay used when training the model. Default=None.
**Less frequently used arguments:**
* **sel_vars:** used to select which of the features in multivariate datasets are used. Default=None means all features are used. If necessary a list-like of indices can be used (eg.`[0,3,5]`).
* **sel_steps:** used to select the steps used. Default=None means all steps are used. If necessary a list-like of indices can be used (eg. `slice(-50, None)` will select the last 50 steps from each time series).
* **inplace:** indicates whether tfms are applied during instantiation or on-the-fly. Default=True, which means that tfms will be applied during instantiation. This results in faster training.
* **shuffle_train:** indicates whether to shuffle the training set every time the dataloader is fully read/iterated or not. This doesn't have an impact on the validation set which is never shuffled. Default=True.
* **drop_last:** if True the last incomplete training batch is dropped (thus ensuring training batches of equal size). This doesn't have an impact on the validation set where samples are never dropped. Default=True.
* **num_workers:** how many subprocesses to use for data loading. 0 means that the data will be loaded in the main process. Default=0.
* **do_setup:** indicates if the Pipeline.setup method should be called during initialization. Default=True.
* **device:** Defaults to default_device(), which is CUDA by default. You can specify the device as `torch.device('cpu')`.
* **verbose:** controls the verbosity when fitting and predicting.
* **exclude_head:** indicates whether the head of the pretrained model needs to be removed or not. Default=True.
* **cut:** indicates the position where the pretrained model head needs to be cut. Defaults=-1.
* **init:** allows you to set to None (no initialization applied), set to True (in which case nn.init.kaiming_normal_ will be applied) or pass an initialization. Default=None.
* **splitter:** To do transfer learning, you need to pass a splitter to Learner. This should be a function taking the model and returning a collection of parameter groups, e.g. a list of list of parameters. Default=trainable_params. If the model has a backbone and a head, it will then be split in those 2 groups.
* **path** and **model_dir:** are used to save and/or load models. Often path will be inferred from dls, but you can override it or pass a Path object to model_dir.
* **wd_bn_bias:** controls if weight decay is applied to BatchNorm layers and bias. Default=False.
* **train_bn:** indicates whether BatchNorm layers are trained even when they belong to frozen parameter groups. Default=True.
* **moms:** the default momentums used in Learner.fit_one_cycle. Default=(0.95, 0.85, 0.95).
```
#export
defaults.fcst_tfms = [None, TSForecasting()]
class TSForecaster(Learner):
def __init__(self, X, y=None, splits=None, tfms=defaults.fcst_tfms, inplace=True, sel_vars=None, sel_steps=None,
bs=[64, 128], batch_size=None, batch_tfms=None, shuffle_train=True, drop_last=True, num_workers=0, do_setup=True, device=None,
arch=None, arch_config={}, pretrained=False, weights_path=None, exclude_head=True, cut=-1, init=None,
loss_func=None, opt_func=Adam, lr=0.001, metrics=None, cbs=None, wd=None, wd_bn_bias=False,
train_bn=True, moms=(0.95, 0.85, 0.95), path='.', model_dir='models', splitter=trainable_params, verbose=False):
#Splits
if splits is None: splits = TSSplitter()(X)
# Batch size
if batch_size is not None:
bs = batch_size
# DataLoaders
dls = get_ts_dls(X, y=y, splits=splits, sel_vars=sel_vars, sel_steps=sel_steps, tfms=tfms, inplace=inplace, path=path, bs=bs,
batch_tfms=batch_tfms, num_workers=num_workers, device=device,
shuffle_train=shuffle_train, drop_last=drop_last)
if loss_func is None:
if hasattr(dls, 'loss_func'): loss_func = dls.loss_func
elif hasattr(dls, 'cat') and not dls.cat: loss_func = MSELossFlat()
elif hasattr(dls, 'train_ds') and hasattr(dls.train_ds, 'loss_func'): loss_func = dls.train_ds.loss_func
else: loss_func = MSELossFlat()
# Model
if init is True:
init = nn.init.kaiming_normal_
if arch is None:
arch = InceptionTime
if 'xresnet' in arch.__name__.lower() and not '1d' in arch.__name__.lower():
model = build_tsimage_model(arch, dls=dls, pretrained=pretrained, init=init, device=device, verbose=verbose, **arch_config)
elif 'tabularmodel' in arch.__name__.lower():
            model = build_tabular_model(arch, dls=dls, device=device, **arch_config)
else:
model = build_ts_model(arch, dls=dls, device=device, verbose=verbose, pretrained=pretrained, weights_path=weights_path,
exclude_head=exclude_head, cut=cut, init=init, **arch_config)
setattr(model, "__name__", arch.__name__)
try:
model[0], model[1]
splitter = ts_splitter
except:
pass
super().__init__(dls, model, loss_func=loss_func, opt_func=opt_func, lr=lr, cbs=cbs, metrics=metrics, path=path, splitter=splitter,
model_dir=model_dir, wd=wd, wd_bn_bias=wd_bn_bias, train_bn=train_bn, moms=moms)
from tsai.models.TSTPlus import *
ts = get_forecasting_time_series('Sunspots')
X, y = SlidingWindowSplitter(60, horizon=1)(ts)
splits = TSSplitter(235)(y)
batch_tfms = [TSStandardize(by_var=True)]
learn = TSForecaster(X, y, splits=splits, batch_tfms=batch_tfms, arch=TST, arch_config=dict(fc_dropout=.5), metrics=mae, bs=512)
learn.fit_one_cycle(1)
#hide
out = create_scripts(); beep(out)
```
# T1069.001 - Permission Groups Discovery: Local Groups
Adversaries may attempt to find local system groups and permission settings. The knowledge of local system permission groups can help adversaries determine which groups exist and which users belong to a particular group. Adversaries may use this information to determine which users have elevated permissions, such as the users found within the local administrators group.
Commands such as <code>net localgroup</code> of the [Net](https://attack.mitre.org/software/S0039) utility, <code>dscl . -list /Groups</code> on macOS, and <code>groups</code> on Linux can list local groups.
## Atomic Tests
```
#Import the Module before running the tests.
# Check out the Jupyter Notebook at https://github.com/cyb3rbuff/TheAtomicPlaybook to run PS scripts.
Import-Module /Users/0x6c/AtomicRedTeam/atomics/invoke-atomicredteam/Invoke-AtomicRedTeam.psd1 -Force
```
### Atomic Test #1 - Permission Groups Discovery (Local)
Permission Groups Discovery
**Supported Platforms:** macos, linux
#### Attack Commands: Run with `sh`
```sh
if [ -x "$(command -v dscacheutil)" ]; then dscacheutil -q group; else echo "dscacheutil is missing from the machine. skipping..."; fi;
if [ -x "$(command -v dscl)" ]; then dscl . -list /Groups; else echo "dscl is missing from the machine. skipping..."; fi;
if [ -x "$(command -v groups)" ]; then groups; else echo "groups is missing from the machine. skipping..."; fi;
```
```
Invoke-AtomicTest T1069.001 -TestNumbers 1
```
### Atomic Test #2 - Basic Permission Groups Discovery Windows (Local)
Basic Permission Groups Discovery for Windows. This test will display some errors if run on a computer not connected to a domain. Upon execution, local group information will be displayed.
**Supported Platforms:** windows
#### Attack Commands: Run with `command_prompt`
```command_prompt
net localgroup
net localgroup "Administrators"
```
```
Invoke-AtomicTest T1069.001 -TestNumbers 2
```
### Atomic Test #3 - Permission Groups Discovery PowerShell (Local)
Permission Groups Discovery utilizing PowerShell. This test will display some errors if run on a computer not connected to a domain. Upon execution, local group information will be displayed.
**Supported Platforms:** windows
#### Attack Commands: Run with `powershell`
```powershell
get-localgroup
Get-LocalGroupMember -Name "Administrators"
```
```
Invoke-AtomicTest T1069.001 -TestNumbers 3
```
## Detection
System and network discovery techniques normally occur throughout an operation as an adversary learns the environment. Data and events should not be viewed in isolation, but as part of a chain of behavior that could lead to other activities, such as Lateral Movement, based on the information obtained.
Monitor processes and command-line arguments for actions that could be taken to gather system and network information. Remote access tools with built-in features may interact directly with the Windows API to gather information. Information may also be acquired through Windows system management tools such as [Windows Management Instrumentation](https://attack.mitre.org/techniques/T1047) and [PowerShell](https://attack.mitre.org/techniques/T1059/001).
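As a rough illustration of the guidance above (not part of the Atomic Red Team tests), the sketch below queries Sysmon process-creation events (Event ID 1) for command lines that enumerate local groups. It assumes Sysmon is installed and logging to its default operational channel.
```powershell
$filter = @{ LogName = 'Microsoft-Windows-Sysmon/Operational'; Id = 1 }
Get-WinEvent -FilterHashtable $filter -MaxEvents 5000 |
    Where-Object { $_.Message -match 'net1?\s+localgroup|Get-LocalGroupMember' } |
    Select-Object TimeCreated, Message
```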
```
# !wget https://f000.backblazeb2.com/file/malay-dataset/tagging/ontonotes5/processed-ontonotes5.json
import json
from sklearn.model_selection import train_test_split
import random
with open('processed-ontonotes5.json') as fopen:
data = json.load(fopen)
from collections import defaultdict
entities = defaultdict(list)
for i in data:
entities['text'].append(i[0])
entities['label'].append(i[1])
# !wget https://raw.githubusercontent.com/huseinzol05/Malay-Dataset/master/tagging/entities-OntoNotes5/PERSON/kabinet/mbkm.txt
# !wget https://raw.githubusercontent.com/huseinzol05/Malay-Dataset/master/tagging/entities-OntoNotes5/PERSON/kabinet/menteri.txt
# !wget https://raw.githubusercontent.com/huseinzol05/Malay-Dataset/master/tagging/entities-OntoNotes5/PERSON/kabinet/setiausaha-politik.txt
# !wget https://raw.githubusercontent.com/huseinzol05/Malay-Dataset/master/tagging/entities-OntoNotes5/PERSON/kabinet/timbalan-menteri.txt
# !wget https://raw.githubusercontent.com/huseinzol05/Malay-Dataset/master/tagging/entities/popit-persons.json
# !wget https://raw.githubusercontent.com/huseinzol05/Malay-Dataset/master/tagging/entities-OntoNotes5/PERSON/arab-boy.json
# !wget https://raw.githubusercontent.com/huseinzol05/Malay-Dataset/master/tagging/entities-OntoNotes5/PERSON/arab-girl.json
# !wget https://raw.githubusercontent.com/huseinzol05/Malay-Dataset/master/tagging/entities-OntoNotes5/PERSON/calon-wakil-rakyat.csv
# !wget https://raw.githubusercontent.com/huseinzol05/Malay-Dataset/master/tagging/entities-OntoNotes5/PERSON/chinese-boy.json
# !wget https://raw.githubusercontent.com/huseinzol05/Malay-Dataset/master/tagging/entities-OntoNotes5/PERSON/chinese-girl.json
# !wget https://raw.githubusercontent.com/huseinzol05/Malay-Dataset/master/tagging/entities-OntoNotes5/PERSON/indian-boy.json
# !wget https://raw.githubusercontent.com/huseinzol05/Malay-Dataset/master/tagging/entities-OntoNotes5/PERSON/indian-girl.json
with open('mbkm.txt') as fopen:
data = fopen.read().split('\n')
mbkm = []
for d in data:
if not ('BIN ' in d or 'BINTI ' in d):
continue
if ') ' in d:
d = ' '.join(d.split(') ')[1:])
d = d.title()
mbkm.append(d)
with open('menteri.txt') as fopen:
data = fopen.read().split('\n')
menteri = []
for d in data:
if not ('BIN ' in d or 'BINTI ' in d):
continue
if ') ' in d:
d = ' '.join(d.split(') ')[1:])
d = d.title()
menteri.append(d)
with open('setiausaha-politik.txt') as fopen:
data = fopen.read().split('\n')
setiausaha_politik = []
for d in data:
if not ('BIN ' in d or 'BINTI ' in d):
continue
if ')\t' in d:
d = ' '.join(d.split(')\t')[1:])
d = d.title()
setiausaha_politik.append(d)
with open('timbalan-menteri.txt') as fopen:
data = fopen.read().split('\n')
timbalan_menteri = []
for d in data:
if not ('BIN ' in d or 'BINTI ' in d):
continue
if ') ' in d:
d = ' '.join(d.split(') ')[1:])
d = d.title()
timbalan_menteri.append(d)
with open('popit-persons.json') as fopen:
popit = json.load(fopen)
import pandas as pd
df = pd.read_csv('calon-wakil-rakyat.csv').iloc[:,5].tolist()
df = [i.title() for i in df]
menteri = mbkm + menteri + setiausaha_politik + timbalan_menteri + popit + df
with open('arab-boy.json') as fopen:
arab_boy = json.load(fopen)
arab_boy = [i[1] for i in arab_boy]
arab_boy = list(set(' '.join(arab_boy).split()))
with open('arab-girl.json') as fopen:
arab_girl = json.load(fopen)
arab_girl = [i[1] for i in arab_girl]
arab_girl = list(set(' '.join(arab_girl).split()))
with open('chinese-boy.json') as fopen:
chinese_boy = json.load(fopen)
chinese_boy = [i[1] for i in chinese_boy]
chinese_boy = list(set(' '.join(chinese_boy).split()))
with open('chinese-girl.json') as fopen:
chinese_girl = json.load(fopen)
chinese_girl = [i[1] for i in chinese_girl]
chinese_girl = list(set(' '.join(chinese_girl).split()))
with open('indian-boy.json') as fopen:
indian_boy = json.load(fopen)
indian_boy = list(set(indian_boy))
with open('indian-girl.json') as fopen:
indian_girl = json.load(fopen)
indian_girl = list(set(indian_girl))
chinese = list(set(chinese_boy + chinese_girl))
arabic = list(set(arab_boy + arab_girl))
indian = list(set(indian_boy + indian_girl))
# !wget https://raw.githubusercontent.com/rossgoodwin/american-names/master/surnames.json
with open('surnames.json') as fopen:
american = json.load(fopen)
mapping = {0: arabic, 1: chinese, 2: indian, 3: american}
mixed_mapping = {0: [american, chinese], 1: [indian, arabic], 2: [chinese, arabic]}
import random
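# Generate a synthetic person name: either sample `length` tokens from one ethnicity word list,
# or (1-in-5 chance) build a mixed name from two different lists.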
def generate_name(length):
r = random.randint(0,4)
if r == 4:
r = mixed_mapping[random.randint(0, 2)]
l, r = random.choice(r[0]), random.choice(r[1])
name = f'{l} {r}'
else:
s = mapping[r]
name = ' '.join(random.sample(s, length))
while length == 1 and len(name) < 5:
name = ' '.join(random.sample(arabic, length))
return name
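# Collect runs of consecutive token indices labelled PERSON; each run is one entity span.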
results = []
i = 0
while i < len(entities['label']):
r = []
if entities['label'][i] == 'PERSON':
        while i < len(entities['label']) and entities['label'][i] == 'PERSON':
r.append(i)
i += 1
results.append(r)
i += 1
len(results)
import math
import numpy as np
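# Splice a replacement name into the token stream at an existing PERSON span, padding with
# left/right context so the window is roughly `length` tokens (exactly `length` when the new
# name has the same number of tokens as the original span).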
def generate_index(l, name, texts, labels, length):
cp, indices = [], []
b = length - len(l)
left = math.ceil(b / 2)
right = b - left
minus = l[0] - left
if minus < 0:
absolute = np.abs(minus)
right += absolute
left -= absolute
for i in range(l[0] - left, l[0]):
cp.append(texts[i])
indices.append(labels[i])
cp.extend(name)
indices.extend([labels[l[0]] for _ in range(len(name))])
try:
for i in range(l[-1] + 1, l[-1] + right + 1):
cp.append(texts[i])
indices.append(labels[i])
except Exception as e:
print(e)
pass
return cp, indices
train_results, test_results = train_test_split(results, test_size = 0.2)
train_X, train_Y = [], []
repeat = 4
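# Splice each cabinet/politician name into `repeat` randomly chosen training PERSON spans.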
for t in menteri:
for i in range(repeat):
x, y = generate_index(train_results[random.randint(0, len(train_results) - 1)],
t.split(), entities['text'], entities['label'], 50)
if len(x) != len(y):
print('len not same')
continue
train_X.append(x)
train_Y.append(y)
len(train_X)
for r in train_results:
if random.random() > 0.5:
for _ in range(1):
n = generate_name(len(r)).split()
x, y = generate_index(r, n, entities['text'], entities['label'], 50)
if len(x) != len(y):
print('len not same')
continue
train_X.append(x)
train_Y.append(y)
len(train_X)
test_X, test_Y = [], []
for r in test_results:
if random.random() > 0.5:
for _ in range(1):
n = generate_name(len(r)).split()
x, y = generate_index(r, n, entities['text'], entities['label'], 50)
if len(x) != len(y):
print('len not same')
continue
test_X.append(x)
test_Y.append(y)
len(test_X)
len(train_X), len(test_X)
with open('augmentation-person-ontonotes5.json', 'w') as fopen:
json.dump({'train_X': train_X, 'train_Y': train_Y,
'test_X': test_X, 'test_Y': test_Y}, fopen)
```
```
import numpy as np
import pandas as pd
import os
import seaborn as sns
import matplotlib.pyplot as plt
mod_names = ['ActionIn-sin-1-4-20-3', 'ActionIn-sin-1-1-20-3', 'ActionIn-one_hot-1-4-20-3', 'ActionIn-one_hot-1-1-20-3']
mod_names = ['ActionIn-sin-3-4-20-3', 'ActionIn-sin-3-1-20-3',
'ActionIn-one_hot-3-4-20-3', 'ActionIn-one_hot-3-1-20-3']
log_dict = {}
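# For each model configuration, gather log.csv from every matching results folder and concatenate them.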
for model_name in mod_names:
log_list = []
for res_folder in sorted(list(os.walk('../res'))[0][1]):
if res_folder.startswith(model_name):
try:
log_list += [pd.read_csv(f'../res/{res_folder}/log.csv')]
except (FileNotFoundError, NotADirectoryError) as e:
print(f"no log found for {res_folder}!")
print(f'file loaded! log count: {len(log_list)}')
log_dict[model_name] = pd.concat(log_list)
sns.set(rc={'figure.figsize':(11.7,8.27)})
sns.set_style("white")
sns.set_palette("bright")
for key, df in log_dict.items():
rmean = df.groupby('episodes').mean().hinter_loss
rstd = df.groupby('episodes').sem().hinter_loss
x = df.groupby('episodes').mean().index
sns.lineplot(x=x, y=rmean, label=key)
plt.fill_between(x, rmean - rstd, rmean + rstd, alpha=0.1)
plt.ylabel("Loss")
plt.xlabel("Episodes")
plt.legend( loc="upper right")
plt.title('Training Q loss (hinter)')
plt.savefig('lc_diff.pdf', transparent = True, bbox_inches = 'tight', pad_inches = 0)
plt.show()
sns.set(rc={'figure.figsize':(11.7,8.27)})
sns.set_style("white")
sns.set_palette("bright")
for key, df in log_dict.items():
rmean = df.groupby('episodes').mean().guesser_loss
rstd = df.groupby('episodes').sem().guesser_loss
x = df.groupby('episodes').mean().index
sns.lineplot(x=x, y=rmean, label=key)
plt.fill_between(x, rmean - rstd, rmean + rstd, alpha=0.1)
plt.ylabel("Loss")
plt.xlabel("Episodes")
plt.legend( loc="upper right")
plt.title('Training Q loss (guesser)')
plt.savefig('lc_diff.pdf', transparent = True, bbox_inches = 'tight', pad_inches = 0)
plt.show()
sns.set(rc={'figure.figsize':(11.7,8.27)})
sns.set_style("white")
sns.set_palette("bright")
for key, df in log_dict.items():
rmean = df.groupby('episodes').mean()['return']
rstd = df.groupby('episodes').sem()['return']
x = df.groupby('episodes').mean().index
sns.lineplot(x=x, y=rmean, label=key)
plt.fill_between(x, rmean - rstd, rmean + rstd, alpha=0.1)
plt.ylabel("Reward")
plt.xlabel("Episodes")
plt.legend( loc="lower right")
plt.title('Training SP returns')
plt.savefig('lc_diff.pdf', transparent = True, bbox_inches = 'tight', pad_inches = 0)
plt.show()
```
# Dashboarding SEC Text for Financial NLP
U.S. Securities and Exchange Commission (SEC) filings are widely used in finance. Companies file them to inform the public about their business conditions and future outlook. Because of their potential predictive value, SEC filings are a good source of information for workers in finance, ranging from individual investors to executives of large financial corporations. These filings are publicly available to all investors.
In this example notebook, we focus on the following three types of SEC filings: 10-Ks, 10-Qs, and 8-Ks.
* [10-Ks](https://www.investopedia.com/terms/1/10-k.asp) - Annual reports of companies (and will be quite detailed).
* [10-Qs](https://www.investopedia.com/terms/1/10q.asp) - Quarterly reports, except in the quarter in which a 10K is filed (and are less detailed than 10-Ks).
* [8-Ks](https://www.investopedia.com/terms/1/8-k.asp) - Filed at every instance when there is a change in business conditions that is material and needs to be reported. This means that there can be multiple 8-Ks filed throughout the fiscal year.
Throughout the notebook, we present the functionality of SageMaker JumpStart Industry by building an overall dashboard that visualizes the three types of filings with various analyses. We can append several standard financial characteristics, such as *Analyst Recommendation Mean* and *Return on Equity*, but one interesting part of the dashboard is *attribute scoring*. Using word lists derived from natural language processing (NLP) techniques, we score the actual texts of these filings for a number of characteristics, such as risk, uncertainty, and positivity, expressed as word proportions, which provide simple, accessible numbers to represent these traits. With this dashboard, anybody can pull up information and related statistics about any company they are interested in and digest it in a simple, useful way.
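To make the idea of attribute scoring concrete, here is a minimal sketch of scoring a text as a word proportion against a small word list. The `RISK_WORDS` list and the tokenizer below are illustrative assumptions only; the word lists and matching rules used by the actual scorer are more elaborate.
```
import re

# Tiny illustrative word list; the real scorer ships much larger curated lists.
RISK_WORDS = {"risk", "risks", "uncertainty", "volatile", "litigation", "adverse"}

def word_proportion_score(text, word_list):
    """Fraction of tokens in `text` that appear in `word_list`."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(token in word_list for token in tokens)
    return hits / len(tokens)

sample = "Adverse market conditions increase litigation risk and uncertainty."
print(word_proportion_score(sample, RISK_WORDS))  # 0.5 for this toy sentence
```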
## General Steps
This notebook goes through the following steps to demonstrate how to extract texts from specific sections in SEC filings, score the texts, and summarize them.
1. Retrieve and parse 10-K, 10-Q, 8-K filings. Retrieving these filings from SEC's EDGAR service is complicated, and parsing these forms into plain text for further analysis can be time-consuming. We provide the [SageMaker JumpStart Industry Python SDK](https://sagemaker-jumpstart-industry-pack.readthedocs.io/en/latest/index.html) to create a curated dataset in a *single API call*.
2. Create separate dataframes for each of the three types of forms, along with separate columns for each extracted section.
3. Combine two or more sections of the 10-K forms into a single column, called `text2score`, and show how to use the NLP scoring API to add numerical score columns for that text.
4. Add a column with a summary of the `text2score` column.
5. Prepare the final dataframe that can be used as input for a dashboard.
One of the features of this notebook is breaking long SEC filings into separate sections, each of which deals with a different aspect of a company's reporting. The goal of this example notebook is to make accessing and processing text from SEC filings easy, both for investors and for training their algorithms.
**Note**: You can also access this notebook through SageMaker JumpStart that is executable on SageMaker Studio. For more information, see [Amazon SageMaker JumpStart Industry](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-jumpstart-industry.html) in the **<em>Amazon SageMaker Developer Guide</em>**.
>**<span style="color:RED">Important</span>**:
>This example notebook is for demonstrative purposes only. It is not financial advice and should not be relied on as financial or investment advice.
## Financial NLP
Financial NLP is one of the rapidly increasing use cases of ML in industry. To find more discussion about this, see the following survey paper: [Deep Learning for Financial Applications: A Survey](https://arxiv.org/abs/2002.05786). The starting point for a vast amount of financial NLP is about extracting and processing texts in SEC filings. The SEC filings report different types of information related to various events involving companies. To find a complete list of SEC forms, see [Forms List](https://www.sec.gov/forms).
The SEC filings are widely used by financial services and companies as a source of information about companies in order to make trading, lending, investment, and risk management decisions. They contain forward-looking information that helps with forecasts and are written with a view to the future. In addition, in recent times, the value of historical time-series data has degraded, since economies have been structurally transformed by trade wars, pandemics, and political upheavals. Therefore, text as a source of forward-looking information has been increasing in relevance.
There has been an exponential growth in downloads of SEC filings. See [How to Talk When a Machine is Listening: Corporate Disclosure in the Age of AI](https://www.nber.org/papers/w27950); this paper reports that the number of machine downloads of corporate 10-K and 10-Q filings increased from 360,861 in 2003 to 165,318,719 in 2016.
A vast body of academic and practitioner research is based on financial text, and a significant portion of it relies on SEC filings. A recent review article summarizing this work is [Textual Analysis in Finance (2020)](https://www.annualreviews.org/doi/abs/10.1146/annurev-financial-012820-032249).
This notebook describes how a user can quickly retrieve a set of forms, break them into sections, score texts in each section using pre-defined word lists, and prepare a dashboard to filter the data.
## SageMaker Notebook Kernel Setup
Recommended kernel is **conda_python3**.
For the instance type, a larger instance with sufficient memory is helpful for downloading the materials used below.
## Load SDK and Helper Scripts
First, we import required packages and load the S3 bucket from SageMaker session, as shown below.
```
import boto3
import pandas as pd
import sagemaker
pd.set_option("display.max_columns", None)  # show all columns when displaying dataframes
# Prepare the SageMaker session's default S3 bucket and a folder to store processed data
session = sagemaker.Session()
region = session._region_name
bucket = session.default_bucket()
role = sagemaker.get_execution_role()
secdashboard_processed_folder = "jumpstart_industry_secdashboard_processed"
```
### Install the `smjsindustry` library
The following code cell downloads the [`smjsindustry` SDK](https://pypi.org/project/smjsindustry/) and helper scripts from the S3 buckets prepared by SageMaker JumpStart Industry. You will learn how to use the `smjsindustry` SDK which contains various APIs to curate SEC datasets. The dataset in this example was synthetically generated using the `smjsindustry` package's SEC Forms Retrieval tool. For more information, see the [SageMaker JumpStart Industry Python SDK documentation](https://sagemaker-jumpstart-industry-pack.readthedocs.io/en/latest/notebooks/index.html).
```
# Download scripts from S3
notebook_artifact_bucket = f"jumpstart-cache-prod-{region}"
notebook_sdk_prefix = "smfinance-notebook-dependency/smjsindustry"
notebook_script_prefix = "smfinance-notebook-data/sec-dashboard"
# Download smjsindustry SDK
sdk_bucket = f"s3://{notebook_artifact_bucket}/{notebook_sdk_prefix}"
!aws s3 sync $sdk_bucket ./
# Download helper scripts
scripts_bucket = f"s3://{notebook_artifact_bucket}/{notebook_script_prefix}"
!aws s3 sync $scripts_bucket ./sec-dashboard
```
We deliver APIs through the `smjsindustry` client library. The first step requires pip installing a Python package that interacts with a SageMaker processing container. The retrieval, parsing, transforming, and scoring of text is a complex process and uses many different algorithms and packages. To make this seamless and stable for the user, the functionality is packaged into a collection of APIs. For installation and maintenance of the workflow, this approach reduces your effort to a pip install followed by a single API call.
```
# Install smjsindustry SDK
!pip install --no-index smjsindustry-1.0.0-py3-none-any.whl
%pylab inline
```
The preceding line loads in several standard packages, including NumPy, SciPy, and matplotlib.
## Load the functions for extracting the "Item" sections from the forms
We created various helper functions to enable sectioning the SEC forms. These functions take around a minute to load.
```
%run sec-dashboard/SEC_Section_Extraction_Functions.ipynb
```
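The helper notebook above defines `get_form_items`, `items_to_df_row`, the `columns_10K`/`columns_10Q`/`columns_8K` lists, and the header mappings used later. As a rough illustration of what "Item" sectioning involves, the sketch below splits a plain-text filing into items with a regular expression; this is an assumption about the general approach, not the helper's actual implementation, and real filings require far more robust handling.
```
import re

def naive_get_form_items(form_text):
    """Very rough split of a plain-text filing into its 'Item X.' sections."""
    # Match headings such as "Item 1.", "Item 1A.", "Item 7A." at the start of a line.
    pattern = re.compile(r"^(Item\s+\d+[A-B]?)\.", re.IGNORECASE | re.MULTILINE)
    matches = list(pattern.finditer(form_text))
    items = {}
    for i, m in enumerate(matches):
        start = m.end()
        end = matches[i + 1].start() if i + 1 < len(matches) else len(form_text)
        items[m.group(1).title()] = form_text[start:end].strip()
    return items

toy_10k = "Item 1. Business\nWe sell widgets.\nItem 1A. Risk Factors\nDemand may fall."
print(naive_get_form_items(toy_10k).keys())
```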
Next, we import the ```smjsindustry``` package, as shown below.
```
import smjsindustry
from smjsindustry.finance import utils
from smjsindustry import NLPScoreType, NLPSCORE_NO_WORD_LIST
from smjsindustry import NLPScorerConfig, JaccardSummarizerConfig, KMedoidsSummarizerConfig
from smjsindustry import Summarizer, NLPScorer
from smjsindustry.finance.processor import DataLoader, SECXMLFilingParser
from smjsindustry.finance.processor_config import EDGARDataSetConfig
```
## Download the filings you wish to work with
Downloading SEC filings is done from the SEC's Electronic Data Gathering, Analysis, and Retrieval (EDGAR) website, which provides open data access. EDGAR is the primary system under the U.S. Securities and Exchange Commission (SEC) for companies and others submitting documents under the Securities Act of 1933, the Securities Exchange Act of 1934, the Trust Indenture Act of 1939, and the Investment Company Act of 1940. EDGAR contains millions of company and individual filings. The system processes about 3,000 filings per day, serves up 3,000 terabytes of data to the public annually, and accommodates 40,000 new filers per year on average. Below we provide a single API call that will create a dataset of plain-text filings in a few lines of code, for any period of time and for a large number of tickers.
We have wrapped the extraction functionality into a SageMaker processing container and provide this notebook to enable users to download a dataset of filings with metadata such as dates and parsed plain text that can then be used for machine learning using other SageMaker tools. Users only need to specify a date range and a list of ticker symbols and this API will do the rest.
The extracted dataframe is written to S3 storage and to the local notebook instance.
The API below specifies the machine to be used and the volume size. It also specifies the tickers or CIK codes for the companies to be covered, as well as the 3 form types (10-K, 10-Q, 8-K) to be retrieved. The date range is also specified, as well as the CSV filename where the retrieved filings will be stored.
The API is in 3 parts:
1. Set up a dataset configuration (an `EDGARDataSetConfig` object). This specifies (i) the tickers or SEC CIK codes for the companies whose forms are being extracted; (ii) the SEC forms types (in this case 10-K, 10-Q, 8-K); (iii) date range of forms by filing date, (iv) the output CSV file and S3 bucket to store the dataset.
2. Set up a data loader object (a `DataLoader` object). The middle section shows how to assign system resources and has default values in place.
3. Run the data loader (`data_loader.load`).
This initiates a processing job running in a SageMaker container.
>**<span style="color:RED">Important</span>**:
>This example notebook uses data obtained from the SEC EDGAR database. You are responsible for complying with EDGAR’s access terms and conditions located in the [Accessing EDGAR Data](https://www.sec.gov/os/accessing-edgar-data) page.
```
%%time
dataset_config = EDGARDataSetConfig(
tickers_or_ciks=[
"amzn",
"goog",
"27904",
"fb",
"msft",
"uber",
"nflx",
], # list of stock tickers or CIKs
form_types=["10-K", "10-Q", "8-K"], # list of SEC form types
filing_date_start="2019-01-01", # starting filing date
filing_date_end="2020-12-31", # ending filing date
email_as_user_agent="test-user@test.com",
) # user agent email
data_loader = DataLoader(
role=role, # loading job execution role
instance_count=1, # instances number, limit varies with instance type
instance_type="ml.c5.2xlarge", # instance type
volume_size_in_gb=30, # size in GB of the EBS volume to use
volume_kms_key=None, # KMS key ID to encrypt the processing volume
output_kms_key=None, # KMS key ID to encrypt processing job outputs
max_runtime_in_seconds=None, # timeout in seconds. Default is 24 hours.
sagemaker_session=session, # session object
tags=None,
) # a list of key-value pairs
data_loader.load(
dataset_config,
"s3://{}/{}/{}".format(
bucket, secdashboard_processed_folder, "output"
), # output s3 prefix (both bucket and folder names are required)
"dataset_10k_10q_8k_2019_2021.csv", # output file name
wait=True,
logs=True,
)
```
## Copy the file into Studio from the s3 bucket
We can examine the dataframe that was constructed by the API.
```
client = boto3.client("s3")
client.download_file(
bucket,
"{}/{}/{}".format(secdashboard_processed_folder, "output", "dataset_10k_10q_8k_2019_2021.csv"),
"dataset_10k_10q_8k_2019_2021.csv",
)
```
The API prepared a complete dataset: altogether, a few hundred forms were retrieved across the tickers and the three SEC form types.
```
df_forms = pd.read_csv("dataset_10k_10q_8k_2019_2021.csv")
df_forms
```
Here is a breakdown of the few hundred forms by **ticker** and **form_type**.
```
df_forms.groupby(["ticker", "form_type"]).count().reset_index()
```
## Create the dataframe for the extracted item sections from the 10-K filings
In this section, we break the various sections of the 10-K filings into separate columns of the extracted dataframe.
1. Take a subset of the dataframe by specifying `df.form_type == "10-K"`.
2. Extract the sections for each 10-K filing and put them in columns in a separate dataframe.
3. Merge this dataframe with the dataframe from Step 1.
You can examine the cells in the dataframe below to see the text from each section.
```
df = pd.read_csv("dataset_10k_10q_8k_2019_2021.csv")
df_10K = df[df.form_type == "10-K"]
# Construct the DataFrame row by row.
items_10K = pd.DataFrame(columns=columns_10K, dtype=object)
for i in df_10K.index:
form_text = df_10K.text[i]
item_iter = get_form_items(form_text, "10-K")
items_10K.loc[i] = items_to_df_row(item_iter, columns_10K, "10-K")
items_10K.rename(columns=header_mappings_10K, inplace=True)
df_10K = pd.merge(df_10K, items_10K, left_index=True, right_index=True)
df_10K.head(10)
```
Let's take a look at the text in one of the columns to see that there is clean, parsed, plain text provided by the API:
```
print(df_10K["Risk Factors"][138])
```
## Similarly, we can create the dataframe for the extracted item sections from the 10-Q filings
1. Take a subset of the dataframe by specifying `df.form_type == "10-Q"`.
2. Extract the sections for each 10-Q filing and put them in columns in a separate dataframe.
3. Merge this dataframe with the dataframe from Step 1.
```
df = pd.read_csv("dataset_10k_10q_8k_2019_2021.csv")
df_10Q = df[df.form_type == "10-Q"]
# Construct the DataFrame row by row.
items_10Q = pd.DataFrame(columns=columns_10Q, dtype=object)
for i in df_10Q.index:
form_text = df_10Q.text[i]
item_iter = get_form_items(form_text, "10-Q")
items_10Q.loc[i] = items_to_df_row(item_iter, columns_10Q, "10-Q")
items_10Q.rename(columns=header_mappings_10Q, inplace=True)
df_10Q = pd.merge(df_10Q, items_10Q, left_index=True, right_index=True)
df_10Q.head(10)
```
## Create the dataframe for the extracted item sections from the 8-K filings
1. Take a subset of the dataframe by specifying `df.form_type == "8-K"`.
2. Extract the sections for each 8-K filing and put them in columns in a separate dataframe.
3. Merge this dataframe with the dataframe from Step 1.
```
df = pd.read_csv("dataset_10k_10q_8k_2019_2021.csv")
df_8K = df[df.form_type == "8-K"]
# Construct the DataFrame row by row.
items_8K = pd.DataFrame(columns=columns_8K, dtype=object)
for i in df_8K.index:
form_text = df_8K.text[i]
item_iter = get_form_items(form_text, "8-K")
items_8K.loc[i] = items_to_df_row(item_iter, columns_8K, "8-K")
items_8K.rename(columns=header_mappings_8K, inplace=True)
df_8K = pd.merge(df_8K, items_8K, left_index=True, right_index=True)
df1 = df_8K.copy()
df1 = df1.mask(df1.apply(lambda x: x.str.len().lt(1)))
df1
```
## Summary table of section counts
```
df1 = df1.groupby("ticker").count()
df1[df1.columns[5:]]
```
## NLP scoring of the 10-K forms for specific sections
Financial text has been scored using word lists for some time. See the paper ["Textual Analysis in Finance"](https://www.annualreviews.org/doi/abs/10.1146/annurev-financial-012820-032249) for a comprehensive review.
The `smjsindustry` library provides 11 NLP score types by default: `positive`, `negative`, `litigious`, `polarity`, `risk`, `readability`, `fraud`, `safe`, `certainty`, `uncertainty`, and `sentiment`. Each score (except readability and sentiment) has its own word list, which is used for scanning and matching with an input text dataset.
NLP scoring delivers a score as the fraction of words in a document that are in the relevant scoring word lists. Users can provide their own custom word list to calculate the NLP scores. Some scores like readability use standard formulae such as the Gunning-Fog score. Sentiment scores are based on the [VADER](https://pypi.org/project/vaderSentiment/) library.
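For reference, the Gunning-Fog index is conventionally computed as 0.4 × (average sentence length + percentage of complex words), where complex words have three or more syllables. The sketch below shows that formula alongside a VADER sentiment call; the crude syllable counter is an assumption for illustration only, the library's internal implementation may differ, and the example assumes the `vaderSentiment` package is installed.
```
import re
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

def rough_syllables(word):
    # Crude vowel-group count, used only for this illustration.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def gunning_fog(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if rough_syllables(w) >= 3]
    avg_sentence_len = len(words) / max(1, len(sentences))
    pct_complex = 100.0 * len(complex_words) / max(1, len(words))
    return 0.4 * (avg_sentence_len + pct_complex)

text = "Revenue declined. Management anticipates considerable regulatory uncertainty."
print(gunning_fog(text))
print(SentimentIntensityAnalyzer().polarity_scores(text)["compound"])
```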
These NLP scores are added as new numerical columns to the text dataframe; this creates a multimodal dataframe, which is a mixture of tabular data and longform text, called **TabText**. When submitting this multimodal dataframe for ML, it is a good idea to normalize the columns of NLP scores (usually with standard normalization or min-max scaling).
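As a sketch of that normalization step, the snippet below min-max scales the NLP score columns of a small TabText-style dataframe; the column names and values are hypothetical placeholders.
```
import pandas as pd

# Hypothetical TabText frame: tabular fields, long-form text, and NLP score columns.
tabtext = pd.DataFrame({
    "ticker": ["amzn", "goog", "msft"],
    "text2score": ["...", "...", "..."],
    "risk": [0.012, 0.030, 0.021],
    "positivity": [0.040, 0.025, 0.033],
})

score_cols = ["risk", "positivity"]
mins, maxs = tabtext[score_cols].min(), tabtext[score_cols].max()
tabtext[score_cols] = (tabtext[score_cols] - mins) / (maxs - mins)
print(tabtext[score_cols])
```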
Any chosen text column can be scored automatically using the tools in SageMaker JumpStart. We demonstrate this below.
As an example, we combine the MD&A section (Item 7) and the Risk section (Item 7A), and then apply NLP scoring. We compute 11 additional columns for various types of scores.
Since the size of the SEC filings text can be very large, NLP scoring is computationally time-consuming, so we have built the API to enable distribution of this task across multiple machines. In the API, users can choose the number and type of machine instances they want to run NLP scoring on in distributed fashion.
To begin, earmark the text for NLP scoring by creating a new column that combines two columns into a single column called `text2score`. A new file is saved in the Amazon S3 bucket.
```
df_10K["text2score"] = [
i + " " + j
for i, j in zip(
df_10K[
"Management’s Discussion and Analysis of Financial Condition and Results of Operations"
],
df_10K["Quantitative and Qualitative Disclosures about Market Risk"],
)
]
df_10K[["ticker", "text2score"]].to_csv("text2score.csv", index=False)
client.upload_file(
"text2score.csv",
bucket,
"{}/{}/{}".format(secdashboard_processed_folder, "output", "text2score.csv"),
)
```
**Technical notes**:
1. The NLPScorer sends SageMaker processing job requests to processing containers. It might take a few minutes to spin up a processing container. The actual scoring starts after the initial spin-up.
2. You are not charged for the waiting time used for the initial spin-up.
3. You can run processing jobs in multiple instances.
4. The name of the processing job is shown in the runtime log.
5. You can also access the processing job from the [SageMaker console](https://console.aws.amazon.com/sagemaker). On the left navigation pane, choose Processing, Processing job.
6. NLP scoring can be slow for massive documents such as SEC filings, which contain anywhere from 20,000-100,000 words. Matching to word lists (usually 200 words or more) can be time-consuming.
7. VPC mode is supported in this API.
**Input**
The input to the API specifies (i) which NLP scores to generate, each one resulting in a new column in the dataframe; (ii) the system resources, i.e., the number and type of machine instances to be used; (iii) the S3 bucket and filename in which to store the enhanced dataframe as a CSV file; and (iv) the call that kicks off the processing job.
**Output**
The output filename used in the example below is `all_scores.csv`, but you can change this to any other filename. It's stored in the S3 bucket and then, as shown in the following code, we copy it into Studio here to process it into a dashboard.
```
%%time
import smjsindustry
from smjsindustry import NLPScoreType, NLPSCORE_NO_WORD_LIST
from smjsindustry import NLPScorer
from smjsindustry import NLPScorerConfig
score_type_list = list(
NLPScoreType(score_type, [])
for score_type in NLPScoreType.DEFAULT_SCORE_TYPES
if score_type not in NLPSCORE_NO_WORD_LIST
)
score_type_list.extend([NLPScoreType(score_type, None) for score_type in NLPSCORE_NO_WORD_LIST])
nlp_scorer_config = NLPScorerConfig(score_type_list)
nlp_score_processor = NLPScorer(
role=role, # loading job execution role
instance_count=1, # instances number, limit varies with instance type
instance_type="ml.c5.9xlarge", # ec2 instance type to run the loading job
volume_size_in_gb=30, # size in GB of the EBS volume to use
volume_kms_key=None, # KMS key ID to encrypt the processing volume
output_kms_key=None, # KMS key ID to encrypt processing job outputs
max_runtime_in_seconds=None, # timeout in seconds. Default is 24 hours.
sagemaker_session=session, # session object
tags=None,
) # a list of key-value pairs
nlp_score_processor.calculate(
nlp_scorer_config,
"text2score", # input column
"s3://{}/{}/{}/{}".format(
bucket, secdashboard_processed_folder, "output", "text2score.csv"
), # input from s3 bucket
"s3://{}/{}/{}".format(
bucket, secdashboard_processed_folder, "output"
), # output s3 prefix (both bucket and folder names are required)
"all_scores.csv", # output file name
)
client.download_file(
bucket,
"{}/{}/{}".format(secdashboard_processed_folder, "output", "all_scores.csv"),
"all_scores.csv",
)
```
## Stock Screener based on NLP scores
Once we have added columns for all the NLP scores, we can then screen the table for companies with high scores on any of the attributes. See the table below.
```
qdf = pd.read_csv("all_scores.csv")
qdf.head()
```
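For example, once the score columns are present, a simple threshold filter turns the table into a screener. The column names used below (`risk`, `uncertainty`) are assumptions about how the default score types appear in `all_scores.csv`; adjust them to the actual columns in your output.
```
# Keep filings whose risk or uncertainty scores fall in the top quintile.
risk_cut = qdf["risk"].quantile(0.8)
uncertainty_cut = qdf["uncertainty"].quantile(0.8)
screened = qdf[(qdf["risk"] >= risk_cut) | (qdf["uncertainty"] >= uncertainty_cut)]
screened[["ticker", "risk", "uncertainty"]].sort_values("risk", ascending=False)
```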
## Add a column with summaries of the text being scored
We can further enhance the dataframe with summaries of the target text column. As an example, we used the abstractive summarizer from Hugging Face. Since this summarizer can only accommodate roughly 300 words of text, it's not directly applicable to our text, which is much longer (thousands of words). Therefore, we applied the Hugging Face summarizer to groups of paragraphs and pulled it all together to make a single summary. We created a helper function `fullSummary` that is called in the code below to create a summary of each document in the column `text2score`.
Notice that the output dataframe is now extended with an additional summary column.
*Note*: An abstractive summarizer restructures the text and loses the original sentences. This is in contrast to an extractive summarizer, which retains the original sentence structure.
Summarization is time-consuming, so this code block takes a while to run. To illustrate, we summarize only the first five documents in the `text2score` column.
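The `fullSummary` helper is provided by the scripts loaded earlier. As a rough sketch of the chunk-then-summarize idea it implements, the function below splits a long text into roughly 300-word chunks, summarizes each with a Hugging Face pipeline, and concatenates the pieces. This is an assumption about the general approach, not the helper's actual code, and it assumes the `transformers` package and a default summarization model are available.
```
from transformers import pipeline

summarizer = pipeline("summarization")  # downloads a default summarization model

def rough_full_summary(text, chunk_words=300):
    """Summarize a long document by summarizing ~300-word chunks and joining the results."""
    words = text.split()
    chunks = [" ".join(words[i:i + chunk_words]) for i in range(0, len(words), chunk_words)]
    pieces = []
    for chunk in chunks:
        out = summarizer(chunk, max_length=80, min_length=20, do_sample=False)
        pieces.append(out[0]["summary_text"])
    return " ".join(pieces)
```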
```
%%time
qdf["summary"] = ""
for i in range(5):
qdf.loc[i, "summary"] = fullSummary(qdf.loc[i, "text2score"])
print(i, end="..")
```
Examine one of the summaries.
```
i = 2
print(qdf.summary[i])
print("---------------")
print(qdf.text2score[i])
```
#### Store the curated dataset
```
qsf = qdf.drop(["text2score"], axis=1)
qsf.to_csv("stock_sec_scores.csv", index=False)
```
To complete this example notebook, we provide two artifacts that may be included in a dashboard:
1. Creating an interactive datatable so that a non-technical user may sort and filter the rows of the curated dataframe.
2. Visualizing the differences in documents by NLP scores using radar plots.
These are shown next.
## Create an interactive dashboard
Using the generated CSV file, you can construct an interactive screening dashboard.
The dashboard is constructed by running an R script. All you need is the single block of code below; it creates a browser-enabled interactive data table and saves it in a file titled `SEC_Dashboard.html`, which you can open in a browser.
### Install `R`
```
!sudo yum -y install R
import subprocess
ret_code = subprocess.call(["/usr/bin/Rscript", "sec-dashboard/Dashboard.R"])
```
After the notebook finishes running, open the `SEC_Dashboard.html` file that was created. You might need to click `Trust HTML` in the upper left corner to see the filterable table and the content of it. The following screenshot shows an example of the filterable table.
```
from IPython.display import Image
Image("sec-dashboard/dashboard.png", width=800, height=600)
```
## Visualizing the text through the NLP scores
The following visualization function shows how to create a *radar plot* to compare two SEC filings using their normalized NLP scores. The scores are normalized using min-max scaling on each NLP score. The radar plot is useful because it shows the overlap (and consequently, the difference) between the documents.
```
## Read in the scores
scores = pd.read_csv("stock_sec_scores.csv")
# Choose whichever filings you want to compare for the 2nd and 3rd parameter
createRadarChart(scores, 2, 5)
```
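`createRadarChart` comes from the helper scripts loaded earlier. A minimal matplotlib version of the same idea is sketched below; it assumes `scores` contains a `ticker` column and numeric NLP score columns such as `risk` and `positive`, so adjust the column list to your actual data.
```
import numpy as np
import matplotlib.pyplot as plt

def simple_radar(scores, row_a, row_b,
                 cols=("positive", "negative", "risk", "uncertainty", "litigious")):
    cols = list(cols)
    # Min-max normalize each NLP score column so the two filings are comparable.
    rng = (scores[cols].max() - scores[cols].min()).replace(0, 1)
    norm = (scores[cols] - scores[cols].min()) / rng
    angles = np.linspace(0, 2 * np.pi, len(cols), endpoint=False).tolist()
    angles += angles[:1]  # repeat the first angle to close the polygon
    fig, ax = plt.subplots(subplot_kw={"polar": True})
    for row in (row_a, row_b):
        values = norm.iloc[row].tolist()
        values += values[:1]
        ax.plot(angles, values, label=scores["ticker"].iloc[row])
        ax.fill(angles, values, alpha=0.2)
    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(cols)
    ax.legend(loc="upper right")
    plt.show()

simple_radar(scores, 2, 5)  # compare the same two filings as above
```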
## Further support
The [SEC filings retrieval API operations](https://sagemaker-jumpstart-industry-pack.readthedocs.io/en/latest/smjsindustry.finance.data_loader.html) we introduced at the beginning of this example notebook also download and parse other SEC forms, such as 495, 497, 497K, S-3ASR, and N-1A. If you need further support for any other types of finance documents, reach out to the SageMaker JumpStart team through [AWS Support](https://console.aws.amazon.com/support/) or [AWS Developer Forums for Amazon SageMaker](https://forums.aws.amazon.com/forum.jspa?forumID=285).
## References
1. [What’s New post](https://aws.amazon.com/about-aws/whats-new/2021/09/amazon-sagemaker-jumpstart-multimodal-financial-analysis-tools/)
2. Blogs:
* [Use SEC text for ratings classification using multimodal ML in Amazon SageMaker JumpStart](https://aws.amazon.com/blogs/machine-learning/use-sec-text-for-ratings-classification-using-multimodal-ml-in-amazon-sagemaker-jumpstart/)
* [Use pre-trained financial language models for transfer learning in Amazon SageMaker JumpStart](https://aws.amazon.com/blogs/machine-learning/use-pre-trained-financial-language-models-for-transfer-learning-in-amazon-sagemaker-jumpstart/)
3. Documentation and links to the SageMaker JumpStart Industry Python SDK:
* ReadTheDocs: https://sagemaker-jumpstart-industry-pack.readthedocs.io/en/latest/index.html
* PyPI: https://pypi.org/project/smjsindustry/
* GitHub Repository: https://github.com/aws/sagemaker-jumpstart-industry-pack/
* Official SageMaker Developer Guide: https://docs.aws.amazon.com/sagemaker/latest/dg/studio-jumpstart-industry.html
## Licenses
The SageMaker JumpStart Industry product and its related materials are under the [Legal License Terms](https://jumpstart-cache-prod-us-east-1.s3.us-east-1.amazonaws.com/smfinance-notebook-dependency/licenses.txt).
>**<span style="color:RED">Important</span>**:
>(1) This notebook is for demonstrative purposes only. It is not financial advice and should not be relied on as financial or investment advice. (2) This notebook uses data obtained from the SEC EDGAR database. You are responsible for complying with EDGAR’s [access terms and conditions](https://www.sec.gov/os/accessing-edgar-data).
This notebook utilizes certain third-party open source software packages at install-time or run-time (“External Dependencies”) that are subject to copyleft license terms you must accept in order to use it. If you do not accept all the applicable license terms, you should not use the notebook. We recommend that you consult your company’s open source approval policy before proceeding.
Provided below is a list of External Dependencies and the applicable license identification as indicated by the documentation associated with the External Dependencies as of Amazon’s most recent review.
- R v3.5.2: GPLv3 license (https://www.gnu.org/licenses/gpl-3.0.html)
- DT v0.19.1: GPLv3 license (https://github.com/rstudio/DT/blob/master/LICENSE)
THIS INFORMATION IS PROVIDED FOR CONVENIENCE ONLY. AMAZON DOES NOT PROMISE THAT
THE LIST OR THE APPLICABLE TERMS AND CONDITIONS ARE COMPLETE, ACCURATE, OR
UP-TO-DATE, AND AMAZON WILL HAVE NO LIABILITY FOR ANY INACCURACIES. YOU SHOULD
CONSULT THE DOWNLOAD SITES FOR THE EXTERNAL DEPENDENCIES FOR THE MOST COMPLETE
AND UP-TO-DATE LICENSING INFORMATION.
YOUR USE OF THE EXTERNAL DEPENDENCIES IS AT YOUR SOLE RISK. IN NO EVENT WILL
AMAZON BE LIABLE FOR ANY DAMAGES, INCLUDING WITHOUT LIMITATION ANY DIRECT,
INDIRECT, CONSEQUENTIAL, SPECIAL, INCIDENTAL, OR PUNITIVE DAMAGES (INCLUDING
FOR ANY LOSS OF GOODWILL, BUSINESS INTERRUPTION, LOST PROFITS OR DATA, OR
COMPUTER FAILURE OR MALFUNCTION) ARISING FROM OR RELATING TO THE EXTERNAL
DEPENDENCIES, HOWEVER CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY, EVEN
IF AMAZON HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. THESE LIMITATIONS
AND DISCLAIMERS APPLY EXCEPT TO THE EXTENT PROHIBITED BY APPLICABLE LAW.
```
# default_exp download
#exports
import os
import pandas as pd
import typer
import textwrap
import requests
from bs4 import BeautifulSoup as bs
#exports
def get_every_noise_canvas(everynoise_url='https://everynoise.com/'):
r = requests.get(everynoise_url)
soup = bs(r.text, features='lxml')
canvases = soup.find_all('div', attrs={'class': 'canvas'})
assert len(canvases) == 1, 'expected exactly one canvas div on the Every Noise page'
canvas = canvases[0]
return canvas
canvas = get_every_noise_canvas()
#exports
extract_style_elems = lambda genre_elem: {
style_elem.split(': ')[0].strip(): style_elem.split(': ')[1].replace('px', '')
for style_elem
in genre_elem['style'].split(';')
}
def extract_canvas_height_width(canvas):
canvas_style_elems = extract_style_elems(canvas)
canvas_height = int(canvas_style_elems['height'])
canvas_width = int(canvas_style_elems['width'])
return canvas_height, canvas_width
canvas_height, canvas_width = extract_canvas_height_width(canvas)
canvas_height, canvas_width
genre_elem = canvas.find('div')
genre_elem
#exports
genre_elem_to_name = lambda genre_elem: genre_elem.text.replace('» ', '')
genre_elem_to_name(genre_elem)
genre_style_elems = extract_style_elems(genre_elem)
genre_style_elems
#exports
def get_genre_xy(genre_style_elems, canvas_width, canvas_height):
x = int(genre_style_elems['left'].replace('px', ''))
y = canvas_height - int(genre_style_elems['top'].replace('px', ''))
return x, y
x, y = get_genre_xy(genre_style_elems, canvas_width, canvas_height)
x, y
#exports
def extract_genre_attrs(genre_elem, canvas_width, canvas_height):
genre_attrs = {}
genre_style_elems = extract_style_elems(genre_elem)
genre_attrs['genre'] = genre_elem_to_name(genre_elem)
genre_attrs['x'], genre_attrs['y'] = get_genre_xy(genre_style_elems, canvas_width, canvas_height)
genre_attrs['hex_colour'] = genre_style_elems['color']
return genre_attrs
genre_attrs = extract_genre_attrs(genre_elem, canvas_width, canvas_height)
genre_attrs
#exports
def get_df_genre_attrs(everynoise_url='https://everynoise.com/'):
canvas = get_every_noise_canvas(everynoise_url=everynoise_url)
canvas_height, canvas_width = extract_canvas_height_width(canvas)
genre_elems = canvas.find_all('div')
all_genre_attrs = []
for genre_elem in genre_elems:
genre_attrs = extract_genre_attrs(genre_elem, canvas_width, canvas_height)
all_genre_attrs += [genre_attrs]
df_genre_attrs = pd.DataFrame(all_genre_attrs)
return df_genre_attrs
df_genre_attrs = get_df_genre_attrs()
df_genre_attrs.head()
#exports
app = typer.Typer()
#exports
@app.command()
def download_genre_attrs(fp='data/genre_attrs.csv'):
# Create the output directory if it does not already exist
os.makedirs(os.path.dirname(fp) or '.', exist_ok=True)
df_genre_attrs = get_df_genre_attrs()
df_genre_attrs.to_csv(fp, index=False)
return
fp = '../data/genre_attrs.csv'
download_genre_attrs(fp)
#exports
if __name__ == '__main__' and '__file__' in globals():
app()
#hide
from nbdev.export import notebook2script
notebook2script('01-library-gen.ipynb')
```
```
Project: Project 2: Luther
Date: 02/03/2017
Name: Prashant Tatineni
```
# Project Overview
For Project Luther, I gathered the set of all films listed under movie franchises on boxofficemojo.com. My goal was to predict the success of a movie sequel (i.e., domestic gross in USD) based on the performance of other sequels, and especially based on previous films in that particular franchise. I saw some linear correlation between certain variables, like number of theaters, and the total domestic gross, but the predictions from my final model were not entirely reasonable. More time could be spent on better addressing the various outliers in the dataset.
# Summary of Solution Steps
1. Retrieve data from boxofficemojo.com.
2. Clean up data and reduce to a set of predictor variables, with "Adjusted Gross" as the target for prediction.
3. Run Linear Regression model.
4. Review model performance.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.display import Image
import requests
from bs4 import BeautifulSoup
import dateutil.parser
import statsmodels.api as sm
import patsy
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
import sys, sklearn
from sklearn import linear_model, preprocessing
from sklearn import metrics
%matplotlib inline
```
## Step 1
I started with the "Franchises" list on Boxofficemojo.com. Within each franchise page, I scraped each movie's information and entered it into a Python dictionary. If a movie is already in the dictionary, its entry is overwritten, now tagged with a different franchise name. But note below that the URL for the "Franchises" list is sorted ascending by number of movies, so this conveniently rolls "subfranchises" into their "parent" franchise.
E.g., "Fantastic Beasts" and the "Harry Potter" movies have their own separate Franchises, but they will all be tagged as the "JKRowling" franchise, i.e. "./chart/?id=jkrowling.htm"
Also, because I was comparing sequels to their predecessors, I focused on Domestic Gross, adjusted for ticket price inflation.
```
url = 'http://www.boxofficemojo.com/franchises/?view=Franchise&sort=nummovies&order=ASC&p=.htm'
response = requests.get(url)
page = response.text
soup = BeautifulSoup(page,"lxml")
tables = soup.find_all("table")
rows = [row for row in tables[3].find_all('tr')]
rows = rows[1:]
# Initialize empty dictionary of movies
movies = {}
for row in rows:
items = row.find_all('td')
franchise = items[0].find('a')['href']
franchiseurl = 'http://www.boxofficemojo.com/franchises/' + franchise[2:]
response = requests.get(franchiseurl)
franchise_page = response.text
franchise_soup = BeautifulSoup(franchise_page,"lxml")
franchise_tables = franchise_soup.find_all("table")
franchise_gross = [row for row in franchise_tables[4].find_all('tr')]
franchise_gross = franchise_gross[1:len(franchise_gross)-2]
franchise_adjgross = [row for row in franchise_tables[5].find_all('tr')]
franchise_adjgross = franchise_adjgross[1:len(franchise_adjgross)-2]
# Assign movieurl as key
# Add title, franchise, inflation-adjusted gross, release date.
for row in franchise_adjgross:
movie_info = row.find_all('td')
movieurl = movie_info[1].find('a')['href']
title = movie_info[1]
adjgross = movie_info[3]
release = movie_info[5]
movies[movieurl] = [title.text]
movies[movieurl].append(franchise)
movies[movieurl].append(adjgross.text)
movies[movieurl].append(release.text)
# Add number of theaters for the above movies
for row in franchise_gross:
movie_info = row.find_all('td')
movieurl = movie_info[1].find('a')['href']
theaters = movie_info[4]
if movieurl in movies.keys():
movies[movieurl].append(theaters.text)
df = pd.DataFrame(movies.values())
df.columns = ['Title','Franchise', 'AdjGross', 'Release', 'Theaters']
df.head()
df.shape
```
## Step 2
Clean up data.
```
# Remove movies that were re-issues, special editions, or separate 3D or IMAX versions.
df['Ignore'] = df['Title'].apply(lambda x: 're-issue' in x.lower() or 're-release' in x.lower() or 'special edition' in x.lower() or '3d)' in x.lower() or 'imax' in x.lower())
df = df[(df.Ignore == False)]
del df['Ignore']
df.shape
# Convert Adjusted Gross to a number
df['AdjGross'] = df['AdjGross'].apply(lambda x: int(x.replace('$','').replace(',','')))
# Convert Date string to dateobject. Need to prepend '19' for dates > 17 because Python treats '/60' as year '2060'
df['Release'] = df['Release'].apply(lambda x: (x[:-2] + '19' + x[-2:]) if int(x[-2:]) > 17 else x)
df['Release'] = df['Release'].apply(lambda x: dateutil.parser.parse(x))
```
The films need to be grouped by franchise so that franchise-related data can be included as featured for each observation.
- The Average Adjusted Gross of all previous films in the franchise
- The Adjusted Gross of the very first film in the franchise
- The Release Date of the previous film in the franchise
- The Release Date of the very first film in the franchise
- The Series Number of the film in that franchise
-- I considered using the film's number in the franchise as a rank value that could be split into indicator variables, but it is more useful as a numeric value: together with "PrevAvgGross" it recovers the total gross accrued by the franchise's earlier films, as spelled out just below.
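For example, the total gross accrued by a franchise's earlier films can be recovered exactly from these two features:

$$\text{CumGross} - \text{AdjGross} = \text{PrevAvgGross} \times (\text{SeriesNum} - 1),$$

which is the relationship the next cell inverts to compute "PrevAvgGross".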
```
df = df.sort_values(['Franchise','Release'])
df['CumGross'] = df.groupby(['Franchise'])['AdjGross'].apply(lambda x: x.cumsum())
df['SeriesNum'] = df.groupby(['Franchise'])['Release'].apply(lambda x: x.rank())
df['PrevAvgGross'] = (df['CumGross'] - df['AdjGross'])/(df['SeriesNum'] - 1)
```
- Number of Theaters in which the film showed
-- Where this number was unavailable, replaced '-' with 0; the 0 will later be replaced with the mean number of theaters for the other films in the same franchise. I chose the average as a reasonable estimate.
```
df.Theaters = df.Theaters.replace('-','0')
df['Theaters'] = df['Theaters'].apply(lambda x: int(x.replace(',','')))
df['PrevRelease'] = df['Release'].shift()
# Create a second dataframe with franchise group-related information.
df_group = pd.DataFrame(df.groupby(['Franchise'])['Title'].apply(lambda x: x.count()))
df_group['FirstGross'] = df.groupby(['Franchise'])['AdjGross'].first()
df_group['FirstRelease'] = df.groupby(['Franchise'])['Release'].first()
df_group['SumTheaters'] = df.groupby(['Franchise'])['Theaters'].apply(lambda x: x.sum())
df_group.columns = ['NumOfFilms','FirstGross','FirstRelease','SumTheaters']
df_group['AvgTheaters'] = df_group['SumTheaters']/df_group['NumOfFilms']
df_group['Franchise'] = df.groupby(['Franchise'])['Franchise'].first()
df = df.merge(df_group, on='Franchise')
df.head()
df['Theaters'] = df.Theaters.replace(0,df.AvgTheaters)
# Drop rows with NaN. Drops all first films, but I've already stored first film information within other features.
df = df.dropna()
df.shape
df['DaysSinceFirstFilm'] = df.Release - df.FirstRelease
df['DaysSinceFirstFilm'] = df['DaysSinceFirstFilm'].apply(lambda x: x.days)
df['DaysSincePrevFilm'] = df.Release - df.PrevRelease
df['DaysSincePrevFilm'] = df['DaysSincePrevFilm'].apply(lambda x: x.days)
df.sort_values('Release',ascending=False).head()
```
For the regression model, I decided to keep data for films released through 2016, but drop the 3 films released this year; because of their recent release date, their gross earnings will not yet be representative.
```
films17 = df.loc[[530,712,676]]
# Grabbing columns for regression model and dropping 2017 films
dfreg = df[['AdjGross','Theaters','SeriesNum','PrevAvgGross','FirstGross','DaysSinceFirstFilm','DaysSincePrevFilm']]
dfreg = dfreg.drop([530,712,676])
dfreg.shape
```
## Step 3
Apply Linear Regression.
```
dfreg.corr()
sns.pairplot(dfreg);
sns.regplot((dfreg.PrevAvgGross), (dfreg.AdjGross));
sns.regplot(np.log(dfreg.Theaters), np.log(dfreg.AdjGross));
```
In the pairplot we can see that 'AdjGross' has some correlation with the other variables, particularly 'Theaters' and 'PrevAvgGross'. However, it looks like a polynomial model, a natural log, or some other transformation will be required before fitting a linear model.
```
y, X = patsy.dmatrices('AdjGross ~ Theaters + SeriesNum + PrevAvgGross + FirstGross + DaysSinceFirstFilm + DaysSincePrevFilm', data=dfreg, return_type="dataframe")
```
### First try: Initial linear regression model with statsmodels
```
model = sm.OLS(y, X)
fit = model.fit()
fit.summary()
fit.resid.plot(style='o');
```
### Try Polynomial Regression
```
polyX=PolynomialFeatures(2).fit_transform(X)
polymodel = sm.OLS(y, polyX)
polyfit = polymodel.fit()
polyfit.rsquared
polyfit.resid.plot(style='o');
polyfit.rsquared_adj
```
### Heteroskedasticity
The polynomial regression improved the adjusted R-squared and the residual plot, but there are still issues with other statistics, including skew. It's worth running the Breusch-Pagan test:
```
hetnames = ['Lagrange multiplier statistic', 'p-val', 'f-val', 'f p-val']
hettest = sm.stats.diagnostic.het_breushpagan(fit.resid, fit.model.exog)
zip(hetnames,hettest)
hetnames = ['Lagrange multiplier statistic', 'p-val', 'f-val', 'f p-val']
hettest = sm.stats.diagnostic.het_breushpagan(polyfit.resid, fit.model.exog)
zip(hetnames,hettest)
```
### Apply Box-Cox Transformation
As seen above, the p-values were very low, suggesting the residuals are indeed heteroskedastic. To address this, we can apply a Box-Cox transformation to the features.
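For reference, the Box-Cox transform applied below is

$$ y^{(\lambda)} = \begin{cases} \dfrac{y^{\lambda} - 1}{\lambda}, & \lambda \neq 0,\\[4pt] \ln y, & \lambda = 0, \end{cases} $$

where `scipy.stats.boxcox` chooses $\lambda$ for each column by maximum likelihood and requires strictly positive inputs.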
```
import scipy.stats  # scipy is needed for the Box-Cox transform but was not imported above
dfPolyX = pd.DataFrame(polyX)
bcPolyX = pd.DataFrame()
for i in range(dfPolyX.shape[1]):
bcPolyX[i] = scipy.stats.boxcox(dfPolyX[i])[0]
# Transformed data with Box-Cox:
bcPolyX.head()
# Introduce log(y) for target variable:
y = y.reset_index(drop=True)
logy = np.log(y)
```
### Try Polynomial Regression again with Log Y and Box-Cox transformed X
```
logPolyModel = sm.OLS(logy, bcPolyX)
logPolyFit = logPolyModel.fit()
logPolyFit.rsquared_adj
```
### Apply Regularization using Elastic Net to optimize this model.
```
X_scaled = preprocessing.scale(bcPolyX)
en_cv = linear_model.ElasticNetCV(cv=10, normalize=False)
en_cv.fit(X_scaled, logy)
en_cv.coef_
logy_en = en_cv.predict(X_scaled)
mse = metrics.mean_squared_error(logy, logy_en)
# The mean square error for this model
mse
plt.scatter([x for x in range(540)],(pd.DataFrame(logy_en)[0] - logy['AdjGross']));
```
## Step 4
As seen above, Polynomial Regression with Elastic Net produces a model with several nonzero coefficients for the given features. I decided to try testing this model on the three new sequels for 2017.
```
films17
df17 = films17[['AdjGross','Theaters','SeriesNum','PrevAvgGross','FirstGross','DaysSinceFirstFilm','DaysSincePrevFilm']]
y17, X17 = patsy.dmatrices('AdjGross ~ Theaters + SeriesNum + PrevAvgGross + FirstGross + DaysSinceFirstFilm + DaysSincePrevFilm', data=df17, return_type="dataframe")
polyX17 = PolynomialFeatures(2).fit_transform(X17)
dfPolyX17 = pd.DataFrame(polyX17)
bcPolyX17 = pd.DataFrame()
for i in range(dfPolyX17.shape[1]):
bcPolyX17[i] = scipy.stats.boxcox(dfPolyX17[i])[0]
X17_scaled = preprocessing.scale(bcPolyX17)
# Run the "en_cv" model from above on the 2017 data:
logy_en_2017 = en_cv.predict(X17_scaled)
# Predicted Adjusted Gross:
pd.DataFrame(np.exp(logy_en_2017))
# Adjusted Gross as of 2/1:
y17
```
## Coupled Neuron Models
#### Nonlinear Oscillators (Coupled Neuron Models)
* Fitzhugh-Nagumo
* Morris-Lecar
* Hindmarsh-Rose
* Hodgkin-Huxley
```
# Imports
import tensorflow as tf
import numpy as np
from numpy.fft import fft, fftfreq, rfft, fftshift
import matplotlib.pyplot as plt
import logging
import Helper as hp
```
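The `Helper` module imported above as `hp` is not included in this notebook. The sketch below is a hypothetical stand-in for `hp.generate_tensorflowsession`, assuming TensorFlow 2.x eager execution and a fixed-step RK4 integrator; the real helper may integrate the ODEs differently (its name suggests a TF1 session), so treat this only as an illustration of the expected interface: it takes the coupled ODE right-hand side and the initial conditions and returns one time series per state variable.
```
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for Helper.generate_tensorflowsession (assumes TF 2.x eager mode).
# Integrates d(state)/dt = equation(state, t) with fixed-step RK4 and returns a list of
# 1-D NumPy arrays, one per state variable.
def generate_tensorflowsession_sketch(equation, inits, tfinal=100, dt=0.02):
    state = tf.constant(inits, dtype=tf.float64)
    n_steps = int(tfinal / dt)
    trajectory = np.empty((n_steps, len(inits)))
    t = 0.0
    for step in range(n_steps):
        k1 = equation(state, t)
        k2 = equation(state + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = equation(state + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = equation(state + dt * k3, t + dt)
        state = state + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        trajectory[step] = state.numpy()
        t += dt
    return [trajectory[:, i] for i in range(trajectory.shape[1])]
```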
### Asynchronous Weak Coupling
#### Coupled Fitzhugh-Nagumo
```
a = 0.75
b = 0.8
c = 3
i = -0.6
k1 = 0.03
k2 = 0.3
inits = [0.1, 0.1, 0.1, 0.1]
# Coupling method: Numerical bifurcation analysis of two coupled Fitzhugh-Nagumo oscillators
# Anderson Hoff, Juliana V. dos Santos, Cesar Manchein, Holokx A. Albuquerque
def coupled_fitzhughnagumo_equation(state, t):
v, w, v1, w1 = tf.unstack(state)
dv = c*(v + w - (v**3/3) + i) + k1*(v - v1)
dw = -1/c * (v - a + b*w)
dv1 = c*(v1 + w1 - (v1**3/3)) + k2*(v1 - v)
dw1 = -1/c * (v1 - a + b*w1)
return tf.stack([dv, dw, dv1, dw1])
v, w, v1, w1 = hp.generate_tensorflowsession(coupled_fitzhughnagumo_equation, inits, tfinal=100)
plt.plot(v)
plt.plot(v1)
```
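With the two voltage traces in hand, a simple way to quantify how far the cells are from synchrony is the zero-lag correlation between them. This sketch assumes `v` and `v1` are the 1-D arrays plotted above; values near 1 indicate near-synchronous traces, while values near 0 indicate asynchrony:
```
# Zero-lag correlation between the two membrane-potential traces
v_c = np.asarray(v) - np.mean(v)
v1_c = np.asarray(v1) - np.mean(v1)
corr = np.dot(v_c, v1_c) / (np.linalg.norm(v_c) * np.linalg.norm(v1_c))
print("zero-lag correlation between v and v1:", corr)
```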
#### Coupled Morris-Lecar
```
vk = -84
gk = 8
vca = 130
gca = 4.4
vl = -60
gl = 2
phi = 0.04
v1 = -1.2
v2 = 18
v3 = 2
v4 = 30
iapp = 80
k1 = 0.3
k2 = 0.3
inits = [0.1, 0.1, 0.11, 0.1]
def coupled_morrislecar_equation(state, t):
v, n, v0, n0 = tf.unstack(state)
dv = (-gca*(0.5*(1 + tf.tanh((v - v1)/v2)))*(v - vca) - gk*n*(v - vk) - gl*(v - vl) + iapp + k1*(v - v0))
dn = (phi*((0.5*(1 + tf.tanh((v - v3)/v4))) - n))/(1/tf.cosh((v - v3)/(2*v4)))
dv0 = (-gca*(0.5*(1 + tf.tanh((v0 - v1)/v2)))*(v0 - vca) - gk*n0*(v0 - vk) - gl*(v0 - vl) + iapp + k2*(v0 - v))
dn0 = (phi*((0.5*(1 + tf.tanh((v0 - v3)/v4))) - n0))/(1/tf.cosh((v0 - v3)/(2*v4)))
return tf.stack([dv, dn, dv0, dn0])
v, n, v0, n0 = hp.generate_tensorflowsession(coupled_morrislecar_equation, inits, tfinal=500)
plt.plot(v)
plt.plot(v0)
```
#### Coupled Hindmarsh-Rose
```
a = 1.0
b = 3.0
c = 1.0
d = 5.0
r = 0.006
s = 4.0
i = 1.6
xnot = -1.5
k1 = 0.3
k2 = 0.3
inits = [0.1, 0.1, 0.1, 0.11, 0.1, 0.1]
def coupled_hindmarshrose_equation(state, t):
x, y, z, x1, y1, z1 = tf.unstack(state)
dx = (y - a*(x**3) + (b*(x**2)) - z + i) + k1*(x - x1)
dy = c - d*(x**2) - y
dz = r*(s*(x - xnot) - z)
dx1 = (y1 - a*(x1**3) + (b*(x1**2)) - z1 + i) + k2*(x1 - x)
dy1 = c - d*(x1**2) - y1
dz1 = r*(s*(x1 - xnot) - z1)
return tf.stack([dx, dy, dz, dx1, dy1, dz1])
x, y, z, x1, y1, z1 = hp.generate_tensorflowsession(coupled_hindmarshrose_equation, inits, tfinal =100)
plt.plot(x)
plt.plot(x1)
```
#### Coupled Hodgkin-Huxley
```
g_K = 36
g_Na = 120
g_L = 0.3
E_K = 12
E_Na = -115
E_L = -10.613
C_m = 1
I = -10
k1 = 0.3
k2 = 0.3
inits = [0.1, 0.1, 0.1, 0.1, 0.11, 0.1, 0.1, 0.1]
def coupled_hodgkinhuxley_equation(state, t):
i, n, m, h, i1, n1, m1, h1 = tf.unstack(state)
# Alpha and beta functions for channel activation functions
alpha_n = (0.01*(i + 10))/(tf.exp((i + 10)/10) - 1)
beta_n = 0.125* tf.exp(i/80)
alpha_m = (0.1*(i + 25))/(tf.exp((i + 25)/10) - 1)
beta_m = 4*tf.exp(i/18)
alpha_h = (0.07*tf.exp(i/20))
beta_h = 1/(tf.exp((i + 30)/10) + 1)
alpha_n1 = (0.01*(i1 + 10))/(tf.exp((i1 + 10)/10) - 1)
beta_n1 = 0.125* tf.exp(i1/80)
alpha_m1 = (0.1*(i1 + 25))/(tf.exp((i1 + 25)/10) - 1)
beta_m1 = 4*tf.exp(i1/18)
alpha_h1 = (0.07*tf.exp(i1/20))
beta_h1 = 1/(tf.exp((i1 + 30)/10) + 1)
# Differential Equations
di = (g_K*(n**4)*(i - E_K) + g_Na*(m**3)*h*(i - E_Na) + g_L*(i - E_L) - I)*(-1/C_m) + k1*(i - i1)
dn = alpha_n*(1 - n) - beta_n*n
dm = alpha_m*(1 - m) - beta_m*m
dh = alpha_h*(1 - h) - beta_h*h
di1 = (g_K*(n1**4)*(i1 - E_K) + g_Na*(m1**3)*h1*(i1 - E_Na) + g_L*(i1 - E_L) - I)*(-1/C_m) + k2*(i1 - i)
dn1 = alpha_n1*(1 - n1) - beta_n1*n1
dm1 = alpha_m1*(1 - m1) - beta_m1*m1
dh1 = alpha_h1*(1 - h1) - beta_h1*h1
return tf.stack([di, dn, dm, dh, di1, dn1, dm1, dh1])
i, n, m, h, i1, n1, m1, h1 = hp.generate_tensorflowsession(coupled_hodgkinhuxley_equation, inits, tfinal=100)
plt.plot(-i)
plt.plot(-i1)
```
### Synchronous Strong Coupling
#### Coupled Fitzhugh-Nagumo
```
a = 0.75
b = 0.8
c = 3
i = -0.75
k1 = 0.5
k2 = 0.5
inits = [0.01, 0.01, 0.01, 0.01]
# Coupling method: Numerical bifurcation analysis of two coupled Fitzhugh-Nagumo oscillators
# Anderson Hoff, Juliana V. dos Santos, Cesar Manchein, Holokx A. Albuquerque
def coupled_fitzhughnagumo_equation(state, t):
v, w, v1, w1 = tf.unstack(state)
dv = c*(v + w - (v**3/3) + i) + k1*(v - v1)
dw = -1/c * (v - a + b*w)
dv1 = c*(v1 + w1 - (v1**3/3) + i) + k2*(v1 - v)
dw1 = -1/c * (v1 - a + b*w1)
return tf.stack([dv, dw, dv1, dw1])
v, w, v1, w1 = hp.generate_tensorflowsession(coupled_fitzhughnagumo_equation, inits, tfinal=100)
plt.plot(v)
plt.plot(v1)
```
#### Coupled Morris-Lecar
```
vk = -84
gk = 8
vca = 130
gca = 4.4
vl = -60
gl = 2
phi = 0.04
v1 = -1.2
v2 = 18
v3 = 2
v4 = 30
iapp = 80
k1 = 0.3
k2 = 0.3
inits = [0.1, 0.1, 0.1, 0.1]
def coupled_morrislecar_equation(state, t):
v, n, v0, n0 = tf.unstack(state)
dv = (-gca*(0.5*(1 + tf.tanh((v - v1)/v2)))*(v - vca) - gk*n*(v - vk) - gl*(v - vl) + iapp + k1*(v - v0))
dn = (phi*((0.5*(1 + tf.tanh((v - v3)/v4))) - n))/(1/tf.cosh((v - v3)/(2*v4)))
dv0 = (-gca*(0.5*(1 + tf.tanh((v0 - v1)/v2)))*(v0 - vca) - gk*n0*(v0 - vk) - gl*(v0 - vl) + iapp + k2*(v0 - v))
dn0 = (phi*((0.5*(1 + tf.tanh((v0 - v3)/v4))) - n0))/(1/tf.cosh((v0 - v3)/(2*v4)))
return tf.stack([dv, dn, dv0, dn0])
v, n, v0, n0 = hp.generate_tensorflowsession(coupled_morrislecar_equation, inits, tfinal=500)
plt.plot(v)
plt.plot(v0)
```
#### Coupled Hindmarsh-Rose
```
a = 1.0
b = 3.0
c = 1.0
d = 5.0
r = 0.006
s = 4.0
i = 1.6
xnot = -1.5
k1 = 0.3
k2 = 0.3
inits = [0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
def coupled_hindmarshrose_equation(state, t):
x, y, z, x1, y1, z1 = tf.unstack(state)
dx = (y - a*(x**3) + (b*(x**2)) - z + i) + k1*(x - x1)
dy = c - d*(x**2) - y
dz = r*(s*(x - xnot) - z)
dx1 = (y1 - a*(x1**3) + (b*(x1**2)) - z1 + i) + k2*(x1 - x)
dy1 = c - d*(x1**2) - y1
dz1 = r*(s*(x1 - xnot) - z1)
return tf.stack([dx, dy, dz, dx1, dy1, dz1])
x, y, z, x1, y1, z1 = hp.generate_tensorflowsession(coupled_hindmarshrose_equation, inits, tfinal=100)
plt.plot(x)
plt.plot(x1)
```
#### Coupled Hodgkin-Huxley
```
g_K = 36
g_Na = 120
g_L = 0.3
E_K = 12
E_Na = -115
E_L = -10.613
C_m = 1
I = -10
k1 = 0.3
k2 = 0.3
inits = [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
def coupled_hodgkinhuxley_equation(state, t):
i, n, m, h, i1, n1, m1, h1 = tf.unstack(state)
# Alpha and beta functions for channel activation functions
alpha_n = (0.01*(i + 10))/(tf.exp((i + 10)/10) - 1)
beta_n = 0.125* tf.exp(i/80)
alpha_m = (0.1*(i + 25))/(tf.exp((i + 25)/10) - 1)
beta_m = 4*tf.exp(i/18)
alpha_h = (0.07*tf.exp(i/20))
beta_h = 1/(tf.exp((i + 30)/10) + 1)
alpha_n1 = (0.01*(i1 + 10))/(tf.exp((i1 + 10)/10) - 1)
beta_n1 = 0.125* tf.exp(i1/80)
alpha_m1 = (0.1*(i1 + 25))/(tf.exp((i1 + 25)/10) - 1)
beta_m1 = 4*tf.exp(i1/18)
alpha_h1 = (0.07*tf.exp(i1/20))
beta_h1 = 1/(tf.exp((i1 + 30)/10) + 1)
# Differential Equations
di = (g_K*(n**4)*(i - E_K) + g_Na*(m**3)*h*(i - E_Na) + g_L*(i - E_L) - I)*(-1/C_m) + k1*(i - i1)
dn = alpha_n*(1 - n) - beta_n*n
dm = alpha_m*(1 - m) - beta_m*m
dh = alpha_h*(1 - h) - beta_h*h
di1 = (g_K*(n1**4)*(i1 - E_K) + g_Na*(m1**3)*h1*(i1 - E_Na) + g_L*(i1 - E_L) - I)*(-1/C_m) + k2*(i1 - i)
dn1 = alpha_n1*(1 - n1) - beta_n1*n1
dm1 = alpha_m1*(1 - m1) - beta_m1*m1
dh1 = alpha_h1*(1 - h1) - beta_h1*h1
return tf.stack([di, dn, dm, dh, di1, dn1, dm1, dh1])
i, n, m, h, i1, n1, m1, h1 = hp.generate_tensorflowsession(coupled_hodgkinhuxley_equation, inits, tfinal=100)
plt.plot(-i)
plt.plot(-i1)
```
```
import mushi
import numpy as np
from scipy.special import expit
import matplotlib.pyplot as plt
import matplotlib.colors as colors
# Define a true demography eta(t): a piecewise-constant population size history
# on a logarithmic time grid with two sigmoidal step changes
m = 10
t = np.logspace(0, 5, m)
change_points = t[:-1]
y = 1e3 * (1 + 2 * expit(100 * (t - 1e3)) + 10 * expit(-100 * (t - 1e2)))
eta = mushi.eta(t[:-1], y)
eta.plot();
n = 10
ksfs = mushi.kSFS(n=n)
plt.plot(change_points, ksfs.tmrca_cdf(eta))
plt.xlabel('$t$ (generations ago)')
plt.xscale('log')
plt.ylabel('TMRCA CDF')
plt.ylim([0, 1]);
mu0 = 100
ksfs.simulate(eta, mu0, seed=0)
ksfs.plot_total();
plt.xscale('log')
plt.yscale('log')
# Grid search over total-variation and spline regularization strengths,
# recording log-residuals of the inferred eta and the loss at each grid point
alpha_tv_trajectory = np.logspace(2, 5, 10)
alpha_spline_trajectory = np.logspace(1, 6, 10)
residuals = np.zeros((len(alpha_tv_trajectory), len(alpha_spline_trajectory), m))
loss = np.zeros((len(alpha_tv_trajectory), len(alpha_spline_trajectory)))
etas = {}
for i, alpha_tv in enumerate(alpha_tv_trajectory):
print(f'alpha_tv = {alpha_tv}')
for j, alpha_spline in enumerate(alpha_spline_trajectory):
print(f' alpha_spline = {alpha_spline}')
ksfs.clear_eta()
ksfs.infer_history(change_points, mu0, max_iter=100,
alpha_tv=alpha_tv, alpha_spline=alpha_spline, alpha_ridge=1e-6)
residuals[i, j, :] = np.log(ksfs.eta.y) - np.log(eta.y)
L = mushi.utils.C(n) @ mushi.utils.M(n, *ksfs.eta.arrays())
loss[i, j] = mushi.utils.prf(ksfs.mu.Z, ksfs.X, L)
etas[i, j] = ksfs.eta
plt.figure(figsize=(10, 5))
plt.pcolormesh(alpha_tv_trajectory, alpha_spline_trajectory,
(residuals ** 2).sum(2).T,
alpha=0.5, cmap="Reds", vmin=0)
plt.xlabel('$\\alpha_{\\rm tv}$')
plt.ylabel('$\\alpha_{\\rm spline}$')
plt.xscale('log')
plt.yscale('log')
cbar = plt.colorbar()
cbar.ax.set_ylabel('$\\int\\left(\\log\\eta - \\log\\eta_{\\rm true}\\right)^2$');
plt.figure(figsize=(10, 5))
plt.pcolormesh(alpha_tv_trajectory, alpha_spline_trajectory,
loss.T,
alpha=0.5, cmap="Reds")
plt.xlabel('$\\alpha_{\\rm tv}$')
plt.ylabel('$\\alpha_{\\rm spline}$')
plt.xscale('log')
plt.yscale('log')
cbar = plt.colorbar()
cbar.ax.set_ylabel('${\\rm loss}_1(\\eta)$');
j_choice = 5
plt.figure(figsize=(6, 5))
plt.pcolormesh(alpha_tv_trajectory, t, residuals[:, j_choice, :].T, alpha=0.5, cmap="RdBu_r", vmin=-.5, vmax=.5)
plt.xlabel('$\\alpha_{\\rm tv}$')
plt.ylabel(f'$t$ (generations ago)')
plt.xscale('log')
plt.yscale('log')
plt.title(f'$\\alpha_{{\\rm spline}} = {alpha_spline_trajectory[j_choice]:.2e}$')
cbar = plt.colorbar()
cbar.ax.set_ylabel('$\\log\\eta - \\log\\eta_{\\rm true}$');
_, axes = plt.subplots(1, 2, figsize=(10, 5))
plt.sca(axes[1])
eta.plot(color='grey', lw=6)
plt.title(f'$\\alpha_{{\\rm spline}} = {alpha_spline_trajectory[j_choice]:.2e}$')
plt.sca(axes[0])
mushi.kSFS(X=ksfs.X).plot_total(kwargs=dict(color='k', ls='', marker='o'))
plt.title(f'$\\alpha_{{\\rm spline}} = {alpha_spline_trajectory[j_choice]:.2e}$')
cmap = plt.get_cmap('viridis')
for i, alpha_tv in enumerate(alpha_tv_trajectory):
plt.sca(axes[1])
etas[i, j_choice].plot(label=f'{alpha_tv:.2e}',
color=cmap(i / (len(alpha_tv_trajectory) - 1)))
plt.sca(axes[0])
plt.plot(range(1, n), mu0*(mushi.utils.C(n) @ mushi.utils.M(n, *etas[i, j_choice].arrays())).sum(1),
ls='-', marker='.',
color=cmap(i / (len(alpha_tv_trajectory) - 1)),
label=f'{alpha_tv:.2e}')
plt.sca(axes[0])
plt.xscale('log')
plt.yscale('log')
plt.sca(axes[1])
plt.legend(title='$\\alpha_{\\rm tv}$', bbox_to_anchor=(1.04, 1), loc='upper left', ncol=1);
```
```
import pandas as pd
from __future__ import print_function
from distutils.version import LooseVersion as Version
import sys
OK = '\x1b[42m[ OK ]\x1b[0m'
FAIL = "\x1b[41m[FAIL]\x1b[0m"
try:
import importlib
except ImportError:
print(FAIL, "Python version 3.7 is required,"
" but %s is installed." % sys.version)
def import_version(pkg, min_ver, fail_msg=""):
mod = None
try:
mod = importlib.import_module(pkg)
if pkg in {'PIL'}:
ver = mod.VERSION
else:
ver = mod.__version__
if Version(ver) == min_ver:
print(OK, "%s version %s is installed."
% (lib, min_ver))
else:
print(FAIL, "%s version %s is required, but %s installed."
% (lib, min_ver, ver))
except ImportError:
print(FAIL, '%s not installed. %s' % (pkg, fail_msg))
return mod
# first check the python version
pyversion = Version(sys.version)
if pyversion >= "3.7":
print(OK, "Python version is %s" % sys.version)
elif pyversion < "3.7":
print(FAIL, "Python version 3.7 is required,"
" but %s is installed." % sys.version)
else:
print(FAIL, "Unknown Python version: %s" % sys.version)
print()
requirements = {'numpy': "1.18.5", 'matplotlib': "3.2.2",'sklearn': "0.23.1",
'pandas': "1.0.5",'xgboost': "1.1.1", 'shap': "0.35.0"}
# now the dependencies
for lib, required_version in list(requirements.items()):
import_version(lib, required_version)
df = pd.read_csv('census_tracts.csv')
df.head()
df_DC = df.loc[df['city']=='Washington']
df_Baltimore = df.loc[df['city']=='Baltimore']
df_Atlanta = df.loc[df['city']=='Atlanta']
df_Oakland = df.loc[df['city']=='Oakland']
df_NYC = df.loc[df['city']=='New York City']
print(df.iloc[-1])
columns = df.columns
print(columns)
```
Feature notes:

* geoid - do we need to preprocess this?
* Total Population - continuous
* Total Population Over 25 - continuous
* Median Income - continuous
* Median Home Value - continuous (TARGET VALUE)
* Educational Attainment - continuous; number of people in the tract who have a college degree

The race/ethnicity counts below are all continuous:

* White_Alone
* Black_Alone
* Native_Alone
* Asian_Alone
* NHPI
* Other
* Two_Or_More

Categorical:

* City
* Metro_Area
```
print(df.dtypes)
# total population - continuous
```
This data is NOT i.i.d. because it is grouped by city, so we need to account for that grouping when splitting and preprocessing the data.
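A quick way to back this up is to compare per-city summary statistics of the target before deciding how to split; this small sketch only assumes the `df` loaded above:
```
# Per-city mean/std of the target; differing distributions justify a grouped split
df.groupby('city')['median_home_value'].agg(['mean', 'std', 'count'])
```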
```
import numpy as np
import matplotlib
from matplotlib import pylab as plt
pd.value_counts(df['city']).plot.bar() #can i break this out by city
plt.ylabel('count')
plt.xlabel('city')
plt.show()
pd.value_counts(df['metro_area']).plot.bar()
plt.ylabel('count')
plt.xlabel('metro_area')
plt.show()
df['total_population'].plot.hist()
plt.xlabel('total_population')
plt.ylabel('count')
plt.show() #bounded, can use minmax scaler
df['total_population_25_over'].plot.hist()
plt.xlabel('total_population_25_over')
plt.ylabel('count')
plt.show() #bounded, can use minmax scaler
df['median_income'].plot.hist()
plt.xlabel('median_income')
plt.ylabel('count')
plt.show() #graph looks wrong #mention there are outliers and make another histogram
df['median_home_value'].value_counts()
#df['median_home_value'].plot.hist(layout=(3, 6), figsize=(20, 10), sharey=False, sharex=False, bins=50)
df['median_home_value'].plot.hist(range=[25000, 2_000_000], bins = 10)
plt.xlabel('median_home_value')
plt.ylabel('count')
plt.show() #use minmax scaler
#build a different dataframes into cities - search properties for histogram plots
df['median_home_value'].plot.hist(range=[25000, 2_000_000], bins = 10)
plt.xlabel('median_home_value')
plt.ylabel('count')
plt.show() #use minmax scaler
df_DC['median_home_value'].plot.hist(range=[25000, 2_000_000], bins = 10)
plt.xlabel('median_home_value')
plt.ylabel('count')
plt.show()
df_NYC['median_home_value'].plot.hist(range=[25000, 2_000_000], bins = 10)
plt.xlabel('median_home_value')
plt.ylabel('count')
plt.show()
df_Oakland['median_home_value'].plot.hist(range=[25000, 2_000_000], bins = 10)
plt.xlabel('median_home_value')
plt.ylabel('count')
plt.show()
df_Baltimore['median_home_value'].plot.hist(range=[25000, 2_000_000], bins = 10)
plt.xlabel('median_home_value')
plt.ylabel('count')
plt.show()
df_Atlanta['median_home_value'].plot.hist(range=[25000, 2_000_000], bins = 10)
plt.xlabel('median_home_value')
plt.ylabel('count')
plt.show()
df['educational_attainment'].plot.hist()
plt.xlabel('educational_attainment')
plt.ylabel('count')
plt.show()
df['white_alone'].plot.hist()
plt.xlabel('white_alone')
plt.ylabel('count')
plt.show()
categories = df['city'].unique()
bin_range = (df['median_income'].min(),df['median_income'].max())
for c in categories:
plt.hist(df[df['city']==c]['median_income'],alpha=0.5,label=c,range=[50000, 2000000],bins=20,density=True)
plt.legend()
plt.ylabel('counts')
plt.xlabel('median_income')
plt.show()
import seaborn as sns
sns.violinplot(data = df, x = 'city', y = 'white_alone')
#plt.xticks([0,1],['0.0506801187398187', '-0.044641636506989'])
plt.yticks([-100, 25000])
plt.ylabel('white_alone')
plt.show()
sns.violinplot( x=df["city"], y=df["median_home_value"], palette="Blues")
plt.ylim(0,10000000)
plt.show()
sns.violinplot( x=df["city"], y=df["white_alone"], palette="Blues")
#plt.ylim(0,10000000)
plt.show()
sns.violinplot( x=df["city"], y=df["black_alone"], palette="Blues")
#plt.ylim(0,10000000)
plt.show()
df.plot.scatter('educational_attainment','median_income',s=10,alpha=0.1) # alpha=0.1,s=10
plt.xlim(0, 3000)
plt.ylim(0, 250000)
plt.show()
from matplotlib import pylab as plt
categories = df['city'].unique()
bin_range = (df['median_home_value'].min(),df['median_home_value'].max())
for c in categories:
plt.hist(df[df['city']==c]['median_home_value'],alpha=0.5,label=c,range=[25000, 1_500_000],density=True)
plt.legend()
plt.ylabel('counts')
plt.xlabel('median_home_value')
plt.show()
from sklearn.model_selection import GroupShuffleSplit
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import StandardScaler
y = df['median_home_value']
X = df.loc[:, df.columns != 'median_home_value']
groups = df["city"] #should we group by city - have to have some plots to prove that its not iid - show median home value - different means and stds.
#work on the histogram and check
# GroupShuffleSplit.split yields (train_indices, other_indices); with train_size=.6,
# about 60% of the city groups go to training (only the last of the n_splits splits is kept)
gss = GroupShuffleSplit(n_splits=10, train_size=.6, random_state=42)
for train_index, other_index in gss.split(X, y, groups):
    X_train = X.iloc[train_index]
    y_train = y.iloc[train_index]
    X_other = X.iloc[other_index]
    y_other = y.iloc[other_index]
# Split the remaining groups evenly into validation and test sets.
# The indices returned here are positions within X_other, so index X_other/y_other (not X/y).
gss2 = GroupShuffleSplit(n_splits=1, train_size=.5, random_state=42)
groups_other = X_other["city"]
for test_index, validation_index in gss2.split(X_other, y_other, groups_other):
    X_val = X_other.iloc[validation_index]
    y_val = y_other.iloc[validation_index]
    X_test = X_other.iloc[test_index]
    y_test = y_other.iloc[test_index]
std_ftrs = ['total_population', 'total_population_25_over', 'median_income',
'educational_attainment', 'white_alone', 'black_alone', 'native_alone', 'asian_alone',
'native_hawaiian_pacific_islander', 'some_other_race_alone', 'two_or_more', 'hispanic_or_latino']
std_scaler = StandardScaler()
std_fit = std_scaler.fit(X_train[std_ftrs])
std_train = std_scaler.transform(X_train[std_ftrs])
std_val = std_scaler.transform(X_val[std_ftrs])
std_test = std_scaler.transform(X_test[std_ftrs])
onehot_ftrs = ['city', 'metro_area']
enc = OneHotEncoder(sparse=False,handle_unknown='ignore') #initialize encoder
enc.fit(X_train[onehot_ftrs]) #fit the training data
onehot_train = enc.transform(X_train[onehot_ftrs]) #transform the training data
onehot_val = enc.transform(X_val[onehot_ftrs]) #transform validation data
onehot_test = enc.transform(X_test[onehot_ftrs]) #transform test data
print(df.dtypes)
```
### **Import Google Drive**
```
from google.colab import drive
drive.mount('/content/drive')
```
### **Import Libraries**
```
import glob
import numpy as np
import os
import shutil
np.random.seed(42)
from sklearn.preprocessing import LabelEncoder
import cv2
import tensorflow as tf
import keras
import shutil
import random
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.utils import class_weight
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, cohen_kappa_score
```
### **Load Data**
```
os.chdir('/content/drive/My Drive/Colab Notebooks/DATA RD/')
Train = glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/Data Split/Train/*')
Val=glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/Data Split/Validation/*')
Test=glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/Data Split/Test/*')
import matplotlib.image as mpimg
for ima in Train[600:601]:
img=mpimg.imread(ima)
imgplot = plt.imshow(img)
plt.show()
```
### **Data Preparation**
```
nrows = 224
ncolumns = 224
channels = 3
def read_and_process_image(list_of_images):
X = [] # images
y = [] # labels
for image in list_of_images:
X.append(cv2.resize(cv2.imread(image, cv2.IMREAD_COLOR), (nrows,ncolumns), interpolation=cv2.INTER_CUBIC)) #Read the image
#get the labels
if 'Normal' in image:
y.append(0)
elif 'Mild' in image:
y.append(1)
elif 'Moderate' in image:
y.append(2)
elif 'Severe' in image:
y.append(3)
return X, y
X_train, y_train = read_and_process_image(Train)
X_val, y_val = read_and_process_image(Val)
X_test, y_test = read_and_process_image(Test)
import seaborn as sns
import gc
gc.collect()
#Convert list to numpy array
X_train = np.array(X_train)
y_train= np.array(y_train)
X_val = np.array(X_val)
y_val= np.array(y_val)
X_test = np.array(X_test)
y_test= np.array(y_test)
print('Train:',X_train.shape,y_train.shape)
print('Val:',X_val.shape,y_val.shape)
print('Test',X_test.shape,y_test.shape)
sns.countplot(y_train)
plt.title('Total Training Data')
sns.countplot(y_val)
plt.title('Total Validation Data')
sns.countplot(y_test)
plt.title('Total Test Data')
y_train_ohe = pd.get_dummies(y_train)
y_val_ohe=pd.get_dummies(y_val)
y_test_ohe=pd.get_dummies(y_test)
y_train_ohe.shape,y_val_ohe.shape,y_test_ohe.shape
```
### **Model Parameters**
```
batch_size = 16
EPOCHS = 100
WARMUP_EPOCHS = 2
LEARNING_RATE = 0.001
WARMUP_LEARNING_RATE = 1e-3
HEIGHT = 224
WIDTH = 224
CANAL = 3
N_CLASSES = 4
ES_PATIENCE = 5
RLROP_PATIENCE = 3
DECAY_DROP = 0.5
```
### **Data Generator**
```
train_datagen =tf.keras.preprocessing.image.ImageDataGenerator(
rotation_range=360,
horizontal_flip=True,
vertical_flip=True)
test_datagen=tf.keras.preprocessing.image.ImageDataGenerator()
train_generator = train_datagen.flow(X_train, y_train_ohe, batch_size=batch_size)
val_generator = test_datagen.flow(X_val, y_val_ohe, batch_size=batch_size)
test_generator = test_datagen.flow(X_test, y_test_ohe, batch_size=batch_size)
```
### **Define Model**
```
IMG_SHAPE = (224, 224, 3)
base_model =tf.keras.applications.DenseNet201(weights='imagenet',
include_top=False,
input_shape=IMG_SHAPE)
x =tf.keras.layers.GlobalAveragePooling2D()(base_model.output)
x =tf.keras.layers.Dropout(0.25)(x)
x =tf.keras.layers.Dense(1024, activation='relu')(x)
x =tf.keras.layers.Dropout(0.25)(x)
final_output =tf.keras.layers.Dense(N_CLASSES, activation='softmax', name='final_output')(x)
model =tf.keras.models.Model(inputs=base_model.inputs,outputs=final_output)
```
### **Train Top Layers**
```
for layer in model.layers:
layer.trainable = False
for i in range(-5, 0):
model.layers[i].trainable = True
metric_list = ["accuracy"]
optimizer =tf.keras.optimizers.Adam(lr=WARMUP_LEARNING_RATE)
model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=metric_list)
model.summary()
import time
start = time.time()
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = val_generator.n//val_generator.batch_size
history_warmup = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=val_generator,
validation_steps=STEP_SIZE_VALID,
epochs=WARMUP_EPOCHS,
verbose=1).history
end = time.time()
print('Training time:', end - start)
```
### **Fine-Tuning**
```
for layer in model.layers:
layer.trainable = True
es =tf.keras.callbacks.EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
rlrop =tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', mode='min', patience=RLROP_PATIENCE, factor=DECAY_DROP, min_lr=1e-6, verbose=1)
callback_list = [es]
optimizer =tf.keras.optimizers.Adam(lr=LEARNING_RATE)
model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=metric_list)
model.summary()
history_finetunning = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=val_generator,
validation_steps=STEP_SIZE_VALID,
epochs=EPOCHS,
callbacks=callback_list,
verbose=1).history
```
### **Model Graph**
```
history = {'loss': history_warmup['loss'] + history_finetunning['loss'],
'val_loss': history_warmup['val_loss'] + history_finetunning['val_loss'],
'acc': history_warmup['accuracy'] + history_finetunning['accuracy'],
'val_acc': history_warmup['val_accuracy'] + history_finetunning['val_accuracy']}
sns.set_style("whitegrid")
fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=(20, 18))
ax1.plot(history['loss'], label='Train loss')
ax1.plot(history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history['acc'], label='Train accuracy')
ax2.plot(history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')
plt.xlabel('Epochs')
sns.despine()
plt.show()
```
### **Evaluate Model**
```
loss_Val, acc_Val = model.evaluate(X_val, y_val_ohe,batch_size=1, verbose=1)
print("Validation: accuracy = %f ; loss_v = %f" % (acc_Val, loss_Val))
lastFullTrainPred = np.empty((0, N_CLASSES))
lastFullTrainLabels = np.empty((0, N_CLASSES))
lastFullValPred = np.empty((0, N_CLASSES))
lastFullValLabels = np.empty((0, N_CLASSES))
for i in range(STEP_SIZE_TRAIN+1):
im, lbl = next(train_generator)
scores = model.predict(im, batch_size=train_generator.batch_size)
lastFullTrainPred = np.append(lastFullTrainPred, scores, axis=0)
lastFullTrainLabels = np.append(lastFullTrainLabels, lbl, axis=0)
for i in range(STEP_SIZE_VALID+1):
im, lbl = next(val_generator)
scores = model.predict(im, batch_size=val_generator.batch_size)
lastFullValPred = np.append(lastFullValPred, scores, axis=0)
lastFullValLabels = np.append(lastFullValLabels, lbl, axis=0)
lastFullComPred = np.concatenate((lastFullTrainPred, lastFullValPred))
lastFullComLabels = np.concatenate((lastFullTrainLabels, lastFullValLabels))
complete_labels = [np.argmax(label) for label in lastFullComLabels]
train_preds = [np.argmax(pred) for pred in lastFullTrainPred]
train_labels = [np.argmax(label) for label in lastFullTrainLabels]
validation_preds = [np.argmax(pred) for pred in lastFullValPred]
validation_labels = [np.argmax(label) for label in lastFullValLabels]
fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7))
labels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe']
train_cnf_matrix = confusion_matrix(train_labels, train_preds)
validation_cnf_matrix = confusion_matrix(validation_labels, validation_preds)
train_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis]
validation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis]
train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels)
validation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels)
sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax1).set_title('Train')
sns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax2).set_title('Validation')
plt.show()
```
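`cohen_kappa_score` is imported at the top of the notebook but never used. Quadratic-weighted kappa is a common summary metric for retinopathy grading, and a sketch of how it could be computed from the label and prediction lists built above is:
```
# Quadratic-weighted kappa on the train and validation predictions gathered above
train_kappa = cohen_kappa_score(train_labels, train_preds, weights='quadratic')
val_kappa = cohen_kappa_score(validation_labels, validation_preds, weights='quadratic')
print("Train QWK: %.4f ; Validation QWK: %.4f" % (train_kappa, val_kappa))
```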
|
github_jupyter
|
from google.colab import drive
drive.mount('/content/drive')
import glob
import numpy as np
import os
import shutil
np.random.seed(42)
from sklearn.preprocessing import LabelEncoder
import cv2
import tensorflow as tf
import keras
import shutil
import random
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.utils import class_weight
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, cohen_kappa_score
os.chdir('/content/drive/My Drive/Colab Notebooks/DATA RD/')
Train = glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/Data Split/Train/*')
Val=glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/Data Split/Validation/*')
Test=glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/Data Split/Test/*')
import matplotlib.image as mpimg
for ima in Train[600:601]:
img=mpimg.imread(ima)
imgplot = plt.imshow(img)
plt.show()
nrows = 224
ncolumns = 224
channels = 3
def read_and_process_image(list_of_images):
X = [] # images
y = [] # labels
for image in list_of_images:
X.append(cv2.resize(cv2.imread(image, cv2.IMREAD_COLOR), (nrows,ncolumns), interpolation=cv2.INTER_CUBIC)) #Read the image
#get the labels
if 'Normal' in image:
y.append(0)
elif 'Mild' in image:
y.append(1)
elif 'Moderate' in image:
y.append(2)
elif 'Severe' in image:
y.append(3)
return X, y
X_train, y_train = read_and_process_image(Train)
X_val, y_val = read_and_process_image(Val)
X_test, y_test = read_and_process_image(Test)
import seaborn as sns
import gc
gc.collect()
#Convert list to numpy array
X_train = np.array(X_train)
y_train= np.array(y_train)
X_val = np.array(X_val)
y_val= np.array(y_val)
X_test = np.array(X_test)
y_test= np.array(y_test)
print('Train:',X_train.shape,y_train.shape)
print('Val:',X_val.shape,y_val.shape)
print('Test',X_test.shape,y_test.shape)
sns.countplot(y_train)
plt.title('Total Data Training')
sns.countplot(y_val)
plt.title('Total Data Validasi')
sns.countplot(y_test)
plt.title('Total Data Test')
y_train_ohe = pd.get_dummies(y_train)
y_val_ohe=pd.get_dummies(y_val)
y_test_ohe=pd.get_dummies(y_test)
y_train_ohe.shape,y_val_ohe.shape,y_test_ohe.shape
batch_size = 16
EPOCHS = 100
WARMUP_EPOCHS = 2
LEARNING_RATE = 0.001
WARMUP_LEARNING_RATE = 1e-3
HEIGHT = 224
WIDTH = 224
CANAL = 3
N_CLASSES = 4
ES_PATIENCE = 5
RLROP_PATIENCE = 3
DECAY_DROP = 0.5
train_datagen =tf.keras.preprocessing.image.ImageDataGenerator(
rotation_range=360,
horizontal_flip=True,
vertical_flip=True)
test_datagen=tf.keras.preprocessing.image.ImageDataGenerator()
train_generator = train_datagen.flow(X_train, y_train_ohe, batch_size=batch_size)
val_generator = test_datagen.flow(X_val, y_val_ohe, batch_size=batch_size)
test_generator = test_datagen.flow(X_test, y_test_ohe, batch_size=batch_size)
IMG_SHAPE = (224, 224, 3)
base_model =tf.keras.applications.DenseNet201(weights='imagenet',
include_top=False,
input_shape=IMG_SHAPE)
x =tf.keras.layers.GlobalAveragePooling2D()(base_model.output)
x =tf.keras.layers.Dropout(0.25)(x)
x =tf.keras.layers.Dense(1024, activation='relu')(x)
x =tf.keras.layers.Dropout(0.25)(x)
final_output =tf.keras.layers.Dense(N_CLASSES, activation='softmax', name='final_output')(x)
model =tf.keras.models.Model(inputs=base_model.inputs,outputs=final_output)
for layer in model.layers:
layer.trainable = False
for i in range(-5, 0):
model.layers[i].trainable = True
metric_list = ["accuracy"]
optimizer =tf.keras.optimizers.Adam(lr=WARMUP_LEARNING_RATE)
model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=metric_list)
model.summary()
import time
start = time.time()
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = val_generator.n//val_generator.batch_size
history_warmup = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=val_generator,
validation_steps=STEP_SIZE_VALID,
epochs=WARMUP_EPOCHS,
verbose=1).history
end = time.time()
print('Waktu Training:', end - start)
for layer in model.layers:
layer.trainable = True
es =tf.keras.callbacks.EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
rlrop =tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', mode='min', patience=RLROP_PATIENCE, factor=DECAY_DROP, min_lr=1e-6, verbose=1)
callback_list = [es]
optimizer =tf.keras.optimizers.Adam(lr=LEARNING_RATE)
model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=metric_list)
model.summary()
history_finetunning = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=val_generator,
validation_steps=STEP_SIZE_VALID,
epochs=EPOCHS,
callbacks=callback_list,
verbose=1).history
history = {'loss': history_warmup['loss'] + history_finetunning['loss'],
'val_loss': history_warmup['val_loss'] + history_finetunning['val_loss'],
'acc': history_warmup['accuracy'] + history_finetunning['accuracy'],
'val_acc': history_warmup['val_accuracy'] + history_finetunning['val_accuracy']}
sns.set_style("whitegrid")
fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=(20, 18))
ax1.plot(history['loss'], label='Train loss')
ax1.plot(history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history['acc'], label='Train accuracy')
ax2.plot(history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')
plt.xlabel('Epochs')
sns.despine()
plt.show()
loss_Val, acc_Val = model.evaluate(X_val, y_val_ohe,batch_size=1, verbose=1)
print("Validation: accuracy = %f ; loss_v = %f" % (acc_Val, loss_Val))
lastFullTrainPred = np.empty((0, N_CLASSES))
lastFullTrainLabels = np.empty((0, N_CLASSES))
lastFullValPred = np.empty((0, N_CLASSES))
lastFullValLabels = np.empty((0, N_CLASSES))
for i in range(STEP_SIZE_TRAIN+1):
im, lbl = next(train_generator)
scores = model.predict(im, batch_size=train_generator.batch_size)
lastFullTrainPred = np.append(lastFullTrainPred, scores, axis=0)
lastFullTrainLabels = np.append(lastFullTrainLabels, lbl, axis=0)
for i in range(STEP_SIZE_VALID+1):
im, lbl = next(val_generator)
scores = model.predict(im, batch_size=val_generator.batch_size)
lastFullValPred = np.append(lastFullValPred, scores, axis=0)
lastFullValLabels = np.append(lastFullValLabels, lbl, axis=0)
lastFullComPred = np.concatenate((lastFullTrainPred, lastFullValPred))
lastFullComLabels = np.concatenate((lastFullTrainLabels, lastFullValLabels))
complete_labels = [np.argmax(label) for label in lastFullComLabels]
train_preds = [np.argmax(pred) for pred in lastFullTrainPred]
train_labels = [np.argmax(label) for label in lastFullTrainLabels]
validation_preds = [np.argmax(pred) for pred in lastFullValPred]
validation_labels = [np.argmax(label) for label in lastFullValLabels]
fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7))
labels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe']
train_cnf_matrix = confusion_matrix(train_labels, train_preds)
validation_cnf_matrix = confusion_matrix(validation_labels, validation_preds)
train_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis]
validation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis]
train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels)
validation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels)
sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax1).set_title('Train')
sns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax2).set_title('Validation')
plt.show()
# **1. Imports and variable declarations**
## **1.1. Imports**
In this first section, the idea is to import every library needed for the transfer-learning process and to declare all the variables required by the model we intend to train.
The libraries to be used are the following:
* **matplotlib**. A Python library for plotting figures.
* **keras**. The library providing all the layers and algorithms needed to build the new neural network.
* **pathlib**. A library for building paths to directories.
* **os**. The operating-system interface module from the standard library.
```
import matplotlib.pyplot as plt
from pathlib import Path
from keras.applications.resnet_v2 import ResNet50V2
from keras.layers import GlobalAveragePooling2D, Dense, Dropout
from keras.models import Model
from keras.callbacks import ModelCheckpoint,EarlyStopping, ReduceLROnPlateau
from keras.preprocessing import image
from tensorflow.keras.optimizers import SGD, RMSprop
from keras import backend as K
K.clear_session()
```
## **1.2. Variables**
* **FREEZE_IMG_SIZE**: Image size used during the first (frozen) stage of the network.
* **FIRST_UNFREEZE_IMG_SIZE**: Image size after the first unfreezing. The idea is that the images in the first stage are smaller, so that during the unfreezing stage the image set is processed as one completely different from the first.
* **NUM_CHANNELS**: Number of colour channels in the images. Usually 3, referring to the RGB channels.
* **IMG_SHAPE**: Final image shape, taking the *FIRST_UNFREEZE_IMG_SIZE* variable as reference.
* **TOP_EPOCHS_1**. Number of epochs in the first training phase.
* **TOP_EPOCHS_2**. Number of epochs in the second training phase.
* **BATCH_SIZE**. Batch size, i.e. how many images are processed at a time.
* **DATA_PATH**. Directory containing the TRAINING (`Entrenamiento`) and VALIDATION (`Validacion`) folders.
```
FREEZE_IMG_SIZE = 75
FIRST_UNFREEZE_IMG_SIZE = FREEZE_IMG_SIZE * 3
NUM_CHANNELS = 3
IMG_SHAPE = (FIRST_UNFREEZE_IMG_SIZE, FIRST_UNFREEZE_IMG_SIZE, NUM_CHANNELS)
TOP_EPOCHS_1 = 10
TOP_EPOCHS_2 = 12
BATCH_SIZE = 4
DATA_PATH = Path('./splitted_data')
```
Next, as a quick check, one image from the dataset is displayed with the resizing applied.
```
img_path = f"{DATA_PATH}/Entrenamiento/AMD/AMD (1).jpg"
img = image.load_img(img_path, target_size=(FREEZE_IMG_SIZE, FREEZE_IMG_SIZE))
img
```
# **2. Building the neural network through *Transfer Learning***
This model uses the pretrained **ResNet50V2** network that Keras provides to apply *Transfer Learning* and, in this way, create a new model adapted to the needs of the problem at hand.
The following hyperparameters are passed to it:
* ***input_shape***. Size of the images that will be classified once the model is trained.
* ***include_top***. This parameter omits the *fully-connected* head that comes by default with *ResNet50V2*, so that the new one built by hand can be attached instead.
* ***weights***. Loads into the pretrained network the weights generated on the *imagenet* dataset, or preloaded weights given their path on disk.
* ***pooling***. Pooling layer used at the end of the convolutional stack. The available options are *None*, *avg* (global average pooling) and *max* (max pooling).
The model's *summary()* function shows the current structure of the model, i.e. the existing layers and the number of trainable parameters.
```
resNet101_classifier = ResNet50V2(input_shape=IMG_SHAPE, include_top=False, weights='imagenet', pooling=None)
resNet101_classifier.summary()
```
First we retrieve the output of the pretrained network so that we can attach to it the new head that will be trained on the images of our dataset.
```
## Retrieve the output of the pretrained model
incv3_output = resNet101_classifier.output
print(incv3_output.shape)
avg_pool = GlobalAveragePooling2D(name='logos_pooling')(incv3_output)
avg_pool.get_shape()
fully_connected = Dense(1024, activation='relu', name='logos_fullyconnected')(avg_pool)
dropout = Dropout(0.5)(fully_connected)
predictions = Dense(23, activation='softmax', name='logos_softmax')(dropout)
model = Model(inputs=resNet101_classifier.input, outputs=predictions)
model.summary()
```
# **3. First training phase**
## **3.1. Freezing layers**
All layers of the pretrained network are made non-trainable so that only the new head is adapted to the new images.
```
for layer in resNet101_classifier.layers:
layer.trainable = False
```
## **3.2. Compiling the model**
The model is compiled with:
* ***categorical_crossentropy*** as the loss function.
* ***RMSprop*** as the optimizer.
* The **accuracy** metric, stored so it can be reported during training alongside the loss. Accuracy corresponds to the proportion of correct predictions made by the model.
```
model.compile(loss='categorical_crossentropy', optimizer=RMSprop(learning_rate=0.001), metrics=['accuracy'])
```
## **3.3. Image processing**
The next step is to **load** and **process** the images that will be used both for training and for validating the model.
Both datasets (training and validation) are needed in order to detect problems such as *overfitting* and to track improvements in the model.
We set the variables with the directories containing the training and validation images and create two instances of ***ImageDataGenerator*** to generate new, distorted images for the model to train on.
For this, the class offers a series of parameters that control how the input images may be deformed:
* ***rotation_range***: range of the random rotation applied to the images to generate new samples.
* ***rescale***: rescales the images by the given factor; since RGB channels go from 0 to 255, the values are converted to lie between 0 and 1, i.e. value/255.
* ***shear_range***: shears the images so the model learns that images may appear tilted.
* ***zoom_range***: like the previous one, randomly zooms the images by up to 20% so the model learns scale variations.
* ***width_shift_range***: fraction of the total width by which images may be shifted horizontally.
* ***height_shift_range***: fraction of the total height by which images may be shifted vertically.
* ***horizontal_flip***: randomly flips the images horizontally.
```
data_gen = image.ImageDataGenerator(
rotation_range=15,
rescale=1./255,
shear_range=0.1,
zoom_range=0.2,
horizontal_flip=True,
width_shift_range=0.1,
height_shift_range=0.1
)
data_gen_test = image.ImageDataGenerator(rescale=1./255)
```
Once the parameters for generating new images are set, the images are loaded through the generator. How they are loaded depends on how the images are stored, since they can be loaded from a ***DataFrame***, from a **directory of folders** or from a ***NumPy array***.
In our case they are loaded from a directory of folders, where we have organised which images are used for **training** and which for **validation**.
Loading is done with the ***flow_from_directory*** function, which requires:
1. An input parameter giving the path of the directory containing the training and validation images.
2. That inside the *training* and *validation* directories the images are grouped into subfolders, the name of each folder corresponding to the class of the images it contains, resulting in a folder tree similar to the following:
```
|-- Entrenamiento
| |-- Clase 1
| |-- ImgClase1_1.jpg
| |-- ImgClase1_2.jpg
| |-- ImgClase1_3.jpg
| |-- ...
| |-- Clase 2
| |-- Clase 3
| |-- ...
|-- Validación
| |-- Clase 1
| |-- ImgClase1_1.jpg
| |-- ImgClase1_2.jpg
| |-- ImgClase1_3.jpg
| |-- ...
| |-- Clase 2
| |-- Clase 3
| |-- ...
```
```
imagenes_entrenamiento_1 = data_gen.flow_from_directory(
f"{DATA_PATH}/Entrenamiento",
target_size=(FREEZE_IMG_SIZE, FREEZE_IMG_SIZE),
batch_size=BATCH_SIZE,
shuffle=True,
class_mode='categorical'
)
imagenes_validacion_1 = data_gen_test.flow_from_directory(
f"{DATA_PATH}/Validacion",
target_size=(FREEZE_IMG_SIZE, FREEZE_IMG_SIZE),
batch_size=BATCH_SIZE,
shuffle=True,
class_mode='categorical'
)
```
## **3.4. Training**
As the final step of the first phase, the model is trained on the datasets generated by the ***ImageDataGenerator***.
```
model.fit(
imagenes_entrenamiento_1,
steps_per_epoch=imagenes_entrenamiento_1.n // BATCH_SIZE,
epochs=TOP_EPOCHS_1,
validation_data=imagenes_validacion_1,
validation_steps=imagenes_validacion_1.n // BATCH_SIZE
)
```
# **4. Second training phase**
We check that the network structure is still the same as at the beginning before continuing with training.
```
for i, layer in enumerate(model.layers):
print(i, layer.name)
```
## **4.1 Unfreezing layers**
A specific group of the final convolutional layers of the pretrained model is **unfrozen**. This allows **new features to be extracted** from the training images, making the feature maps passed from the ***Global Average Pooling*** layer to the ***softmax*** layer more specific than those of the first training phase.
```
freeze_until_layer = 153 # 153
for layer in model.layers[:freeze_until_layer]:
layer.trainable = False
for layer in model.layers[freeze_until_layer:]:
layer.trainable = True
```
## **4.2 Keras callbacks**
1. ***EarlyStopping***: a callback that stops training when a monitored metric has stopped improving ([EarlyStopping](https://keras.io/api/callbacks/early_stopping/)).
    * *monitor*: selects the metric used to decide when to stop training.
    * *patience*: number of epochs without improvement in the selected metric.
2. ***ModelCheckpoint***: a callback that saves the weights at any point during training according to the parameters given ([Model Checkpoint](https://keras.io/api/callbacks/model_checkpoint/)).
    * *filepath*: path where the weights are saved.
    * *monitor*: the value used to decide whether to save the weights.
    * *save_best_only*: if the monitored value is better than that of the previously saved weights, the new weights replace them.
    * *mode*: whether the monitored value should be minimised or maximised; `'auto'` infers this from the metric name.
3. ***ReduceLROnPlateau***: a callback offered by Keras that gradually reduces the learning rate as the monitored metric stops improving or plateaus ([ReduceLROnPlateau](https://keras.io/api/callbacks/reduce_lr_on_plateau/)).
    * *monitor*: the value the callback uses to decide when to reduce the learning rate.
    * *factor*: factor by which the learning rate is multiplied each time the reduction is applied.
    * *patience*: number of epochs the callback waits before reducing the learning rate.
    * *min_lr*: minimum value to which the learning rate can be reduced.
```
cb_early_stopper = EarlyStopping(monitor = 'val_loss', patience = 3)
cb_checkpointer = ModelCheckpoint(filepath = './weights/ResNet50V2_weights.hdf5', monitor = 'val_loss', save_best_only = True, mode = 'auto')
model_reducelr_callback = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=2, min_lr=0.0001)
```
## **4.3 Dataset preparation and model compilation**
The model is compiled again with:
* ***loss***: the loss (error) function, in this case *categorical_crossentropy*.
* ***optimizer***: optimizer used to set the *learning rate* (here SGD with momentum).
* ***metrics***: metric evaluated during the training and validation phases.
```
model.compile(
loss='categorical_crossentropy',
optimizer=SGD(learning_rate=0.001, momentum=0.9),
metrics=['accuracy'])
imagenes_entrenamiento_2 = data_gen.flow_from_directory(
f"{DATA_PATH}/Entrenamiento",
target_size=(FIRST_UNFREEZE_IMG_SIZE, FIRST_UNFREEZE_IMG_SIZE),
batch_size=BATCH_SIZE,
shuffle=True,
class_mode='categorical'
)
imagenes_validacion_2 = data_gen_test.flow_from_directory(
f"{DATA_PATH}/Validacion",
target_size=(FIRST_UNFREEZE_IMG_SIZE, FIRST_UNFREEZE_IMG_SIZE),
batch_size=BATCH_SIZE,
shuffle=True,
class_mode='categorical'
)
history = model.fit(
imagenes_entrenamiento_2,
steps_per_epoch=imagenes_entrenamiento_2.n // BATCH_SIZE,
epochs=TOP_EPOCHS_2,
validation_data=imagenes_validacion_2,
validation_steps=imagenes_validacion_2.n // BATCH_SIZE,
callbacks=[cb_early_stopper, cb_checkpointer, model_reducelr_callback])
```
The model is saved to an **external file** so it can be used for classification later on.
```
model.save('./models/ResNet50V2_model.hdf5')
```
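As a reference for that future classification, here is a minimal sketch of how the saved file could be loaded and applied to a single image. The image path and the use of `imagenes_entrenamiento_2.class_indices` to recover the class names are assumptions made for illustration.
```
import numpy as np
from keras.models import load_model
from keras.preprocessing import image

# Load the fine-tuned model saved above.
loaded_model = load_model('./models/ResNet50V2_model.hdf5')

# Preprocess one image exactly like the validation generator: resize + rescale to [0, 1].
img = image.load_img('some_new_image.jpg', target_size=(FIRST_UNFREEZE_IMG_SIZE, FIRST_UNFREEZE_IMG_SIZE))
x = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)  # shape (1, H, W, 3)

# Predict and map the class index back to the folder/class name.
probs = loaded_model.predict(x)[0]
idx_to_class = {v: k for k, v in imagenes_entrenamiento_2.class_indices.items()}
print(idx_to_class[int(np.argmax(probs))], float(np.max(probs)))
```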
# **5. Metric visualisation**
Two plots are shown to visualise the metrics recorded for each training and validation epoch.
This makes it easier to detect whether the model suffers from **overfitting** or **underfitting**.
```
acc = history.history['accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
val_accuracy = history.history['val_accuracy']
epochs = range(len(acc))
plot1 = plt.figure(1)
plt.plot(epochs, acc, label="accuracy")
plt.plot(epochs, val_accuracy, label="val_accuracy")
plt.title('Training and validation accuracy')
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.)
plot2 = plt.figure(2)
plt.plot(epochs, loss, label="loss")
plt.plot(epochs, val_loss, label="val_loss")
plt.title('Training and validation loss')
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.)
```
# 如何自助 —— 助人即助己
—— 重大病毒性疫情中如何应对创伤应激障碍
**李笑来**(2020.02)
-----

病毒性传染病疫情从未停止过威胁人类。在这次 2020 年春节前中国武汉新型冠状病毒([2019-nCoV](https://zh.wikipedia.org/wiki/2019%E6%96%B0%E5%9E%8B%E5%86%A0%E7%8B%80%E7%97%85%E6%AF%92))爆发之前,2003 年就有过令一代人记忆深刻的 SARS 大爆发。除此之外,还有很多可能随时发生的疫情,比如,禽流感([Avian Influenza](https://zh.wikipedia.org/wiki/%E7%A6%BD%E6%B5%81%E6%84%9F))—— H5N2(美国,1983),H5N1(中国广东,1996),H9N2(香港,1999),H7N7(荷兰,2003),H3N2(美国,2006),H7N9(中国上海,2013/03),H10N8(中国江西,2013/12),H7N4(中国江苏,2018/02)……
在重大病毒性传染病疫情发生的时候,尤其是该病毒可通过空气传染的时候,每一个人的生命都面临威胁 —— 至少是潜在的威胁。一旦得知自己的生命可能受到威胁,每个人都一样,会因此产生巨大的难以消解的压力;进而有可能仅仅因为压力就会产生很多应激障碍,导致注意力不集中,易怒,失眠;进而会造成最严重的后果之一:自我免疫机能受损 —— 然而,在疫情中我们赖以生存的唯一支撑就是我们自身的免疫机能。
并且,对很多人来说,疫情并非常态 —— 绝大多数人从未预料自己将要面对如此可怕的疫情。于是,恐慌必然在人群中弥漫开来,进而使得每个人的压力在相互影响、相互印证的过程中进一步变大,直至无法承受。
因此,疫情中绝大部分人都需要心理援助。
更为可怕的是,每当重大病毒性传染病疫情发生的时候,医疗资源总是完全没有办法满足正在爆发的需求 —— 对口的医生不够用,药物不够用,设备不够用,床位不够用,隔离设施不够用…… 反正要什么没什么 —— 更不要提足够数量的心理咨询师了。
于是,对每个人来说,出路只有一条:
> 自助 —— 做自己的心理咨询师。
虽然这听起来不大可能,可事实上却并不复杂,甚至可能很简单。
## 1. 如何决定是否读下去
你只需要问自己一个问题,就知道你是否应该认真读下去了:
> 我有没有自己非常关心的人?
如果,除了你自己之外,你谁都不关心,那么,实际上你并没有读下去的必要。
如果,你有你心里格外关心的人,他们出事儿的话,你会非常难过;他们的生命遭受威胁比你自己的生命遭受威胁更令你害怕;你天然有保护他们的冲动,为了他们你甚至宁可放弃自己…… 那么,你就应该认真读下去。
—— 就这么简单。
## 2. 你的角色决定了你的选择
你是船员,你就会理所应当地只做船员该做的事情;你是船长,那你就会自然而然地承担更多的责任,包括保护船员。
再比如,你可能经常听到某个老人讲,说他的儿子 “某一天突然就长大了!” —— 那其实并不是所谓的 “成熟” 了,本质上来看,只不过是 “那个老人眼里的孩子” 突然从某一天开始扮演另外一个角色了而已…… 过去扮演的是小杂碎,现在扮演的是顶天立地的大男人。
在这样特殊的时刻,人人都面临死亡威胁的时候,你的心里有你真正关心的人,你要扮演什么角色?他们的保护者。一旦角色敲定,你的看法、想法和做法就都不一样了,你自己体会一下?你的内心会有一个声音冒出来:他们危险,所以,我要为他们想办法!
在此之前,你很可能只不过是这样的:“危险!怎么办?” —— 虽然没有主语,但,自顾自的世界里,主语只有也只能是 “我自己” 啊!现在不一样了,主语是 “他” 或者 “他们”。
## 3. 伊姆村的故事
1666 年的 8 月份,伊丽莎白·汉考克(Elizabeth Hancock)在短短的 8 天时间里接连埋葬了 7 个人,他们是自己的丈夫,和自己的六个孩子 —— 他们死于黑死病。这种致命的瘟疫,在此之前的四五百年间,在欧洲大陆上间歇性爆发,前后夺走了 1.5 亿人的生命。
伊丽莎白所在的村庄叫伊姆村(Eyam),当时大约有 344 个村民。村里的黑死病是从 1665 年的夏天开始的,最先死掉的是村里的一个裁缝,很快,他家里的其他人也相继染病死亡。后来才知道,这个裁缝从伦敦采购的布料里有跳蚤,这些从伦敦过来的跳蚤把黑死病带到了伊姆村。
没有人不害怕…… 害怕的第一反应当然是逃走。可是,村民们做了一个惊人的决定,他们想要先听听他们所尊重的一位牧师的看法,然后再做决定。这位刚到伊姆村不久仍在忙于建立自己声望的牧师名字叫威廉·蒙佩森(William Monpesson)。更为惊人的是,蒙佩森牧师告诉大家,我们不能跑,因为逃跑即意味着说把疾病继续传播下去。蒙佩森对村民表示,只要自己有一口气在,就会在村里一直陪着大家…… 再次更为惊人的是,绝大多数村民竟然坚忍地接受了蒙佩森牧师的建议,发誓留下来。村民们在村口放了一块大石头,以此为限自我隔离。
今天的我们已经很难想象当年的自我隔离有多么艰难。到了 1666 年中旬,村里死亡人数已经高达 200 人。村里的石匠也去世了,于是,村民们只好自己刻墓碑…… 尸体只能由亲人用绳子远远地套上拖走,礼拜也是露天举行的,以减少疾病的传播…… 瘟疫爆发 14 个月后,黑死病突然消失了。这时候,死亡人数累计达 273 人,村里只剩下 83 个人幸存。
这一次的黑死病爆发,迅速散播欧洲各地,造成人口大量死亡,然而伊姆村周围的城镇却未受感染。伊姆村损失了 3/4 的人口,却成功阻断了感染,保全了英格兰北部数千万人的性命。
这真是个惊人的真实故事。
他们为什么如此坚忍,又,他们是怎样做出如此坚忍的决定的呢?
> 大抵上是因为他们有比自己和自己的生命更为关心的人或事罢。
当年的会议上,蒙帕森牧师说的话大概是这样的:
> “如果我们离开,就等于把疾病再传染给他人。或许,我们可以把这个灾难视为一个礼物,因为它能让我们证明,这世界上的确有善良和真爱。”
疫情过后,蒙佩森牧师也幸存了下来。1669 年,蒙佩森牧师离开伊姆村,前往伊克岭(Eakring, Nottinghamshire)工作。可是,因为他来自疫区,伊克岭的人们并不欢迎他,甚至害怕他,乃至于蒙佩森牧师常年躲在罗浮德公园的一个简陋小棚屋里生活。
有兴趣的同学可以多看看关于伊姆村的文章,比如这篇[英文文章](https://www.bbc.com/news/uk-england-35064071)。
## 4. 恐惧和慌张其实可被分离
在生命受到威胁的时候,没有人不害怕。不仅是害怕,并且应该是极度恐惧。
感到害怕并不丢人 —— 这是正常现象,求生欲是任何一个物种都天然最强烈的欲望。
然而,恐慌却是极其有害的。恐慌只有负面作用没有正面作用,只能是消极的恐慌,会使人做出不理性的选择,进一步恶化早已经存在且不可回避的危险。
恐慌是什么?顾名思义,恐慌就是恐惧引发的慌张。
其实把恐惧和慌张分离开来是很容易的 —— 比绝大多数人以为的容易得多。
你一定有过在电影院里看恐怖片的经历。恐怖片的导演、编剧、摄影、主演以及配角,他们整套班子都在有过之而无不及地穷尽一切手段让你感受到恐惧,因为只有这样,他们才算是成功。你坐在那里,一次又一次地被惊吓,一次又一次地真真切切地感受到恐惧……
然而,重点在于,你竟然不会慌张!
你看,你事实上已经有过把恐惧和慌张分离开来的实战经验 —— 只是你没想到而已。
从心理学的专业角度去讲的话,是这样的:
> 坐在电影院里看恐怖片的时候,是你的清醒意识(Conscious)在感受恐惧,但,你的潜意识(Subconscious)实际上并不恐惧 —— 所以你不慌张。
1896 年,世界上第一部电影(当然是无声电影),火车进站(***The Arrival of the Mail Train***)首映的时候,观众被直冲着他们而来的真人大小*列车*的移动画面吓得惊慌失措,人们尖叫着跑到放映室的后面。
他们的恐惧和慌张为什么没有被分开呢?
这就要解释一下我们大脑中潜意识的工作原理才能说得清楚。
## 5. 潜意识的基础工作原理
我们现在最熟悉的电子设备很可能只能是手机了。我们的手机有起码两组摄像头,前置一组,后置一组。
我们的意识就好像是手机一样,有两组摄像头在随时在记录我们所遭遇的一切。你的清醒意识就好像是前置摄像头,我们把它称作摄像头A,而你的潜意识就好像另外一个摄像头,我们把它称作摄像头 B。
这两个摄像头的区别在于,摄像头 A 焦距很长,所以清晰范围很小,拍摄角度也很小。因为我们的清醒意识是要对已获得数据进行及时处理的,所以,它也确实处理不了太多的信息。而摄像头 B 是全景摄像头,焦距很短,所以,事实上清晰范围极大,拍摄角度也是 360 无死角的…… 因为,我们的潜意识不负责处理信息,它只负责记录,不停地记录,不做任何处理。
你的潜意识不停地记录,所以它一定需要一块很大很大的硬盘才能把那些经年累月拍下来的视频保存下来。虽然它从不做实际思考,也不做实际判断,但,它有个很容易理解的机制,就是重复者优先。也就是说,若是某一个模式反复发生,那么它就会把这个模式放到最容易调出的地方。
看到有什么东西迎面冲过来,不需要你的清醒意识思考,你的潜意识会让你立刻跳起来逃离 —— 这是因为你和你的祖先重复过很多次的经历,所以,你的潜意识早就为这种情况建立了快捷方式,不假思索地采取正确行动,这多方便啊!
在 1896 年去电影院看火车进站的观众,就是这样被他们的潜意识直接驱动,而后逃离电影院的。
那为什么你现在反应没那么激烈了呢?因为你坐在电影院看迎面冲过来什么东西的次数已经足够多了,乃至于你的潜意识早就为这种情况建立了新的快捷方式,所以,即便你仍然会被刺激到,你的身体甚至可能会不由自主地猛烈晃一下,但,“惊慌失措地跑掉”,在你身上基本上不太可能发生了。
接下来的结论就很自然了:
> 你的潜意识是可以被改变的……
不仅如此,既然它是可以被改变的,为什么不找找方案,把它往更好、更有效的方向上改变呢?即,你的潜意识是可被训练的。
## 6. 恐慌的根源是什么?
恐惧和慌张,并不见得总是联系在一起。但,若是你真的恐慌了,那意味着什么呢?恐慌的根源究竟是什么呢?
清醒意识和潜意识的关系,有点像老板和秘书之间的关系。清醒意识就好像是老板,而潜意识就好像是老板的秘书 —— 老板需要什么,秘书就要提供支持……
老板感到恐惧了,秘书却拿不出预案 —— 老板就慌了…… 就这么简单。
有统计表明,人们因需要上台演讲产生的恐慌远远大于因思考死亡而带来的恐慌。为什么?因为上台演讲对绝大多数人来说是前所未有的经历,所以,潜意识,即,你这个老板的秘书没有任何预案,因为根本就没有任何重复的经历,别说重复了,连第一次都没有过,怎么可能不是最恐慌?
平日里死亡显得没那么可怕,因为你的秘书知道死亡是很久以后的事…… 可当重大病毒性疫情发生的时候,你这是前所未有的第一次感受到如此近距离的死亡,而你的秘书当然没有任何预案,所以,你自然而然地开始恐慌。
所以你就明白了,所谓的冷静,不是凭空而来的。在绝大多数情况下,所谓的冷静,总是以丰富的经验为前提。处变不惊从来都不是天生的特质,恰恰相反,它只能来自于积累。
从我自己的角度来看也是一样的。若是当年没有在北京经历过 2003 年的 SARS 疫情,现在的我很可能不太容易像现在这样情绪相对比较稳定。
## 7. 潜意识的好玩之处
潜意识只负责记录,它没有思考能力,没有分辨能力,它最多只能识别重复。
于是,出现了一个格外好玩的现象<sup>[1]</sup>:
> 秘书无法分辨老板是正在亲身经历,还是老板正在生动地想象……
对秘书来说,无论是亲身经历,还是生动的想象,都是一样的,记录下来就是了。记着记着发现重复的模式,做快捷方式就好了……
现在这个技巧在很多领域都是常识。比如,国际比赛的运动员就普遍使用这种技巧。他们会在比赛之前反复冥想,细致地想象在比赛场上的每一个细节,然后想象自己完美发挥的样子,甚至连最终拿下冠军之后的呼吸方式都要 “演练” 很多遍…… 他们会调用所有的感官,想象嗅觉,想象视觉,想象听觉,想象触觉,甚至会想象现场汗水的味道…… 想象越是逼真、越是细节丰富,“秘书” 越是信以为真。这么做的最明显效果就是他们上场之后全无紧张,非常轻松,因为无论遇到任何情况,“秘书” 都有足够的预案。
所以,看电影并不仅仅是娱乐而已。也许你看过不少灾难片,但很可能你只是杀时间而已。有些人看过灾难片之后,会不由自主地想,如果我是主角,我会怎么做?如果我遇到类似的情况,谁对我最重要,最高的优先级到底是什么?故事里谁的下场不好是因为他的选择错误?如果他换一种选择会是怎样的解决?如果我是他,我会不会做出同样的选择?想得越细,你的 “秘书”(即,你的潜意识)就越会信以为真 —— 于是,你的秘书就相对于他人的秘书多了一些 “预案”;于是,相对来看,类似情形发生的时候,你更容易相对冷静。
以下是一些与疫情有关的电影电视剧,随便挑几个看看:
> * [恐怖地带](https://movie.douban.com/subject/1301419/) (1995)
> * [惊变28天](https://movie.douban.com/subject/1306421/) (2002)
> * [天外来菌](https://movie.douban.com/subject/2133368/) (2008)
> * [嗜血破晓](https://movie.douban.com/subject/1972732/) (2009)
> * [传染病](https://movie.douban.com/subject/4301043/) (2011)
> * [流感](https://movie.douban.com/subject/10432911/) (2013)
> * [末日浩劫](https://movie.douban.com/subject/19976260/) (2013)
> * [复生](https://movie.douban.com/subject/21973146/) (2013)
> * [平常的心](https://movie.douban.com/subject/6776051/) (2014)
> * [灭绝](https://movie.douban.com/subject/25884436/) (2015)
> * [隔离死城](https://movie.douban.com/subject/26384948/) (2016)
> * [血疫](https://movie.douban.com/subject/26581181/)(2019)
## 8. 恐慌的即时应对策略
恐慌发生的时候,本质上只不过是你这个老板的秘书没有预案而已。
所以,你要做的最直接最简单的事情,就是告诉你的 “秘书”:
> 行了行了,我知道你没招了…… 你先别闹,让我想想办法。
潜意识这个秘书的唯一核心任务就是支持你躲避危险。当它无法支持你的时候,它就胡搞瞎搞,给你调出一切它能调出的过往记录 —— 这就是为什么在慌乱之中你会有各种稀奇古怪不合情不合理甚至干脆算得上是邪恶的念头。
有人建议你对自己说,“我要冷静!”
这种粗暴的建议通常是没有效果的。
就好像之前讲过的,潜意识是无法分辨你的实际经历和你的想象一样,你的潜意识无法分辨你的实际感受。如果你想要你的 “秘书” 安静下来,不要胡闹,最有效的方法是故意让身体放松下来。
把你的身体舒展开来,把你的呼吸速度放慢,或者,把你的目光聚焦到远处,甚至找一个宽阔的空间都有帮助。做出你在安全区域舒适状态下经常做的动作,你的潜意识就会平静下来,因为它相信你已经处于安全和舒适的状态。它就又开始有条不紊按部就班地工作了。
于是,你这个老板终于可以集中精力琢磨应对方案了。
## 9. 以理服己才是正确的选择
人们总觉得应该以理服人 —— 结果呢?总是以失败告终。
事实上,最应该以理相服的,是自己啊!理性对待任何人都挺困难的,既然如此,为什么不理性对待自己呢?这才是对自己真的好啊!
自己跟自己说话,在很长一段时间里都被认为是 “精神病症状”。可事实上,我们的大脑的确就是如此工作的,你的脑子里有清醒意识,还有另外一个工作方式并不相同的潜意识,所以,你的脑子里总是有不止一个声音,不是吗?
(事实上,潜意识中,也有两个部分,一部分产生直觉,另外一部分产生情绪。这就三个声音了,不是吗?柏拉图的类比是,一个车夫,一匹黑马,一匹白马;西格蒙德·弗洛伊德的说法是,本我,自我,和超我。)
所以,要经常跟自己对话,一问一答,有助于厘清思路:
> * 我这么想是对的吗?
> * 我的判断依据都有哪些?合理吗?靠谱吗?
> * 还有什么是我没有想到的?
> * 若是我这么做了,都有哪些后果?
> * 有没有什么后果是我无法承受的?
> * 有没有可能出现意外?
> * 最坏的情况是什么?如若发生我应该怎么办?
日常生活中从未如此 “喃喃自语” 的人,基本上只能是 “大脑简单” 的那种罢。
## 10. 灾难面前不要参与群辩
灾难发生的时候,绝大多数人是被他们的 “秘书” 所驱动的 —— 你只要想象一下一个科研大会的参会者全都是秘书的话会是怎样的景象就明白了。
每次重大疫情发生的时候,“理论” 的声音总是突然间嘈杂起来,这就好像股市里只有在股灾发生的时候人们才开始认真对待且严肃讨论 “价值投资” 一样 —— 他们不是突然理性起来了,实际上他们只是突然不知道怎么办了而已。
如果你想要保持冷静,那么就尽量不要参与关于疫情的种种 “原本应该……” 和 “本来可以……” 之类的争论,更不要参与网络上的群辩。这种争论,从来都不会有结果的,至少从来都不可能很快有结果的 —— 可是,你处于什么状态啊?兵临城下。
历史上所有的重大疫情和现在以及未来的所有重大疫情一样,每个群体,每个国家,甚至整个人类都一样,总是反应过慢,应对不足,进展缓慢……
疫情永远不可能被恰当地对待 —— 天天草木皆兵吧?实际成本太高,高到完全无法承受;稍一不留神就爆发,实际成本同样无法承受…… 这就是现实。
首先,如若灾难像这一次一样威胁到整个人类(美国紧急撤侨,随后又禁航,是历史上从未见到过的举措),事实上是没有任何人有能力负责的 —— 追责,怎么追?追到了又怎样?难道追责能缓解疫情吗?
更为重要的是,当我们身处所有人共同面对的死亡威胁之时,过去、现在、将来之中,最不值得关注的是过去,注意力最值得的用处在当下,至于未来,是活下去才能见到的东西。
还记得吗?最值得你关心的,并不仅仅是你自己,还有你所关心的人和事 —— 否则,你也不会读到这里。
为了你所关心的那个人或者那些人,你必须把所有的注意力放到最重要的地方:
> 你自己的安全 —— 生理安全和心理安全。
你必须先保证你自己的健康和安全,你才有可能保护你所关心的人。
## 11. 新型冠状病毒的一些事实
以下是一些关于新型冠状病毒的基本事实:
1. 新型冠状病毒可通过空气传染,也可通过皮肤接触传;务必要注意回避呼吸道飞沫,以及带有病毒的分泌物。
2. 常见症状包括发热、四肢乏力、干咳、以及呼吸困难。但,被传染者不一定发烧,甚至可能没有症状。
3. 新型冠状病毒被认为对人群普遍易感。老人感染上后病情会相对严重。有[研究](https://c.m.163.com/news/a/F3P7O9QR0001899O.html)表明,女性相比男性在相同的环境下感染人数更少。
4. 根据统计数据,目前每个感染者大约能够继续感染 3~5 个人。(这就是所谓的 R0 值,R naught)。
5. 针对新型冠状病毒,目前尚无有效治疗的药物与疫苗。
6. 病毒在传播过程中会不断变异。
7. 病毒在干燥的空气中大约能够存活 48 个小时,在空气中 2 小时之后活性明显下降<sup>[3]</sup>。
8. 与任何重大病毒性传染病疫情一样,隔离是唯一可能有效的抗争方案。
现在你可以通过以下链接查看疫情数据地图:
> * [2019-nCoV Global Cases (数据来自 Johns Hopkins CSSE)](https://gisanddata.maps.arcgis.com/apps/opsdashboard/index.html#/bda7594740fd40299423467b48e9ecf6)
> * [新型冠状病毒疫情实时动态(山月版,数据来自丁香园)](https://ncov.shanyue.tech)
## 12. 坏消息中的好消息
这是病毒性传染病的普遍规律:
> 一般来说,毒性大的病毒传染能力弱,反过来,传染能力强的毒性小。
这一次新型冠状病毒,传染能力极强,**R0** (读作 “R naught”)值约在 *3~5* 之间 —— 即,平均来看,新型冠状病毒在传染上一个人之后,此人能再传染三到五人。
新型冠状病毒的传染能力极强,即意味着说,它的毒性相对较低,直接结果就是染病后的死亡率相对较低。以下是近几年全球性的传染病的死亡率统计。
| 病毒名称 | 年份 | 感染人数 | 死亡人数 | 死亡率 | 波及国家数量 |
| :---------: | :--: | :------: | :------: | :----: | :----------: |
| Ebola** | 1976 | 33,577 | 13,562 | 40.4% | 9 |
| Nipah | 1998 | 513 | 398 | 77.6% | 2 |
| SARS | 2003 | 8,906 | 774 | 9.6% | 29 |
| MERS* | 2013 | 2,494 | 858 | 34.4% | 28 |
| 2019-nCoV** | 2020 | 9,812 | 213 | 2.2% | 23 |
> \* 截至 2019 年 11 月;\** 截至2020 年 1 月 31 日
>
> 数据来源:CDC,世界卫生组织,新英格兰医药杂志;参见:[Business Insider](https://www.businessinsider.com/china-virus-everything-we-know-deadly-2019-ncov-wuhan-spread-2020-1#the-total-number-of-cases-internationally-has-surpassed-that-of-the-entire-sars-outbreak-8)
与 2003 年中国经历的 SARS 的死亡率 *9.6%* 相比,目前新型冠状病毒的死亡率相对小很多,**只有 2.2%**……
## 13. 避免成为密切接触者
与病例(疑似和确诊病例)发病后有如下接触情形之一者,被定义为“密切接触者”:
> 1. 与病例共同居住、学习、工作,或有其他密切接触的人员。
> 2. 诊疗、护理、探视病例的医护人员、家属或其他与病例有类似近距离接触的人员。
> 3. 与病例乘坐同一交通工具并有近距离接触的人员。
> 4. 现场调查人员调查后经评估认为符合密切接触者条件的人员。
人民日报推出了一个工具,可以用来查询你自己是否曾经与当前已被确诊的患者同行:
> [新型冠状病毒感染的肺炎确诊患者同行程查询工具](https://h5.peopleapp.com/txcx/index.html)
>
> [另外一个查询工具](https://h5.133.cn/gtgjwap/h5/virusTrip/search)
**一旦发现自己是密切接触者(无论是被通知,还是自己经过查询获知),唯一合理的决策只有 “自我隔离”。**
首先,密切接触者并不一定是感染者。被新型冠状病毒(2019-nCoV)传染上之后,可能有症状,也可能无症状,潜伏期可长达两周(14 天)。所以,密切接触者无论有没有症状都只有一半的概率的确被传染上。
| | 染上 | 未被传染 |
| ------ | ------------------------------------------ | ---------------- |
| 有症状 | 事实上也有一半的概率自我痊愈 | 可能只是普通流感 |
| 无症状 | 可能尚处于潜伏期,同样有一半的概率自我痊愈 | 彻底没有染上 |
即便是你真的已经被感染,也实际上有一定的概率能够自我痊愈 —— 人类的免疫系统依然是非常有效的。另外,新型冠状病毒的死亡率远远低于 50%。
所以,从概率角度出发,自我隔离永远应该是第一选择,**直接去医院问诊,肯定不是最优策略。**
你必须想清楚:**在疫情发展的过程中,医院是最大的传染源。** 如果你实际上并没有被传染上,很可能会因为去了医院而反倒变成真正的感染者。
即便是你真的被传染了,你也应该知道,事实上,作为二代、甚至三代感染者,身上的病毒很可能已经有所削弱,有没有医生都一样,最终你只能靠你自身的免疫力去战胜它 —— 即便是你去了医院,被收治了之后,医生也根本无法做到马上将你治愈,而是尽力延长你的生命(痛苦)。
**要想尽一切办法尽量避免成为密切接触者。**
虽然在中国这么说必然引发争议,甚至可能招来横祸,但,**良知告诉我必须把这句话说出来**:
> **中药根本不适用于病毒性传染病**
2003 年的时候声称对 SARS 有效的板蓝根冲剂,2020 年声称对 2019-nCoV 有效的双黄连口服液,都是铁板钉钉的扯淡。坏蛋骗傻子的游戏,竟然总是可以成功,令人无语。
2020 年 1 月 31 日晚间,全国上下有很多人跑到药店门口挤着排队,抢购双黄连口服液。他们的选择令人担忧 —— 在此之前他们中的绝大多数人尚未被感染,这下可好,一夜之间个个都变成了 “密切接触者” 而又不自知…… 而后,他们散去,回到各自家中,再把 “密切接触者人数” 扩大三五倍。他们的家人实在是太可怜了!不得不与这些人接触的那些人更可怜!
## 14. 家中的有效隔离
要避免家里成为封闭空间,一定要定时通风。
回家第一件事就是洗手 —— 最好在门口放置消毒免洗手液。
在门口脱下外套。
每人一套毛巾,不要混用。
每人一套餐具,不要混用,尽管麻烦,尽量使用公筷、公勺。
养成咳嗽或者打喷嚏时用手臂捂嘴的习惯。
揉眼睛一定要用纸巾,而不是直接用手。
不要在家里过多喷洒消毒水,消毒水过敏的症状和流感以及肺炎几乎一模一样,甚至连呼吸困难症状都相同。
## 15. 练习和颜悦色的说话方式
好像所有人的感受都相同:
> 最不听话的就是家人……
好像永远如此。
可实际情况是,绝大多数人一生都未学会对待家人应该且只应该使用和颜悦色的说话方式。
当我们伤心、愤怒、紧张、激动的时候,我们口腔与声带的工作方式都会发生相应的变化,声调提高,声音会变得刺耳,语速加快 —— 所有这些的作用,只有一个,使对方的潜意识紧张起来,迅速寻找应对方式……
你必须时刻提醒自己:
> 只有当我们和颜悦色地说话的时候,对方才是老板,否则永远只能跟对方的秘书对抗。
对方的秘书懂个屁啊!智商大约相当于七八岁的小屁孩而已。可你却不知道,所以越想越生气,于是进入多重恶性循环。
平时要勤练习,无论想说什么的时候,只要对方是家人,那么,就要在脑子里把要说的话,用和颜悦色的方式演练至少一遍。
觉得自己干脆做不到的时候,不要说话。
比如,你的配偶死活不肯出门戴口罩,他就觉得无所谓…… 跟他理论是没用的,尤其是用语言理论是断然无用的。那怎么办?挡在门口,递给他口罩,等。等他戴好,你再让开。一个字都不用说。反正,不要跟他的秘书对话就对了。
再举个例子,你知道中药不管用,但家里的老人非要半夜出门排队抢购双黄连口服液怎么办?
还是一样的,不要尝试用语言说服对方。告诉他你已经通过网购帮他们买好了,耐心等等…… 哪怕是撒谎,也比他们跑出去排队成为密切接触者强,是不是?一觉醒来,等他们的“老板”上班了,秘书不闹了,再和颜悦色地告诉他们你为什么撒谎了,也是很好的啊!
## 16. 假期结束了怎么办
假期结束了,需要出去工作,不可能再躲在家里了…… 面对的危险更大了,是吧?
讲大道理是没用的。这里,我只想拷贝粘贴一段我和我的社群中某位成员的对话:
> > 笑来老师您好,我是一名国家公务人员,对于这次疫情,您那天讲课讲到恐惧不恐慌时,因为还没正式上班,待在家里不出门,我还没太有紧张感,但今天我值班,因为单位要购置防护物资,要跟别人面对面打交道,收物资然后再分发,缺货的还要去超市再找,心里开始很郁闷,因为我们当地一个超市刚有一个卖菜的服务人员被确诊,当开车在路上看到路边商店、饭店关门,三三两两的车行驶在宽阔的马路上只有路灯发着清冷的光时,我感到好孤独无助,今天只是个开始,等正常每天上班后真有紧急情况,我们必须往前冲,还不知疫情啥时候结束,第一次感到病毒、死亡离我这么近,我还有孩子,笑来老师,我除了尽可能做好防护,我该怎么办?
>
> 很多人都面临同样的问题。因为我们现在面临的是对所有人都有生命威胁的病毒。 你问问你自己,如果你逃回去,躲在家里再也不出来,那么,这个决定是如何定义你的人格的?你愿意以那样的人格继续生活一辈子吗? 其实,现在很多人都没反应过来:如果从此之后必须“苟且”,那么其实生不如死。尽可能做好防护,积极向上地活着。
>
> > 谢谢老师!
> >
> > 现在我感觉到了我内在的那份力量,笑来老师您的话总是能砸进我的心里,给我智慧,给我力量!
## 17. 助人即助己
我有一个几千人的社群。
有一天,我在社群里讲课,告诉大家如何在疫情中做好自己的心理建设。结束的时候,我告诉大家,我虽然不是医生,虽然不是疫情专家,但,我是个很好的心理咨询师,非常擅长做各种心理疏导。现在疫情这么严重,所有人都有或多或少的创伤应激障碍,如果有人觉得需要心理疏导,可以跟我私聊。
几天下来,像上一节里那样的对话,前前后后已经过千条了……
我为什么会愿意在这样的事情上耗费时间精力?原因很简单,因为我很清楚,在这样的时候,助人即助己。
许多年前我就明白这个道理。许多年前,我发现自己特别需要被人鼓励 —— 谁不是如此呢?但,真的没有任何人鼓励我…… 真的很可怜!那怎么办呢?我的解决方法就是 “开始无时不刻地鼓励任何人”。 不停地鼓励别人,总是鼓励别人的结果就是,自己变成了不需要鼓励的人 —— 事实上,每次鼓励他人的过程都是在鼓励自己啊!尤其是当我看到被鼓励的人发生变化的时候,对自己是真正的鼓励,更大的鼓励啊!
疫情刚刚爆发的时候,我自己也马上开始出现了 “创伤应激障碍” 的症状,最典型的比如,注意难以集中 —— 连续四五天,一个字都写不出来。如何自救?很简单啊,想办法救别人啊!然后我的 “秘书” 就开始给我各种支持,因为这样的事情我已经做过无数遍了,所以它有不止一个 “预案”。
我开始看书、整理资料,接受社群成员的问询,能马上应对的就马上应对,不能马上应对的就告诉对方,“容我仔细想想”……
很快,我就进入了 “**现实生活中我有能力、我有责任、我有希望**” 的状态 —— 毕竟,我的角色很轻松地唤醒了我…… 你想啊,我毕竟是一个几千人社群的群主啊!
## 18. 保护自己的免疫机能
即便是这次的疫情最终平息,另外一种病毒的疫情还是会再次发生。在任何病毒性疫情面前,能保护我们的只有我们自己的免疫机能完善 —— 面对细菌,现代医学还是有一些办法的,可面对病毒,我们几乎总是束手无策。
保证自我免疫机能有效的方式,对普通人来说,其实相当简单,却事实上不见得容易做到:
> * 每日适度运动(当前情况下就是室内运动每天至少出一次汗)
> * 每日保证充足睡眠(如有必要就服用一些睡眠辅助药物,比如褪黑素)
更为重要的是,避免过度焦虑。请牢记:
> **过度焦虑是最损伤免疫机能的!**
尽管看起来有点极端,可事实上你最好听从以下建议:
> * 干脆关掉电视(或者,只看电影和剧集)
> * 短期关掉微信朋友圈
> * 短期卸载微博、抖音
**每日查看疫情发展状况最多一次。**
尤其不要让家里的孩子过度暴露在各种疫情信息(绝大多数是令人害怕担心的重复信息)之中,他们更为脆弱,更需要保护,尤其是心理保护。如果家中有孩子,那么,大人更应该规避时时刻刻讨论疫情。每天最多开一次 “通气会”。
尤其地,家中若是有糖尿病患者的话,必须辅助并监督他控制好血糖 —— 持续高血糖会直接引发免疫机能毁损。
## 19. 学会嬉皮笑脸地生活
人们为什么总是在关键时刻 “掉链子” 呢?很可能是因为人们对待重大问题的时候都会非常严肃,甚至干脆 “不得随意发声”。
比如,面对死亡,人们就有很多的禁忌。避讳谈论,甚至干脆避讳提及…… 不小心提到了,为了辟邪还要 “呸呸呸” 至少三下…… 这种做法真的对吗?有没有不合理的地方?如果有,害处在哪里?
有一项调查发现,绝大多数在空难事件当中幸存的飞行员都有共同的特征,就是,他们的嘴很臭,啥都敢说,经常讲各种关于空难的笑话 —— 而绝大多数人是不会这么做的,觉得那样不吉利。然而,在灾难发生的那一刻,只有这些平常拿空难开了无数次玩笑的人才有可能正常思考,不受慌张的潜意识的影响<sup>[2]</sup>。而那些平日里连提都不敢提的人,到了灾难发生的时候,直接进入 “战斗、逃跑、冻僵” 这三种应激状态中最差的一种, “冻僵”。
多项调查表明,人群当中至少有 85% 甚至更高比例的人在遇到极端危险的情况直接进入 “冻僵” 状态……
而这个比例也是相对符合现实生活状态的,绝大多数人根本就不敢讨论禁忌话题,只有少数人总是嬉皮笑脸地对待它们。用我们东北话讲,就是 “皮愣嘎叽的” —— 换个角度看,这其实是心理强大的一种表现。
## 20. 危险不应改变你的价值观
一个人自身的状况会极大地影响他对世界的感知。
我被发现糖尿病之后的一段时间里,就体会过这种情况。当时我必须住院治疗观察,整个一层糖尿病患者里,我是最年轻的。我反复听到的一句话就是,“这么年轻就得糖尿病了?” 更恼人的还有比如,“咋这么年轻却这么严重?”
你可以想象,在那一小段时间里,我的心情有多么糟糕,内心有多么压抑。在这种状态下,你知道我对同样的世界产生的感知是什么样的吗?
> 连身材曼妙的美女护士在我眼里都是面目可憎的!
—— 甚至能到这个地步。
重大疫情发生了,这的确是坏事。但,除此之外,这个世界的其他部分并没有突然发生好与坏的巨大变化。原来人群中有多少比例的好人,现在还是多少比例,并没有发生变化。原来人群当中有多少比例的坏人,现在还是多少比例,并没有发生变化。
你可能觉得骗子突然多了起来,其实并没有,原来有多少骗子,现在还是那些骗子,只不过,灾难发生的时候,骗子们突然找到了机会而已……
之前有多少比例的人持有双重标准,那么,现在依然是有多少比例的人持有双重标准,只不过在重大疫情面前,他们的双重标准暴露无遗而已……
在高密度的坏消息面前,你会不由自主地低估好消息的价值。
这个假期你天天宅在家里,伸手虽然可见五指,抬眼却看不到任何其他人,于是你可能会忽略大量的好人。依然在清洁城市的环卫工人,每天奔波在路上的快递小哥,比平时忙了一百倍的警务人员…… 更不用提大量 “逆行” 的医务人员 —— 多关注一下他们吧!骗子、小偷、坏蛋混蛋,根本就不值得你关注 —— 至少现在。
## 21. 希望才是最大的支撑
人们以为的坚强,其实并不存在。
没有不害怕的英雄,没有不自私的公仆,没有无缺点的父母…… 任何人都有一大堆的问题。
我们终生都在努力克服自己的缺点,时时刻刻都在与自己的阴暗面作斗争,并且还经常失败 —— 这才是生活的普遍真相。
世界那么不完美,我们如此多的缺陷,为什么还要津津有味地活着?
因为不管是一厢情愿也好,不切实际也罢,我们总是相信明天会更好 —— 这就是动力的来源。若是没有了希望,一切都没有意义。
多年来,我一直是自己的心理咨询师,自己的心理援助师。实在无能为力的时候,我就跑去当别人的心理咨询师、心理援助师…… 每一次基本上都是相当管用的。
然而,总是有更为艰难的时刻,比如现在,地球上的每一个人都面临生命威胁的时候。那怎么办?终极的手段只能是诉诸于希望。比如,简单点,我希望这些文字能够帮助更多人。
再比如,关于这次疫情,我相信它一定会过去的。新型冠状病毒不会杀死所有人 —— 它也要利己,杀光了所有人,它就没地方呆了,它也需要宿主。所以,病毒在传染人的过程中,不断变异,实际上是为了给自己和宿主之间建立一种平衡关系…… 这是传染病常识,病毒在传染的过程中所谓的毒性一定会慢慢削弱。所以,疫情一定会过去的。
比较有趣的是,历史上从来都没有不一样的时候,每次疫情大爆发之后,随之而来的是各方面的复兴,不仅是经济复兴,通常还会伴随着文化复兴 —— 你说这不是希望这是什么?
就好像我们必须学会让 “老板” 控制 “秘书” 一样,我们其实必须学会让 “老板” 和 “秘书” 更多地关注这世界美好的一面而不是更多地关注这世界阴暗的一面 —— 并不是我们想要自欺欺人,而是为了我们自己的心理健康。尽量把自己变成更健康的人,不就是等于为这世界的美好又增添了一分吗?美好这东西,可从来都不是凭空而来的啊!
## 结语
做自己的心理咨询师,并不难,起码入门并不难。
在糟糕的闪念出现之时,你第一个要问的是,这是老板在讲话?还是秘书在讲话?如此这般,你会发现自己一下子就清醒过来了。因为你已经知道那秘书的工作原理,你也知道如何与它沟通。
不仅要在生理上保护好自己,尤为重要甚至更为重要的是,要在心理上保护好自己 —— 与其参与无意义的群辩,与其把时间和精力浪费在时时刻刻刷重复资讯,还不如多安抚一下自己的 “秘书”,也多安抚一下别人的 “秘书”…… 心理健康对免疫机能有效性有极大的帮助。
能给自己做咨询师之后,你会发现你其实可以为你所关心的人做心理咨询心理援助了。因为你会发现你现在是真正能做到 “有效倾听” 了 —— 当你发现自己能够分辨自己究竟是在与老板打交道还是与秘书打交道的那一瞬间,你几乎与大师无异了,的确有一点神奇。
能力伴随着责任。你拥有了新的角色,你开始重视**自己的选择如何定义自己的人格**。你有了比自己更为重要的人或事…… 这一瞬间,你的世界也会随之而变 —— 更为重要的是,你开始创造一个新的世界,值得拭目以待的新世界。
灾难终会发生,可若是没有最坏的时刻,怎么会有最好的时光?关键在于,你不仅开始自己制造希望,你还会因为自己扮演保护者的角色,所以会不由自主地不断传播希望。
没有理由不相信,你一定会自然而然地变成一个更好的你。
----
2020/02/02 凌晨,于北京
-----
另外,2020 年,我会每天抽出一点时间,充当免费的心理援助师。
你可以在 [Mixin Messenger](https://mixin.one/messenger) 上找到我,我的 ID 是 *26806*
-----
## 参考书籍
1. [I would, but my DAMN MIND won't let me!: a teen's guide to controlling their thoughts and feelings](https://www.amazon.com/dp/099762440X/), by Jacqui Letran (2016)
1. [Deep Survival: Who Lives, Who Dies, and Why](https://www.amazon.com/dp/B0028Z4LUU), by Laurence Gonzales (2004)
1. 李兰娟:[新型冠状病毒在空气中存活时间可达48小时](https://weibo.com/1618051664/Isvyws8Zb?ref=home&rid=0_0_8_3069137192539465079_6_0_0&type=comment)
# Negative News Neural Nets Project: Classifying Adverse Media Articles using Machine Learning Algorithms
This notebook uses a conda environment with Python 3.8.6. Some libraries, such as spaCy and nltk, may require installation if your machine does not have them.
You can use the steps below to install spaCy. If something goes awry, feel free to use pip or some Stack Overflow searching to complete the installation. The last two packages will be needed later on in the notebook; they are not essential spaCy components.
- conda install -c conda-forge spacy
- conda install -c conda-forge spacy-lookups-data
- python -m spacy download en_core_web_sm
- pip install spacy-langdetect
- conda install -c conda-forge wordcloud
Installing the nltk data packages, on the other hand, is easy: just look at the error message to understand what needs to be downloaded using nltk.download(...). I have already provided the download code for the punkt package and I don't think anything is required besides that.
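For completeness, a minimal sketch of that nltk download step (assuming `punkt` really is the only resource needed, as noted above):
```
import nltk

# Downloads the 'punkt' tokenizer models used by nltk's word/sentence tokenizers.
# Safe to re-run: nltk skips resources that are already installed.
nltk.download('punkt')
```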
## Dataset Preparation
Before doing any EDA, null-value imputation, or other dataset checks, we need to form the whole training dataset by combining the AM and NAM articles. The latter will include the random articles as well.
Let's begin with importing necessary/potentially useful stuff... Some of them below may not be used at all in the future, so the list below is tentative.
```
import warnings
warnings.simplefilter("ignore", UserWarning)
import pandas as pd
import numpy as np
import json
import spacy
import matplotlib.pyplot as plt
%matplotlib inline
# For regular expressions
import re
# For handling strings
import string
# For performing mathematical operations
import math
# Uncomment this if you're using linux
# !ls
# Let's get an overview of what our folder contains..
!dir
```
We can see that the data required are in zipped format. Let's read them with pandas.
```
am = pd.read_csv('adverse_media_training.csv.zip')
nam = pd.read_csv('non_adverse_media_training.csv.zip')
```
Let's check the labels in both datasets. We may (in fact, will :)) encounter some typos among them.
```
print(am.label.unique())
print()
print(am.label.value_counts())
print(nam.label.unique())
print()
print(nam.label.value_counts())
```
Neither dataset is clean as it stands. We need to transfer some rows between them and drop the rows with unusable labels such as 'delete', 'neither', etc.
```
# Creating the AM dataset for train
am_confirmed = am.loc[(am.label == 'am') | (am.label == 'am ')]
am_confirmed = pd.concat([am_confirmed, nam.loc[nam.label == 'am']])
am_confirmed.shape
# Creating NAM dataset for train
nam_confirmed = nam.loc[(nam.label == 'nam') | (nam.label == 'random')]
nam_confirmed = pd.concat([nam_confirmed, am.loc[(am.label == 'nam') | (am.label == 'random')]])
nam_confirmed.shape
# Let us also append the necessary labels. Actually, we can also modify the label column in both datasets directly.
am_confirmed['is_adverse_media'] = 1
nam_confirmed['is_adverse_media'] = 0
# Creating the train dataset
train = pd.concat([am_confirmed, nam_confirmed])
print(train.shape)
print()
print(train['is_adverse_media'].value_counts())
# Ratio of AM to NAM class
print('Ratio of AM to NAM articles:', round(411/318, 2))
```
Our dataset may be on the small side for now, but thankfully it is not very imbalanced.
**After adding Oskar's json data, dataset imbalance will be a problem.**
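If the imbalance does become a problem, one common mitigation is to weight classes inversely to their frequency. Below is a minimal sketch with scikit-learn; where exactly the resulting dictionary would be used is an assumption, since no model has been chosen yet.
```
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Weight each class inversely proportional to its frequency in the labels.
classes = np.unique(train['is_adverse_media'])
weights = compute_class_weight(class_weight='balanced', classes=classes, y=train['is_adverse_media'])
class_weight = dict(zip(classes, weights))
print(class_weight)  # roughly {0: 1.15, 1: 0.89} for the current 318/411 split
# Most classifiers accept this directly, e.g. LogisticRegression(class_weight=class_weight).
```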
Anyway, let's take a quick look at what type of columns the train set has, and get some summary statistics on the labels.
```
train.info()
train.is_adverse_media.describe()
```
Since our basic task is to classify text articles, we will need the *article* and *is_adverse_media* columns for the classification task.
Later on, if we decide to add an entity recognition task or turn this into a multilabel classification problem, we will need some other columns like *entity_name* as well.
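As a quick illustration of what that could look like, here is a minimal sketch that runs spaCy's built-in NER on a single article and naively checks it against *entity_name*; the matching rule is purely an assumption for illustration.
```
import spacy

# en_core_web_sm ships with an NER component out of the box.
nlp_ner = spacy.load('en_core_web_sm')

row = train.iloc[0]
doc = nlp_ner(row['article'])

# Collect the person/organisation mentions spaCy detects in the article.
detected = {ent.text for ent in doc.ents if ent.label_ in ('PERSON', 'ORG')}
print(detected)

# Naive check: does the labelled entity_name appear among the detected mentions?
entity = str(row['entity_name'])
print(entity, '->', any(entity.lower() in d.lower() for d in detected))
```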
## Data Preprocessing
Until now, we have created the training dataset in its crude form. In this part we will filter the training data and check the article column for null values or non-English text.
```
train.head()
```
Let's drop the unnecessary columns from the training dataset.
(**Question for Kristjan:** Are the columns 'url', 'full_response' and 'title' necessary for any extra analysis in the future?)
```
# Keep only needed columns
train = train.loc[:, ['entity_name', 'entity_type', 'url', 'article', 'full_response', 'explanation', 'title', 'is_adverse_media']]
train.describe(include='all')
```
Now we can narrow our focus a little bit more... Let's check if there are any nulls in article & is_adverse_media columns.
```
sum(train.article.isna()), sum(train.is_adverse_media.isna())
```
There is one last check to do before tokenizing the articles: we need to verify whether any non-English text managed to slip in during the data collection process. spaCy can do this with its spacy-langdetect extension; hopefully you succeeded in installing it.
```
#!pip install spacy-langdetect
# Make sure we only have English articles
from spacy_langdetect import LanguageDetector
nlp = spacy.load('en_core_web_sm')
nlp.add_pipe(LanguageDetector(), name='language_detector', last=True)
# Let's create an example doc object first to test spaCy's LanguageDetector.
text = 'This is an english text. Ja see on eestikeelne lause. بوغيث '
doc = nlp(text)
# document level language detection. Think of it like average language of the document!
print(doc._.language)
# sentence level language detection
for sent in doc.sents:
print(sent, sent._.language)
```
The language detector behaves unreliably when estimating the average language of such a short mixed-language document. Classifying the whole text as 85 percent Estonian is a bit too much in my opinion. Let's test it on an actual article from the train dataset.
```
example = train.article[8]
example # Clearly this one's in English.
doc = nlp(example)
print(doc._.language)
```
With long texts like the articles we collected, spaCy does a good job.
Now, let's check the whole dataset to see if any non-english article exists.
```
train['article'].apply(lambda article: nlp(article)._.language['language']).unique()
```
All of our articles are in English, so we can move on to creating tokens.
Before applying any vectorizer, we need to turn the articles into tokens by cleaning them of punctuation, extra whitespace, etc. The helper function below uses a few regular expressions to handle all of that, lowercases the text, and then lemmatizes it with spaCy while dropping stop words.
```
# The regex below can be modified later on.
def lemmatize(article):
    article = re.sub(r'http\S+', '', article)   # remove URLs
    article = re.sub(r"#(\w+)", '', article)    # remove hashtags
    article = re.sub(r"@(\w+)", '', article)    # remove @-mentions
    article = re.sub(r'[^\w\s]', '', article)   # remove punctuation
    article = re.sub(r'\w*\d\w*','', article)   # remove tokens that contain digits
    article = re.sub(' +',' ', article)         # collapse repeated spaces
    article = article.strip().lower()
    # nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])
    doc = nlp(article)
    lemmatized_article = " ".join([token.lemma_ for token in doc if (token.is_stop==False)])
    return lemmatized_article
```
Again, let's use the lemmatize function on an example to see what it does.
```
example = train.article[8]
lemmatized = lemmatize(example)
print('Before Lemmatization:')
print()
print(example)
print()
print('After Lemmatization:')
print()
print(lemmatized)
print()
```
Let's copy our train data and apply lemmatization on the articles belonging to the copy.
```
data = train[['article', 'is_adverse_media']].copy()
data = data.reset_index()
data = data.drop(['index'], axis=1)
print('Shape of our DataFrame:', data.shape)
data.head()
data['lemmatized_articles'] = data['article'].map(lemmatize)
data.head()
data = data[['is_adverse_media', 'lemmatized_articles']] # These are the only columns that we need for modeling
data = data.sample(frac = 1) # Let us not forget to shuffle the rows before train_test_split
print('Shape of our DataFrame:', data.shape)
data.head()
```
Let's save this cleaned dataframe as a csv file for later usage.
```
# Don't run this line again, the csv file is already created. This is just for explanatory purposes.
# data[['is_adverse_media', 'lemmatized_articles']].to_csv('./cleaned_lemmatized_text.csv')
```
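As a possible next step (not run here), the cleaned text could be vectorized and used to train a simple baseline classifier. A minimal sketch, assuming scikit-learn is installed; the vectorizer settings, split ratio and model below are illustrative placeholders rather than the project's agreed pipeline:
```
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Split the lemmatized articles and labels into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    data['lemmatized_articles'], data['is_adverse_media'],
    test_size=0.2, random_state=42, stratify=data['is_adverse_media'])

# TF-IDF features on the cleaned text
vectorizer = TfidfVectorizer(max_features=20000, ngram_range=(1, 2))
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

# Simple logistic regression baseline
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train_vec, y_train)
print(classification_report(y_test, clf.predict(X_test_vec)))
```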
## Functions
A *function* is a package of code we can call repeatedly, from different parts of our program. You can view it as a machine that takes some input and turns it into some output.
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/3b/Function_machine2.svg/440px-Function_machine2.svg.png" alt="A function" style="width: 33%;"/>
Functions:
- take zero or more arguments as input
- return one value (potentially a compound object) as output
Using functions serves several purposes:
1. it names a computation
1. it makes the code easier to read by hiding details inside the function
1. it means we don't have to repeat lines of code throughout our program, easing maintenance
1. it makes testing your code simpler
**Signature of a function**: what it takes as input and what it yields as output.
A function that accepts no arguments and returns `None`:
```
def print_welcome_message():
"""signature: nothing -> None"""
print ("Greetings from the temperature converter!")
ret_val = print_welcome_message()
print(ret_val)
```
Why does `None` exist? Sometimes we need to establish a variable whose value we don't know yet:
```
# before the market open, we don't know IBM's open price:
ibm_open = None
# Time passes... and the market opens:
ibm_open = 136.56
```
A function that accepts two integer arguments and returns `None`:
```
def print_stars(num_lines, num_stars):
"""signature: int, int -> None"""
for line_number in range(0, num_lines):
print('*' * num_stars)
print_stars(3, 60)
```
Another function that accepts one number, and returns a float:
```
def celsius_to_farenheit(celsius):
"""sig: number -> float"""
return celsius * (9/5) + 32
celsius_to_farenheit(100)
def nice_print(c, f):
print("{:5.2f} degrees celsius is {:5.2f} degrees fahrenheit.".format(c, f))
nice_print(100, 212)
```
Function invocation (calling our functions):
```
print_stars(1, 60)
print_welcome_message()
print_stars(1, 60)
celsius = 55.0
fahrenheit = celsius_to_farenheit(celsius)
nice_print(celsius, fahrenheit)
nice_print(20.55, celsius_to_farenheit(20.55))
nice_print(12.8, celsius_to_farenheit(12.8))
```
### Two important definitions
**function parameters**: names in a function definition that reference values used in the function body. They provide a way to supply data from the caller to be used inside the function.
**function arguments**: values provided (passed) to a function as part of a function call (or invocation).
```
# x and y are the parameters to sum:
def sum(x, y):
return x + y
x = 9
y = 10.4
# now x and y are the arguments to sum:
z = sum(x, y)
print(z)
foobar = sum("9", "bar")
print(foobar)
```
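Incidentally, calling our function `sum` shadows Python's built-in `sum` for the rest of the session; a quick hypothetical follow-on cell to illustrate (restart the kernel to get the built-in back):
```
# Our two-argument sum has replaced the built-in one, so this now fails:
try:
    sum([1, 2, 3])
except TypeError as err:
    print("TypeError:", err)
```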
### Namespaces
The idea of a [*namespace*](https://en.wikipedia.org/wiki/Namespace) is important in understanding how function arguments work. They control the [*scope*](https://en.wikipedia.org/wiki/Scope_%28computer_science%29) of a variable. Let's look at an example:
```
GDP_MULTIPLIER = 1.4 # this goes in the global namespace
def calc_gdp_effect(gov_spending):
# `gov_spending` here is local to the `calc_gdp_effect`
# namespace
gov_spending *= GDP_MULTIPLIER
print("Inside calc_gdp_effect, gov_spending =", gov_spending)
return gov_spending
def main():
# `gov_spending` here is local to the `main` namespace
gov_spending = int(input("How much will government spending increase?"))
print("That amount of spending will increase GDP by", calc_gdp_effect(gov_spending))
print("But in main, gov_spending still equals", gov_spending)
main()
def f(x):
return x**5
y = f(3)
print(y)
```
### More about `print()`
We have been using `print` since the start of the course. This function is worth a closer look because it illustrates several interesting aspects of Python functions. First of all, `print` can take any number of arguments to output:
```
print(5, 6, 7, 8)
```
Secondly, `print` can also accept *named parameters*. By default (and these default arguments are very common in Python!), `print` outputs a new line after outputting its arguments. But we can change that with the named parameter `end`:
```
print("Output", end='|')
print("separated by", end='|')
print("pipes", end='|')
```
There is another *named parameter* to `print` that lets us do something like the above in a single call: `sep`. If there are multiple values to be printed, Python separates them with a space by default, but we can use `sep` to change that:
```
x = 7
y = 8
z = 9
print(x, y, z, sep=",")
```
These concepts of *named parameters* and *default values* are hugely important in Python!
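We can give our own functions default parameter values as well. A small illustrative sketch (the function name and defaults here are invented for demonstration):
```
def describe_temperature(celsius, unit="C", precision=1):
    """sig: number, str, int -> None"""
    if unit == "F":
        value = celsius * (9/5) + 32
    else:
        value = celsius
    # round() to `precision` decimal places before printing
    print("Temperature:", round(value, precision), unit)

describe_temperature(21.456)            # uses both defaults
describe_temperature(21.456, unit="F")  # overrides one named parameter
```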
### More function definition examples:
```
def calc_mean(num1, num2):
"""sig: number, number -> float
"""
return (num1 + num2) / 2
mean = calc_mean(2, 16)
print("Mean is:", mean)
def extract_nth_char(input_string, position):
# sig: String, int -> String
if len(input_string) > position:
return input_string[position]
else:
return "ERROR"
extract_nth_char("Hello class", 5)
def print_results(index, sample, result):
# sig: int, String, String -> None
print("The {}th character of '{}' is '{}'".format(index,
sample,
result))
# A simple GPA formatter so the call below works
def fancy_print_gpa(name, gpa):
    # sig: String, float -> None
    print("{}'s GPA is: {:.2f}".format(name, gpa))
gpa = 0.8
fancy_print_gpa("Peter", gpa)
index = 25
sample = "This is a sample string"
result = extract_nth_char(sample, index)
if result != "ERROR":
print_results(index, sample, result)
else:
print("Something bad happened.")
```
### Using a `main()` function:
`main()` in Python is the customary name for a function to execute *if* your code is running as the top-level module in a Python program.
[Here is how `main()` is used.](https://docs.python.org/3/library/__main__.html)
Here is some code with a `main()` function:
```
def double_it(some_number):
return some_number * 2
def main():
print("Welcome to my program.")
value = int(input("Enter an integer and I'll double it! "))
print("Your value doubled is:",
double_it(value))
```
Now run `main()`:
```
main()
```
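In a standalone script (outside a notebook), the usual way to make sure `main()` runs only when the file is executed as the top-level module is the `__name__` check; here is a sketch of how the same program might look as a script:
```
def main():
    print("Welcome to my program.")
    value = int(input("Enter an integer and I'll double it! "))
    print("Your value doubled is:", value * 2)

# Only run main() when this file is executed directly,
# not when it is imported as a module.
if __name__ == "__main__":
    main()
```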
### Functions (Larger example)
We will calculate the area of a triangle given its three vertices, using Heron's formula on the side lengths:
s = (a + b + c) / 2
area = sqrt(s * (s-a) * (s-b) * (s-c))
```
import math
def get_coord(x_or_y):
return float(input(" Please enter " + x_or_y + ":"))
def get_vertex():
# sig: None -> float, float
x = get_coord("x")
y = get_coord("y")
return x,y
# NOTE! This function returns two values as one!
def get_triangle():
# sig: None -> (float, float, float, float, float, float)
print("\nEnter the first vertex:")
x1, y1 = get_vertex()
print("\nEnter the second vertex:")
x2, y2 = get_vertex()
print("\nEnter the third vertex:")
x3, y3 = get_vertex()
return x1, y1, x2, y2, x3, y3
# NOTE! This function returns six values as one!
def calc_side_length(x1, y1, a, b):
# sig: float, float, float, float -> float
return math.sqrt((x1-a)**2 + (y1-b)**2)
def calc_area(x1, y1, x2, y2, x3, y3):
''' return area using Heron's formula '''
# sig: float, float, float, float, float, float -> float
a = calc_side_length(x1, y1, x2, y2)
b = calc_side_length(x2, y2, x3, y3)
c = calc_side_length(x3, y3, x1, y1)
s = (1/2) * (a + b + c)
return math.sqrt(s * (s-a) * (s-b) * (s-c))
x1, y1, x2, y2, x3, y3 = get_triangle()
area = calc_area(x1, y1, x2, y2, x3, y3)
print("Area is: {:2.4f}".format(area))
```
# Write data to cache
This notebook is meant to be used together with [Read data from cache](./read_data_from_cache.ipynb) to demonstrate the use of the dataset's cache.
First we set up a simple experiment. This is copied from another notebook and can be ignored in this context.
```
%matplotlib notebook
import numpy.random as rd
import matplotlib.pyplot as plt
from functools import partial
import numpy as np
from time import sleep, monotonic
import qcodes as qc
from qcodes import Station, load_or_create_experiment, \
initialise_database, Measurement, load_by_run_spec, load_by_guid
from qcodes.tests.instrument_mocks import DummyInstrument
from qcodes.dataset.plotting import plot_dataset
import time
# a generator to simulate a physical signal, in this case an exponentially
# decaying signal
def exponential_decay(a: float, b: float):
"""
Yields a*exp(-b*x) where x is put in
"""
x = 0
while True:
x = yield
yield a*np.exp(-b*x) + 0.02*a*np.random.randn()
# preparatory mocking of physical setup
dac = DummyInstrument('dac', gates=['ch1', 'ch2'])
dmm = DummyInstrument('dmm', gates=['v1', 'v2'])
station = qc.Station(dmm, dac)
# and then a bit of "wiring" to make the dmm "measure"
# the exponential decay
ed = exponential_decay(5, 0.2)
next(ed)
def customgetter(dac):
val = ed.send(dac.ch1())
next(ed)
return val
dmm.v1.get = partial(customgetter, dac)
initialise_database()
exp = load_or_create_experiment(experiment_name='dataset_cache_test',
sample_name="no sample")
```
Now we are ready to run an experiment. Once it is running, take note of the id of the run that is created (also accessible via ``dataset.captured_run_id``), open the [Read data from cache](./read_data_from_cache.ipynb) notebook, and use that id there. After 20 seconds this notebook will start writing actual data to the dataset.
```
# And then run an experiment
meas = Measurement(exp=exp)
meas.register_parameter(dac.ch1) # register the first independent parameter
meas.register_parameter(dmm.v1, setpoints=(dac.ch1,))  # now register the dependent one
meas.write_period = 2
with meas.run() as datasaver:
time.sleep(20)
# While sleeping here start loader. From load_cached_notebook.ipynb
# this is done by loading this new run via ``captured_run_id`` printed when the measurement starts
print("done sleeping")
for set_v in np.linspace(0, 25, 100):
dac.ch1.set(set_v)
get_v = dmm.v1.get()
datasaver.add_result((dac.ch1, set_v),
(dmm.v1, get_v))
# flush so this always works
datasaver.flush_data_to_database(block=True)
time.sleep(0.1)
dataset = datasaver.dataset # convenient to have for plotting
```
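Since `plot_dataset` was imported above, the finished run can be visualised once the measurement context has exited; a minimal sketch (to be run in a separate cell after the measurement completes):
```
# Plot the completed run; plot_dataset returns the matplotlib axes and colorbars
axes, cbs = plot_dataset(dataset)
```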
Center for Continuing Education
# Programme "Python for Automation and Data Analysis"
Week 1 - 1
*Tatiana Rogovich, HSE University*
## Introduction to Python. Integers and floats. Boolean variables.
# The print() function
Python can be used to solve an enormous range of tasks. We will start with very simple ones and gradually make them more complex, finishing the course with a small project. If you have already come across programming, you will remember that the very first program is usually printing "Hello, world". Let's try to do that in Python.
```
print('Hello, world!')
print(1)
```
Note that we wrote "Hello, world!" in quotes, but not the number one. This is because in programming we deal with different data types. Python treats text as text (a string) only if we put it in quotes (single or double, it does not matter). When printed, the quotes themselves are not displayed - they are a signal to Python that what is inside them is text.
print() is a function that outputs whatever we pass to it. In other IDEs this would be output to the terminal, while in Jupyter the output is printed below the cell being run. You can recognize a function in Python by the parentheses after its name, inside which we pass the argument the function should be applied to (the text "Hello, world" or the number 1 in our case).
```
Hello = 1
world = 2
print(Hello, world)
v1 = 'Hello'
v2 = 'world'
print(v1, v2, 3, sep=' | ', end='!!!\n')
print()
print(v1, v2, 3, sep=' | ', end='!!!\n')
```
Written without quotes, we get an error. Note, by the way, that the error message very often points to what exactly went wrong, so you can try to guess what needs fixing. Python treats text without quotes as the name of a variable that has not been defined yet. Also, if you forget to open or close a quote (or mix different kinds), you will get an error as well.
Sometimes we want to comment our code so that our future selves or our colleagues have fewer questions about what was actually meant. Comments can be written right in the code; they do not affect how the program runs, as long as they are formatted correctly.
```
# Note: this is what a comment looks like - a part of the script that is not executed
# when the program runs.
# Each comment line starts with a hash sign.
"""
This is also a comment - triple quotes are usually used
when we want to write a long, detailed piece of text.
"""
print('Hello, world')
print(a)  # this raises a NameError - the variable a has not been defined yet
'42+43'
```
Note that, unlike some other IDEs (for example, PyCharm), Jupyter Notebook does not always require print() to display something. But do not treat this as standard behaviour for every IDE.
```
'Hello, world, \'"ABC '
```
Above, the output appeared next to an Out[] label. Here Jupyter is showing us the last value left in the cell's buffer. In PyCharm, for example, such output would always stay hidden until we "print" it with print(). This Jupyter feature helps us quickly check, say, what is inside a variable while we are debugging our code.
The next thing to know about a programming language is how variables are defined in it. Variables are
containers that store information (text, numbers, or more complex kinds of data). In Python
the assignment operator is the = sign.
```
x = 'Hello, world!'
y = 'Hello, Python!'
print(x + ' ' + y)
# Note that the result of this call is the same as above,
# only the text is now stored inside variables.
```
Python is a case-sensitive language, so when you create or reference variables and functions, make sure you use the correct case. The following line will therefore raise an error.
```
print(X) # we created the variable x, but X does not exist
```
Let's look at the error once more. *NameError: name 'X' is not defined* means that no variable with this name has been created in this program. Note also that in Jupyter variables persist for the whole session (as long as you are working with the notebook and have not closed it): they can be created in one cell and used in another. Let's try accessing x again.
```
print(x) # it works!
```
# Data types: integers (int)
We will start our tour of data types with integers. If you happen to know other programming languages, note that Python is dynamically typed: you do not have to declare which data type you want to put into a variable - Python figures it out itself. You can check the type of a value with the type() function, passing it the data or a variable as an argument.
**INTEGERS (INT):** 1, 2, 592, 1030523235 - any whole number without a fractional part.
```
y = 2
print(type(2))
print(type(y))
```
Note that above we called one function inside another.
type(2) returns the type of the value (int for integer).
To display that value, we have to "print" it.
The most elementary way to use Python is as a calculator. Let's see how
it subtracts, adds and multiplies.
```
print(2 + 2)
print(18 - 9)
print(4 * 3)
a = 1.1
b = int(a)
print(b)
2 ** 2020
```
Division requires a little care. There are two kinds of division: the usual one, which gives a fraction when dividing 5 by 2, and integer division, which returns only the whole part of the quotient.
```
print(5 / 2) # this kind of division produces a different data type (float); more on it later.
print(5 // 2)
```
And if we need the remainder of a division, we can use the modulo operator %.
```
print(5 % 2)
```
Another mathematical operation we can perform without loading any special math libraries is
raising a number to a power.
```
print(5**2)
```
All of this works the same way with variables containing numbers.
```
a = 2
b = 3
print(a ** b)
# will the result change if we overwrite the variable a?
a = 5
print(a ** b)
```
It often happens that we have read some data as text and arithmetic operations do not work on it. In that case we can use the int() function to convert a string (more on strings below) into a number, provided the string can be interpreted as a number in base ten.
```
print(2 + '5') # error: we cannot add an integer and a string
print(2 + int('5')) # we converted the string to a number and everything works
int('text') # this also raises an error, because the string does not represent a number
```
## (∩`-´)⊃━☆゚.*・。゚ Exercise
### Sum of the digits of a three-digit number
Given the three-digit number 179, find the sum of its digits.
**Output format**
Print the answer.
**Answer**
Program output:
17
```
# (∩`-´)⊃━☆゚.*・。゚
x = 179
x_1 = x // 100
x_2 = x // 10 % 10
x_3 = x % 10
print(x_1, x_2, x_3) # test output: check that we extracted the digits correctly
print(x_1 + x_2 + x_3) # the answer
```
## (∩`-´)⊃━☆゚.*・。゚ Exercise
### Digital clock
Given a number N: N minutes have passed since the start of the day. Determine how many hours and minutes a digital clock shows at that moment.
**Input format**
An integer N - positive, not exceeding 10⁷.
**Output format**
The program must print two numbers: the number of hours (from 0 to 23) and the number of minutes (from 0 to 59).
Note that N may be larger than the number of minutes in a day.
#### Examples
Test 1
**Input:**
150
**Program output:**
2 30
```
# (∩`-´)⊃━☆゚.*・。゚
minutes = 150
print(minutes // 60 % 24, minutes % 60)
```
# Data types: boolean variables (bool)
The next data type is the boolean. A boolean variable can take only two values: **true (True)** or **false (False)**. In Python this type is called **bool**.
```
print(type(True), type(False))
```
Boolean variables are most often used in if-else conditionals and in while loops that stop on a condition. In the data analysis part of the course we will see another common use: boolean masks for filtering data (for example, keeping only the rows where age is greater than 20).
Note that True and False must be written with a capital letter and without quotes; otherwise you may get an error.
```
print(type('True')) # type str - a string
print(true) # error: Python thinks this is the name of a variable
```
As with numbers and strings, type conversion works for booleans. You can turn anything into a boolean with the bool() function.
For numbers the conversion works as follows: 0 becomes False, everything else becomes True.
```
print(bool(0))
print(bool(23))
print(bool(-10))
```
An empty string converts to False; all other strings convert to True.
```
print(bool(''))
print(bool('Hello'))
print(bool(' ')) # even a string containing a single space is True
print(bool('False')) # and even the string 'False' is True
```
And if needed, a boolean can be converted to int. No surprises here: zero and one.
```
print(int(True))
print(int(False))
```
## Logical expressions
Let's see where this new data type is used.
We will work with logical expressions, whose result is a boolean value.
A logical expression is a truth test: it evaluates to True if the statement holds and to False if it does not.
Logical expressions use comparison operators:
* == (equal)
* != (not equal)
* \> (greater than)
* < (less than)
* \>= (greater than or equal)
* <= (less than or equal)
```
print(1 == 1)
print(1 != '1')
c = 1 > 3
print(c)
print(type(c))
x = 5
print(1 < x <= 5) # comparisons can be chained
```
Logical expressions can be combined with the following logical operators:
* logical AND (and) - the expression is true only when both parts are true, otherwise it is false
* logical OR (or) - the expression is false only when both parts are false, otherwise it is true
* logical negation (not) - turns True into False and vice versa
```
print((1 == 1) and ('1' == 1))
print((1 == 1) or ('1' == 1))
print(not(1 == 1))
print(((1 == 1) or ('1' == 1)) and (2 == 2))
```
## (∩`-´)⊃━☆゚.*・。゚ Exercise
## Vasya in Italy
Vasya went to Italy for one semester on an exchange programme. The only shop in town is open from 6 to 8 in the morning and from 16 to 17 in the evening (inclusive). Vasya has not managed to get to the shop for several days and is starving. He can come to the shop at X o'clock. If the shop is open at X o'clock, print True; if it is closed, print False.
The single line of input contains an integer X between 0 and 23.
**Input format**
An integer X between 0 and 23
**Output format**
True or False
#### Examples
Test 1
**Input:**
16
**Program output:**
True
```
## (∩`-´)⊃━☆゚.*・。゚
time = 16
can_visit = 6 <= time <= 8
can_visit2 = 16 <= time <= 17
print(can_visit or can_visit2)
```
# Data types: floating-point numbers (float)
Essentially, floats are decimal fractions written with a point. In Python they are called float (from the "floating" point). They can also be written in scientific notation: 1/10000 = 1e-05.
**FLOATING-POINT NUMBERS (FLOAT):** 3.42, 2.323212, 3.0, 1e-05 - a number with a fractional part (including a whole number whose fractional part equals 0).
```
4.5 + 5
```
If an operation involves both an integer and a float, the result is always a float (see above).
Let's also recall "ordinary" division, which produces a float.
```
print(11 / 2)
print(type(11 / 2))
print(11 // 2)
print(type(11 // 2))
```
Comparisons involving floats require caution. Because of how computer memory is organised, fractional numbers are stored in a rather tricky way, and a number like 0.2 is not always what it seems. This is the problem of the precision of number representation.
You can read more [here](https://pythoner.name/documentation/tutorial/floatingpoint).
```
0.2 + 0.1 == 0.3
```
We probably expected this equality to be True, but it is not. So be careful and try not to make your program's behaviour depend on conditions built from floating-point arithmetic. Let's see what these numbers actually look like in the computer's memory.
```
print(0.2 + 0.1)
print(0.3)
```
Floating-point numbers are represented in computer hardware as base-2 (binary) fractions. For example, the decimal fraction
0.125
has the value 1/10 + 2/100 + 5/1000, and in the same way the binary fraction
0.001
has the value 0/2 + 0/4 + 1/8. These two fractions have the same value; they differ only in that the first is written in base-10 notation and the second in base 2.
Unfortunately, most decimal fractions cannot be represented exactly as binary fractions. As a consequence, the decimal fractions you enter are generally stored in the computer only as binary approximations.
If you really cannot avoid such a comparison, you can do the following: instead of comparing the sum with the number directly, compare the difference between the two with some very small number (one whose size is definitely not critical for your computation). The threshold will of course differ between, say, physics calculations where high precision matters and comparisons of people's incomes.
```
0.2 + 0.1 - 0.3 < 0.000001
```
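The standard library also offers math.isclose() for exactly this kind of comparison; a short sketch of the same check with it:
```
import math
# math.isclose compares with a relative tolerance (rel_tol=1e-09 by default);
# an absolute tolerance can be given via abs_tol
print(math.isclose(0.2 + 0.1, 0.3))
print(math.isclose(0.2 + 0.1, 0.3, abs_tol=1e-6))
```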
Another problem you may run into is getting an error ('Result too large', an OverflowError) instead of a result. This is due to the limited amount of memory allocated for storing a float.
```
1.5 ** 100000
1.5 ** 1000
```
And if the calculation succeeds, the answer may still look like this. This is called scientific notation and it saves space: it stores the significant digits of the number (the mantissa) and the power of ten it must be multiplied by (the exponent). Here the result of raising 1.5 to the power 1000 is 1.2338405969061735 multiplied by 10 to the power 176 - clearly a very large number. If there were a minus instead of the plus, the power of ten would be negative (10 to the power -176) and the number would be very, very small.
As with integers, you can convert a string to a float if it represents one. This is done with the float() function.
```
print(2.5 + float('2.4'))
```
## Rounding floats
We often need to turn a float into an integer ("round" it). Python offers several ways to do this, but unfortunately none of them works exactly like the rounding we are used to, and it is important to keep this in mind.
Most of these functions are not part of Python's built-in commands; to use them we have to load the math module, which contains various special functions for mathematical computations.
```
import math # the import statement loads the module called math
```
The math module comes with the standard library included in the Anaconda distribution we used to install Jupyter Notebook, so it does not need to be downloaded separately - it can simply be imported (loaded into the memory of the current session). Sometimes a library has to be installed on the computer first with !pip install <module name> and only then imported.
The simplest way to round a number is to apply the int() function to it.
```
int(2.6)
```
Note that this method simply cuts off the fractional part (values above 0.5 are not rounded up).
```
print(int(2.6))
print(int(-2.6))
```
Rounding "to the floor" from the math module rounds down to the nearest smaller integer.
```
print(math.floor(2.6)) # to use a function from an additional module,
# write the module name first, then a dot, then the function name
print(math.floor(-2.6))
```
Rounding "to the ceiling" works the other way around: it rounds up to the nearest larger integer, regardless of the fractional part.
```
print(math.ceil(2.6))
print(math.ceil(-2.6))
```
Python itself also has the round() function. It works almost the way we are used to, except for one "but"...
```
print(round(2.2))
print(round(2.7))
print(round(2.5)) # pay attention to this line
print(round(3.5))
```
Unexpected? The reason is that Python's rounding rule for numbers whose fractional part is exactly 0.5 differs from the one we are used to: such a number is rounded to the nearest even integer - 2 for 2.5 and 4 for 3.5 (so-called banker's rounding).
## A note on importing functions
Sometimes we do not need the whole library, just a single function from it. It would be odd to keep all of "War and Peace" in memory when we are only interested in the fifth sentence on page eight.
In that case we can import the function itself from the library, and then we do not have to write the module name and a dot before it. The only pitfall is that if a built-in Python function has the same name, it gets overridden, and you will have to restart the notebook to get things back the way they were.
```
from math import ceil
ceil(2.6) # now works without the math. prefix
```
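If you need to round to a given number of decimal places rather than to a whole number, round() also accepts a second argument; a short example:
```
print(round(3.14159, 2))    # 3.14
print(round(2.71828, 3))    # 2.718
print(round(1234.567, -2))  # 1200.0 - negative ndigits rounds to tens, hundreds, ...
```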
## (∩`-´)⊃━☆゚.*・。゚ Exercise
## Fractional part
Given a real number, print its fractional part.
**Input format**
A real number
**Output format**
A real number (the answer)
#### Examples
Test 1
**Input:**
4.0
**Program output:**
0.0
```
# (∩`-´)⊃━☆゚.*・。゚
x = 4.0
print(x - int(x))
x = 5.2
print(x - int(x))
```
```
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import numpy as np
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from scipy.stats import norm,lognorm
from itertools import permutations
from scipy import linalg,optimize
```
# Some definitions
```
class combined_distribution:
    """Mixture of scipy.stats distributions with given weights."""
    def __init__(self, dists, weights):
        self.dists = dists
        self.weights = weights / weights.sum()  # normalize the mixture weights
        self.var_estimate = None
    def pdf(self, x):
        pdfs = np.array([dist.pdf(x) for dist in self.dists])
        return np.dot(self.weights, pdfs)
    def cdf(self, x):
        cdfs = np.array([dist.cdf(x) for dist in self.dists])
        return np.dot(self.weights, cdfs)
    def cdf_inv(self, u):
        # numerical inversion of the CDF, one value of u at a time
        res = np.array([
            optimize.minimize_scalar(lambda x: (self.cdf(x)-u_i)**2)
            for u_i in np.array([u]).ravel()])
        return np.array([res_i.x for res_i in res])
    def sample(self, n):
        n_repartition = np.round(self.weights * n).astype(int)
        samples = [dist_i.ppf(np.random.rand(n_i)) for (dist_i, n_i) in zip(self.dists, n_repartition)]
        return np.concatenate(samples)
    def mean(self):
        means = [dist.mean() for dist in self.dists]
        return np.dot(self.weights, means)
    def median(self):
        return self.cdf_inv(0.5)
    def var(self):
        if self.var_estimate is None:
            # crude Monte Carlo estimate via inverse-CDF sampling
            samples = self.cdf_inv(np.random.random(100))
            self.var_estimate = np.mean(np.square(samples)) - np.square(samples.mean())
        return self.var_estimate
    def std(self):
        return np.sqrt(self.var())
    def sample_mean_dist(self, n):
        std_estimation = self.std() / np.sqrt(n)
        return norm(loc=self.mean(), scale=std_estimation)
```
# An example distribution
```
dists = np.array([norm(loc=-2),norm(loc=0,scale=0.5),norm(loc=4,scale=2)])
weights = np.array([3,2,5])
complex_dist = combined_distribution(dists,weights)
print("Distribution Multi-modale")
for dist,weight in zip(complex_dist.dists,complex_dist.weights):
print("N({},{}) x {}".format(dist.mean(),dist.var(),weight))
print("Moyenne :",complex_dist.mean())
print("Median :",complex_dist.median())
print("Variance :",complex_dist.var())
print("Ecart-type :",complex_dist.std())
x = np.linspace(-5,10,151)
fig, ax1 = plt.subplots()
plt.yticks(np.arange(0,1,step=0.1))
ax2 = ax1.twinx()
ax1.plot(x, complex_dist.pdf(x), 'b-')
ax2.plot(x, complex_dist.cdf(x), 'r-')
ax1.set_xlabel('X data')
ax1.set_ylabel('PDF', color='b')
ax2.set_ylabel('CDF', color='r')
ax1.plot([complex_dist.mean()]*2,[0,complex_dist.pdf(complex_dist.mean())],color='k')
ax2.plot([complex_dist.median(),x.max()],[0.5,0.5],"k--")
ax2.plot([complex_dist.median()]*2,[0,0.5],color='k',linestyle="--")
ax2.plot([complex_dist.cdf_inv(0.25),x.max()],[0.25,0.25],"--",color="gray")
ax2.plot([complex_dist.cdf_inv(0.25)]*2,[0,0.25],color='gray',linestyle="--")
ax2.plot([complex_dist.cdf_inv(0.75),x.max()],[0.75,0.75],"--",color="gray")
ax2.plot([complex_dist.cdf_inv(0.75)]*2,[0,0.75],color='gray',linestyle="--")
fig.set_size_inches((8,5))
@interact(n=widgets.IntSlider(min=2,max=300,step=1,value=10,continuous_update=True))
def interact_sampling(n):
sample = complex_dist.sample(n)
fig, ax1 = plt.subplots()
x = np.linspace(-5,10,151)
ax1.set_xlim(-5,10)
plt.yticks(np.arange(0,1,step=0.1))
ax2 = ax1.twinx()
ax1.set_ylim(0,0.2)
ax2.set_ylim(0,1)
ax1.hist(sample,density=True,bins=20,zorder=1)
ax2.hist(sample,cumulative=True,density=True,bins=20,histtype="step",facecolor=None,edgecolor="red",zorder=0)
ax1.plot(x, complex_dist.pdf(x), 'k--')
ax2.plot(x, complex_dist.cdf(x), 'k--')
ax1.axvline(x=complex_dist.mean(),color='green',linestyle="-",ymax=1.0)
ax1.axvline(x=sample.mean(),color='k',linestyle="--",ymax=0.9)
ax1.set_xlabel('X data')
ax1.set_ylabel('PDF', color='b')
ax2.set_ylabel('CDF', color='r')
fig.set_size_inches((8,5))
```
# Central limit theorem
```
@interact(n_sample=widgets.IntSlider(value=50,min=1,max=1000,step=1,continuous_update=False),
sample_size=widgets.IntSlider(value=2,min=2,max=30,step=1,continuous_update=False))
def demo_central_limit_n(n_sample,sample_size):
estimate = [
complex_dist.sample(sample_size).mean()
for i in range(n_sample)]
x = np.linspace(-5,10,151)
fig, ax1 = plt.subplots()
ax1.hist(estimate,color='g',density=True,zorder=0)
ax2= ax1.twinx()
ax2.plot(x, complex_dist.pdf(x), 'b-',zorder=1)
estimation_mean_dist = complex_dist.sample_mean_dist(sample_size)
plt.axvline(x=complex_dist.mean(),color='k',linestyle="--",ymax=0.9,zorder=3)
ax1.plot(x,estimation_mean_dist.pdf(x),color='r',zorder=2)
ymin,ymax = plt.ylim()
ax1.set_ylim(0,max(ymax,1.05 * estimation_mean_dist.pdf(complex_dist.mean())))
```
# Variance vs. relative variance
```
p = np.linspace(0,1,100)
plt.plot(p,p,label="p")
plt.plot(p,1-p,label="1-p")
plt.plot(p,np.multiply(p,1-p),label="p(1-p)")
plt.legend()
p = np.linspace(0.1,0.99,100)
plt.plot(p,np.multiply(1/p,1-p),label="p(1-p)")
plt.legend()
```
```
import sys
import os
sys.path.append(os.path.abspath("../src/"))
import plot.viz_sequence as viz_sequence
import h5py
import numpy as np
import tqdm
tqdm.tqdm_notebook()
# Path to SHAP scores and TF-MoDISco results
shap_scores_path = "/users/amtseng/att_priors/results/shap_scores/profile/BPNet/BPNet_prior_r25_e17_task2_all_shap_scores.h5"
tfm_results_path = "/users/amtseng/att_priors/results/tfmodisco/profile/BPNet/BPNet_prior_r25_e17_task0_all_tfm.h5"
with h5py.File(shap_scores_path, "r") as f:
hyp_scores = f["hyp_scores"][:]
input_seqs = f["one_hot_seqs"][:]
def find_motifs(input_seqs, query_seq, center_slice):
base_dict = {"A": 0, "C": 1, "G": 2, "T": 3}
rc_base_dict = {"A": 3, "C": 2, "G": 1, "T": 0}
found = []
seq = np.array([base_dict[base] for base in query_seq])
rc_seq = np.array([rc_base_dict[base] for base in query_seq])
for i in tqdm.notebook.trange(len(input_seqs)):
input_seq = np.where(input_seqs[i][center_slice] == 1)[1]
for j in range(0, len(input_seq) - len(seq)):
if np.all(seq == input_seq[j : j + len(seq)]) or np.all(rc_seq == input_seq[j : j + len(seq)]):
found.append(i)
break
return found
for index in np.random.choice(hyp_scores.shape[0], size=5, replace=False):
viz_sequence.plot_weights((hyp_scores[index] * input_seqs[index])[570:770], subticks_frequency=100)
background_freqs = np.array([0.27, 0.23, 0.23, 0.27])
def pfm_info_content(pfm, pseudocount=0.001):
"""
Given an L x 4 PFM, computes information content for each base and
returns it as an L-array.
"""
num_bases = pfm.shape[1]
# Normalize track to probabilities along base axis
pfm_norm = (pfm + pseudocount) / (np.sum(pfm, axis=1, keepdims=True) + (num_bases * pseudocount))
ic = pfm_norm * np.log2(pfm_norm / np.expand_dims(background_freqs, axis=0))
return np.sum(ic, axis=1)
def pfm_to_pwm(pfm, pseudocount=0.001):
"""
    Converts an L x 4 PFM into an L x 4 PWM.
"""
num_bases = pfm.shape[1]
# Incorporate pseudocount by adding it to every element and renormalizing
pfm_norm = (pfm + pseudocount) / (np.sum(pfm, axis=1, keepdims=True) + (num_bases * pseudocount))
return np.log2(pfm_norm / np.expand_dims(background_freqs, axis=0))
def import_tfmodisco_motifs(
tfm_results_hdf5, min_seqlets=0, min_ic=0.6, ic_window=6, trim_flank_ic_frac=0.2,
max_length=20, plot_all_motifs=False, plot_passed_motifs=True
):
"""
    Imports the TF-MoDISco motifs and returns a final set of motifs, trimmed by information content.
    The motifs returned must have at least `min_seqlets` supporting them, and there must
    be a window of size `ic_window` with an average IC of at least `min_ic`. Finally, the resulting
motifs are trimmed by cutting off flanks whose base-level IC is below
`trim_flank_ic_frac` of the highest IC of the motif. If the remaining motif is over
`max_length`, it is also deemed to not pass, because IC is not concentrated enough.
This also only keeps motifs with overall positive contributions (i.e. no negative
seqlets).
Returns 2 parallel lists: a list of motif CWMs, and a list of motif PWMs.
"""
cwms, pwms = [], []
num_seqlets = []
with h5py.File(tfm_results_hdf5, "r") as f:
metaclusters = f["metacluster_idx_to_submetacluster_results"]
num_metaclusters = len(metaclusters.keys())
for metacluster_i, metacluster_key in enumerate(list(metaclusters.keys())):
metacluster = metaclusters[metacluster_key]
if plot_all_motifs:
print("Metacluster: %s (%d/%d)" % (metacluster_key, metacluster_i + 1, num_metaclusters))
print("==========================================")
patterns = metacluster["seqlets_to_patterns_result"]["patterns"]
num_patterns = len(patterns["all_pattern_names"][:])
for pattern_i, pattern_name in enumerate(patterns["all_pattern_names"]):
pattern_name = pattern_name.decode()
pattern = patterns[pattern_name]
seqlets = pattern["seqlets_and_alnmts"]["seqlets"]
x = np.array([int(s.split(",")[0].split(":")[1]) for s in seqlets[:].astype(str)])
print(np.max(x))
if plot_all_motifs:
print("Pattern: %s (%d/%d)" % (pattern_name, pattern_i + 1, num_patterns))
print("--------------------------------------")
print("%d seqlets" % len(seqlets))
print("Sequence")
viz_sequence.plot_weights(pattern["sequence"]["fwd"][:])
print("Hypothetical contributions")
viz_sequence.plot_weights(pattern["task0_hypothetical_contribs"]["fwd"][:])
print("Contribution_scores")
viz_sequence.plot_weights(pattern["task0_contrib_scores"]["fwd"][:])
pfm = pattern["sequence"]["fwd"][:]
act_contribs = pattern["task0_contrib_scores"]["fwd"][:]
# Check that the contribution scores are overall positive
if np.sum(act_contribs) < 0:
continue
# Check number of seqlets and IC
if len(seqlets) < min_seqlets:
continue
pwm = pfm_to_pwm(pfm)
pwm_ic = pfm_info_content(pfm)
max_windowed_ic = max(
np.sum(pwm_ic[i : (i + ic_window)]) for i in range(len(pwm_ic) - ic_window + 1)
)
if max_windowed_ic / ic_window < min_ic:
continue
# Cut off flanks from actual contribution scores and PWM based on IC of PWM
ic_trim_thresh = np.max(pwm_ic) * trim_flank_ic_frac
pass_inds = np.where(pwm_ic >= ic_trim_thresh)[0]
trimmed_cwm = act_contribs[np.min(pass_inds): np.max(pass_inds) + 1]
trimmed_pwm = pwm[np.min(pass_inds): np.max(pass_inds) + 1]
# If too long after trimming, IC is not concentrated enough; toss out;
# it is almost certainly a homopolymer repeat
if len(trimmed_cwm) > max_length:
continue
# Last check to make sure motif is overall positive
if np.sum(trimmed_cwm) < 0:
continue
cwms.append(trimmed_cwm)
pwms.append(trimmed_pwm)
num_seqlets.append(len(seqlets))
if plot_passed_motifs:
print("Final motifs: %d total" % len(cwms))
print("==========================================")
for i in range(len(cwms)):
print("Motif %d (%d seqlets)" % (i + 1, num_seqlets[i]))
viz_sequence.plot_weights(cwms[i])
viz_sequence.plot_weights(pwms[i])
return cwms, pwms
motifs = import_tfmodisco_motifs(tfm_results_path, plot_all_motifs=True, plot_passed_motifs=True)
motifs = import_tfmodisco_motifs(tfm_results_path, min_seqlets=0, min_ic=0.6, trim_flank_ic_frac=0, max_length=100, plot_all_motifs=False, plot_passed_motifs=True)
# viz_sequence.plot_weights(np.flip(motifs[0][9], axis=(0, 1)))
```
# Consistency Training with Supervision
**Author:** [Sayak Paul](https://twitter.com/RisingSayak)<br>
**Date created:** 2021/04/13<br>
**Last modified:** 2021/04/19<br>
**Description:** Training with consistency regularization for robustness against data distribution shifts.
Deep learning models excel in many image recognition tasks when the data is independent
and identically distributed (i.i.d.). However, they can suffer from performance
degradation caused by subtle distribution shifts in the input data (such as random
noise, contrast change, and blurring). So, naturally, there arises a question of
why. As discussed in [A Fourier Perspective on Model Robustness in Computer Vision](https://arxiv.org/pdf/1906.08988.pdf),
there's no reason for deep learning models to be robust against such shifts. Standard
model training procedures (such as standard image classification training workflows)
*don't* enable a model to learn beyond what's fed to it in the form of training data.
In this example, we will be training an image classification model enforcing a sense of
*consistency* inside it by doing the following:
* Train a standard image classification model.
* Train an _equal or larger_ model on a noisy version of the dataset (augmented using
[RandAugment](https://arxiv.org/abs/1909.13719)).
* To do this, we will first obtain predictions of the previous model on the clean images
of the dataset.
* We will then use these predictions and train the second model to match these
predictions on the noisy variant of the same images. This is identical to the workflow of
[*Knowledge Distillation*](https://keras.io/examples/vision/knowledge_distillation/), but
since the student model is equal to or larger than the teacher, this process is also referred to as
***Self-Training***.
This overall training workflow finds its roots in works like
[FixMatch](https://arxiv.org/abs/2001.07685), [Unsupervised Data Augmentation for Consistency Training](https://arxiv.org/abs/1904.12848),
and [Noisy Student Training](https://arxiv.org/abs/1911.04252). Since this training
process encourages a model to yield consistent predictions for clean as well as noisy
images, it's often referred to as *consistency training* or *training with consistency
regularization*. Although the example focuses on using consistency training to enhance
the robustness of models to common corruptions, it can also serve as a template
for performing _weakly supervised learning_.
This example requires TensorFlow 2.4 or higher, as well as TensorFlow Hub and TensorFlow
Models, which can be installed using the following command:
```
!pip install -q tf-models-official tensorflow-addons
```
## Imports and setup
```
from official.vision.image_classification.augment import RandAugment
from tensorflow.keras import layers
import tensorflow as tf
import tensorflow_addons as tfa
import matplotlib.pyplot as plt
tf.random.set_seed(42)
```
## Define hyperparameters
```
AUTO = tf.data.AUTOTUNE
BATCH_SIZE = 128
EPOCHS = 5
CROP_TO = 72
RESIZE_TO = 96
```
## Load the CIFAR-10 dataset
```
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
val_samples = 49500
# Use the first 49,500 samples for training and the remaining 500 for validation
new_train_x, new_y_train = x_train[:val_samples], y_train[:val_samples]
val_x, val_y = x_train[val_samples:], y_train[val_samples:]
```
## Create TensorFlow `Dataset` objects
```
# Initialize `RandAugment` object with 2 layers of
# augmentation transforms and strength of 9.
augmenter = RandAugment(num_layers=2, magnitude=9)
```
For training the teacher model, we will only be using two geometric augmentation
transforms: random horizontal flip and random crop.
```
def preprocess_train(image, label, noisy=True):
image = tf.image.random_flip_left_right(image)
# We first resize the original image to a larger dimension
# and then we take random crops from it.
image = tf.image.resize(image, [RESIZE_TO, RESIZE_TO])
image = tf.image.random_crop(image, [CROP_TO, CROP_TO, 3])
if noisy:
image = augmenter.distort(image)
return image, label
def preprocess_test(image, label):
image = tf.image.resize(image, [CROP_TO, CROP_TO])
return image, label
train_ds = tf.data.Dataset.from_tensor_slices((new_train_x, new_y_train))
validation_ds = tf.data.Dataset.from_tensor_slices((val_x, val_y))
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test))
```
We make sure `train_clean_ds` and `train_noisy_ds` are shuffled using the *same* seed to
ensure their orders are exactly the same. This will be helpful during training the
student model.
```
# This dataset will be used to train the first model.
train_clean_ds = (
train_ds.shuffle(BATCH_SIZE * 10, seed=42)
.map(lambda x, y: (preprocess_train(x, y, noisy=False)), num_parallel_calls=AUTO)
.batch(BATCH_SIZE)
.prefetch(AUTO)
)
# This prepares the `Dataset` object to use RandAugment.
train_noisy_ds = (
train_ds.shuffle(BATCH_SIZE * 10, seed=42)
.map(preprocess_train, num_parallel_calls=AUTO)
.batch(BATCH_SIZE)
.prefetch(AUTO)
)
validation_ds = (
validation_ds.map(preprocess_test, num_parallel_calls=AUTO)
.batch(BATCH_SIZE)
.prefetch(AUTO)
)
test_ds = (
test_ds.map(preprocess_test, num_parallel_calls=AUTO)
.batch(BATCH_SIZE)
.prefetch(AUTO)
)
# This dataset will be used to train the second model.
consistency_training_ds = tf.data.Dataset.zip((train_clean_ds, train_noisy_ds))
```
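As an optional sanity check (not part of the original example), we can take one batch from the zipped dataset and confirm that the clean and noisy branches carry identical labels, i.e. that the same-seed shuffling kept them aligned:
```
for (clean_images, clean_labels), (noisy_images, noisy_labels) in consistency_training_ds.take(1):
    # Both branches were shuffled with the same seed, so the labels should match exactly.
    assert tf.reduce_all(clean_labels == noisy_labels)
```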
## Visualize the datasets
```
sample_images, sample_labels = next(iter(train_clean_ds))
plt.figure(figsize=(10, 10))
for i, image in enumerate(sample_images[:9]):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image.numpy().astype("int"))
plt.axis("off")
sample_images, sample_labels = next(iter(train_noisy_ds))
plt.figure(figsize=(10, 10))
for i, image in enumerate(sample_images[:9]):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image.numpy().astype("int"))
plt.axis("off")
```
## Define a model building utility function
We now define our model building utility. Our model is based on the [ResNet50V2 architecture](https://arxiv.org/abs/1603.05027).
```
def get_training_model(num_classes=10):
resnet50_v2 = tf.keras.applications.ResNet50V2(
weights=None, include_top=False, input_shape=(CROP_TO, CROP_TO, 3),
)
model = tf.keras.Sequential(
[
layers.Input((CROP_TO, CROP_TO, 3)),
layers.experimental.preprocessing.Rescaling(scale=1.0 / 127.5, offset=-1),
resnet50_v2,
layers.GlobalAveragePooling2D(),
layers.Dense(num_classes),
]
)
return model
```
In the interest of reproducibility, we serialize the initial random weights of the
teacher network.
```
initial_teacher_model = get_training_model()
initial_teacher_model.save_weights("initial_teacher_model.h5")
```
## Train the teacher model
As noted in Noisy Student Training, if the teacher model is trained with *geometric
ensembling* and the student model is forced to mimic it, performance improves.
The original work uses [Stochastic Depth](https://arxiv.org/abs/1603.09382)
and [Dropout](https://jmlr.org/papers/v15/srivastava14a.html) to bring in the ensembling
part, but for this example, we will use [Stochastic Weight Averaging](https://arxiv.org/abs/1803.05407)
(SWA), which also resembles geometric ensembling.
```
# Define the callbacks.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(patience=3)
early_stopping = tf.keras.callbacks.EarlyStopping(
patience=10, restore_best_weights=True
)
# Initialize SWA (Stochastic Weight Averaging) from TensorFlow Addons.
SWA = tfa.optimizers.SWA
# Compile and train the teacher model.
teacher_model = get_training_model()
teacher_model.load_weights("initial_teacher_model.h5")
teacher_model.compile(
# Notice that we are wrapping our optimizer within SWA
optimizer=SWA(tf.keras.optimizers.Adam()),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=["accuracy"],
)
history = teacher_model.fit(
train_clean_ds,
epochs=EPOCHS,
validation_data=validation_ds,
callbacks=[reduce_lr, early_stopping],
)
# Evaluate the teacher model on the test set.
_, acc = teacher_model.evaluate(test_ds, verbose=0)
print(f"Test accuracy: {acc*100}%")
```
## Define a self-training utility
For this part, we will borrow the `Distiller` class from [this Keras Example](https://keras.io/examples/vision/knowledge_distillation/).
```
# Majority of the code is taken from:
# https://keras.io/examples/vision/knowledge_distillation/
class SelfTrainer(tf.keras.Model):
def __init__(self, student, teacher):
super(SelfTrainer, self).__init__()
self.student = student
self.teacher = teacher
def compile(
self, optimizer, metrics, student_loss_fn, distillation_loss_fn, temperature=3,
):
super(SelfTrainer, self).compile(optimizer=optimizer, metrics=metrics)
self.student_loss_fn = student_loss_fn
self.distillation_loss_fn = distillation_loss_fn
self.temperature = temperature
def train_step(self, data):
# Since our dataset is a zip of two independent datasets,
# after initially parsing them, we segregate the
# respective images and labels next.
clean_ds, noisy_ds = data
clean_images, _ = clean_ds
noisy_images, y = noisy_ds
# Forward pass of teacher
teacher_predictions = self.teacher(clean_images, training=False)
with tf.GradientTape() as tape:
# Forward pass of student
student_predictions = self.student(noisy_images, training=True)
# Compute losses
student_loss = self.student_loss_fn(y, student_predictions)
distillation_loss = self.distillation_loss_fn(
tf.nn.softmax(teacher_predictions / self.temperature, axis=1),
tf.nn.softmax(student_predictions / self.temperature, axis=1),
)
total_loss = (student_loss + distillation_loss) / 2
# Compute gradients
trainable_vars = self.student.trainable_variables
gradients = tape.gradient(total_loss, trainable_vars)
# Update weights
self.optimizer.apply_gradients(zip(gradients, trainable_vars))
# Update the metrics configured in `compile()`
self.compiled_metrics.update_state(
y, tf.nn.softmax(student_predictions, axis=1)
)
# Return a dict of performance
results = {m.name: m.result() for m in self.metrics}
results.update({"total_loss": total_loss})
return results
def test_step(self, data):
        # During inference, we only pass a dataset consisting of images and labels.
x, y = data
# Compute predictions
y_prediction = self.student(x, training=False)
# Update the metrics
self.compiled_metrics.update_state(y, tf.nn.softmax(y_prediction, axis=1))
# Return a dict of performance
results = {m.name: m.result() for m in self.metrics}
return results
```
The only difference in this implementation is the way the loss is calculated. **Instead
of weighting the distillation loss and student loss differently, we take their
average, following Noisy Student Training**.
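For comparison, the standard knowledge-distillation recipe combines the two losses as a weighted sum. The averaging used above corresponds to the special case sketched below, where `alpha` is a hypothetical weighting factor that is not part of this example, and the dummy scalars only stand in for the losses computed inside `train_step`:
```
# Dummy stand-ins for the losses computed inside `train_step`
student_loss = tf.constant(1.2)
distillation_loss = tf.constant(0.8)

alpha = 0.5  # hypothetical weighting factor; alpha = 0.5 reproduces the plain average
total_loss = alpha * student_loss + (1 - alpha) * distillation_loss
print(total_loss.numpy())  # 1.0
```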
## Train the student model
```
# Define the callbacks.
# We are using a larger decay factor to stabilize the training.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
patience=3, factor=0.5, monitor="val_accuracy"
)
early_stopping = tf.keras.callbacks.EarlyStopping(
patience=10, restore_best_weights=True, monitor="val_accuracy"
)
# Compile and train the student model.
self_trainer = SelfTrainer(student=get_training_model(), teacher=teacher_model)
self_trainer.compile(
# Notice we are *not* using SWA here.
optimizer="adam",
metrics=["accuracy"],
student_loss_fn=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
distillation_loss_fn=tf.keras.losses.KLDivergence(),
temperature=10,
)
history = self_trainer.fit(
consistency_training_ds,
epochs=EPOCHS,
validation_data=validation_ds,
callbacks=[reduce_lr, early_stopping],
)
# Evaluate the student model.
acc = self_trainer.evaluate(test_ds, verbose=0)
print(f"Test accuracy from student model: {acc*100}%")
```
## Assess the robustness of the models
A standard way of assessing the robustness of vision models is to record their
performance on corrupted datasets like ImageNet-C and CIFAR-10-C, both of which were
proposed in [Benchmarking Neural Network Robustness to Common Corruptions and
Perturbations](https://arxiv.org/abs/1903.12261). For this example, we will use the
CIFAR-10-C dataset, which has 19 different corruptions at 5 different severity levels. To
assess the robustness of the models on this dataset, we will do the following:
* Run the pre-trained models on the highest level of severities and obtain the top-1
accuracies.
* Compute the mean top-1 accuracy.
For the purpose of this example, we won't be going through these steps. This is why we
trained the models for only 5 epochs. You can check out [this
repository](https://github.com/sayakpaul/Consistency-Training-with-Supervision) that
demonstrates the full-scale training experiments and also the aforementioned assessment.
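For reference, a minimal sketch of such an evaluation loop is shown below. It assumes the `cifar10_corrupted` dataset from TensorFlow Datasets, whose configs are named `<corruption>_<severity>`; the exact config names and corruption list should be verified against `tfds` before use, and the model passed in is assumed to have been compiled with a loss (as the teacher model was).
```
import tensorflow_datasets as tfds

def mean_top1_on_cifar10c(model, corruptions, severity=5):
    # Evaluate `model` on each corruption at the given severity and average the top-1 accuracies.
    accuracies = []
    for corruption in corruptions:
        ds = tfds.load(
            f"cifar10_corrupted/{corruption}_{severity}", split="test", as_supervised=True
        )
        ds = ds.map(preprocess_test, num_parallel_calls=AUTO).batch(BATCH_SIZE)
        _, acc = model.evaluate(ds, verbose=0)
        accuracies.append(acc)
    return sum(accuracies) / len(accuracies)
```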
The figure below presents an executive summary of that assessment:

**Mean Top-1** results are for the CIFAR-10-C dataset and **Test Top-1** results are
for the CIFAR-10 test set. It's clear that consistency training not only enhances
model robustness but also improves standard test performance.
## Gaussian Process Latent Variable Model
The [Gaussian Process Latent Variable Model](https://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction#Gaussian_process_latent_variable_models) (GPLVM) is a dimensionality reduction method that uses a Gaussian process to learn a low-dimensional representation of (potentially) high-dimensional data. In the typical setting of Gaussian process regression, where we are given inputs $X$ and outputs $y$, we choose a kernel and learn hyperparameters that best describe the mapping from $X$ to $y$. In the GPLVM, we are not given $X$: we are only given $y$. So we need to learn $X$ along with the kernel hyperparameters.
We do not do maximum likelihood inference on $X$. Instead, we set a Gaussian prior for $X$ and learn the mean and variance of the approximate (Gaussian) posterior $q(X|y)$. In this notebook, we show how this can be done using the `pyro.contrib.gp` module. In particular, we reproduce a result described in [2].
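Schematically, the model and variational approximation constructed below are as follows (a summary rather than an exact transcription of Pyro's internals; $k$ is the RBF kernel, $\sigma^2$ the observation noise, and the prior scale of $0.1$ is taken from the code further down):

$$
X \sim \mathcal{N}(\mu_0,\, 0.1^2 I), \qquad
y_d \mid X \sim \mathcal{N}\big(0,\; k(X, X) + \sigma^2 I\big) \ \text{for each gene } d, \qquad
q(X) = \mathcal{N}(X_{\mathrm{loc}},\, X_{\mathrm{scale}}^2).
$$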
```
import os
import matplotlib.pyplot as plt
import pandas as pd
import torch
from torch.nn import Parameter
import pyro
import pyro.contrib.gp as gp
import pyro.distributions as dist
import pyro.ops.stats as stats
smoke_test = ('CI' in os.environ) # ignore; used to check code integrity in the Pyro repo
assert pyro.__version__.startswith('0.5.1')
pyro.enable_validation(True) # can help with debugging
pyro.set_rng_seed(1)
```
### Dataset
The data we are going to use consists of [single-cell](https://en.wikipedia.org/wiki/Single-cell_analysis) [qPCR](https://en.wikipedia.org/wiki/Real-time_polymerase_chain_reaction) data for 48 genes obtained from mice (Guo *et al.*, [1]). This data is available at the [Open Data Science repository](https://github.com/sods/ods). The data contains 48 columns, with each column corresponding to (normalized) measurements of each gene. Cells differentiate during their development and these data were obtained at various stages of development. The various stages are labelled from the 1-cell stage to the 64-cell stage. For the 32-cell stage, the data is further differentiated into 'trophectoderm' (TE) and 'inner cell mass' (ICM). ICM further differentiates into 'epiblast' (EPI) and 'primitive endoderm' (PE) at the 64-cell stage. Each of the rows in the dataset is labelled with one of these stages.
```
# license: Copyright (c) 2014, the Open Data Science Initiative
# license: https://www.elsevier.com/legal/elsevier-website-terms-and-conditions
URL = "https://raw.githubusercontent.com/sods/ods/master/datasets/guo_qpcr.csv"
df = pd.read_csv(URL, index_col=0)
print("Data shape: {}\n{}\n".format(df.shape, "-" * 21))
print("Data labels: {}\n{}\n".format(df.index.unique().tolist(), "-" * 86))
print("Show a small subset of the data:")
df.head()
```
### Modelling
First, we need to define the output tensor $y$. To predict values for all $48$ genes, we need $48$ Gaussian processes. So the required shape for $y$ is `num_GPs x num_data = 48 x 437`.
```
data = torch.tensor(df.values, dtype=torch.get_default_dtype())
# we need to transpose data to correct its shape
y = data.t()
```
Now comes the most interesting part. We know that the observed data $y$ has latent structure: in particular different datapoints correspond to different cell stages. We would like our GPLVM to learn this structure in an unsupervised manner. In principle, if we do a good job of inference then we should be able to discover this structure---at least if we choose reasonable priors. First, we have to choose the dimension of our latent space $X$. We choose $dim(X)=2$, since we would like our model to disentangle 'capture time' ($1$, $2$, $4$, $8$, $16$, $32$, and $64$) from cell branching types (TE, ICM, PE, EPI). Next, when we set the mean of our prior over $X$, we set the first dimension to be equal to the observed capture time. This will help the GPLVM discover the structure we are interested in and will make it more likely that that structure will be axis-aligned in a way that is easier for us to interpret.
```
capture_time = y.new_tensor([int(cell_name.split(" ")[0]) for cell_name in df.index.values])
# we scale the time into the interval [0, 1]
time = capture_time.log2() / 6
# we setup the mean of our prior over X
X_prior_mean = torch.zeros(y.size(1), 2) # shape: 437 x 2
X_prior_mean[:, 0] = time
```
We will use a sparse version of Gaussian process inference to make training faster. Remember that we also need to define $X$ as a `Parameter` so that we can set a prior and guide (variational distribution) for it.
```
kernel = gp.kernels.RBF(input_dim=2, lengthscale=torch.ones(2))
# we clone here so that we don't change our prior during the course of training
X = Parameter(X_prior_mean.clone())
# we will use SparseGPRegression model with num_inducing=32;
# initial values for Xu are sampled randomly from X_prior_mean
Xu = stats.resample(X_prior_mean.clone(), 32)
gplvm = gp.models.SparseGPRegression(X, y, kernel, Xu, noise=torch.tensor(0.01), jitter=1e-5)
```
We will use the [set_prior()](http://docs.pyro.ai/en/dev/contrib.gp.html#pyro.contrib.gp.parameterized.Parameterized.set_prior) and [autoguide()](http://docs.pyro.ai/en/dev/contrib.gp.html#pyro.contrib.gp.parameterized.Parameterized.autoguide) methods from the [Parameterized](http://docs.pyro.ai/en/dev/contrib.gp.html#module-pyro.contrib.gp.parameterized) class to set a prior and guide for $X$.
```
# we use `.to_event()` to tell Pyro that the prior distribution for X has no batch_shape
gplvm.set_prior("X", dist.Normal(X_prior_mean, 0.1).to_event())
gplvm.autoguide("X", dist.Normal)
```
### Inference
As mentioned in the [Gaussian Processes tutorial](gp.ipynb), we can use the helper function [gp.util.train](http://docs.pyro.ai/en/dev/contrib.gp.html#pyro.contrib.gp.util.train) to train a Pyro GP module. By default, this helper function uses the Adam optimizer with a learning rate of `0.01`.
```
# note that training is expected to take a minute or so
losses = gp.util.train(gplvm, num_steps=4000)
# let's plot the loss curve after 4000 steps of training
plt.plot(losses)
plt.show()
```
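If the defaults need adjusting, `gp.util.train` also accepts a custom PyTorch optimizer. A minimal sketch is below; the learning rate and step count are arbitrary illustrations, and the `optimizer` argument should be checked against the Pyro GP docs for your version:
```
optimizer = torch.optim.Adam(gplvm.parameters(), lr=0.005)
losses = gp.util.train(gplvm, optimizer=optimizer, num_steps=8000)
```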
After inference, the mean and standard deviation of the approximated posterior $q(X) \sim p(X | y)$ will be stored in the parameters `X_loc` and `X_scale`. To get a sample from $q(X)$, we need to set the `mode` of `gplvm` to `"guide"`.
```
gplvm.mode = "guide"
X = gplvm.X
```
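Optionally (not in the original tutorial), the learned posterior uncertainty can be inspected directly, assuming `X_scale` is exposed on the model the same way `X_loc` is:
```
# Average posterior standard deviation per latent dimension
X_scale = gplvm.X_scale.detach().numpy()
print(X_scale.mean(axis=0))
```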
### Visualizing the result
Let’s see what we got by applying GPLVM to our dataset.
```
plt.figure(figsize=(8, 6))
colors = plt.get_cmap("tab10").colors[::-1]
labels = df.index.unique()
X = gplvm.X_loc.detach().numpy()
for i, label in enumerate(labels):
X_i = X[df.index == label]
plt.scatter(X_i[:, 0], X_i[:, 1], c=colors[i], label=label)
plt.legend()
plt.xlabel("pseudotime", fontsize=14)
plt.ylabel("branching", fontsize=14)
plt.title("GPLVM on Single-Cell qPCR data", fontsize=16)
plt.show()
```
We can see that the first dimension of the latent $X$ for each cell (horizontal axis) corresponds well with the observed capture time (colors). In addition, the 32 TE and 64 TE cells are clustered near each other, and the fact that ICM cells differentiate into PE and EPI can also be observed in the figure!
### Remarks
+ The sparse version scales well (linearly) with the number of data points. So the GPLVM can be used with large datasets. Indeed in [2] the authors have applied GPLVM to a dataset with 68k peripheral blood mononuclear cells.
+ Much of the power of Gaussian Processes lies in the function prior defined by the kernel. We recommend users try out different combinations of kernels for different types of datasets! For example, if the data contains periodicities, it might make sense to use a [Periodic kernel](http://docs.pyro.ai/en/dev/contrib.gp.html#periodic). Other kernels can also be found in the [Pyro GP docs](http://docs.pyro.ai/en/dev/contrib.gp.html#module-pyro.contrib.gp.kernels).
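For instance, a hypothetical variation on the model above could combine an RBF kernel with a Periodic kernel; this is only a sketch using kernel classes from `pyro.contrib.gp.kernels`, not something used in this notebook:
```
periodic = gp.kernels.Periodic(input_dim=2)
combined_kernel = gp.kernels.Sum(gp.kernels.RBF(input_dim=2, lengthscale=torch.ones(2)), periodic)
```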
### References
[1] `Resolution of Cell Fate Decisions Revealed by Single-Cell Gene Expression Analysis from Zygote to Blastocyst`,<br />
Guoji Guo, Mikael Huss, Guo Qing Tong, Chaoyang Wang, Li Li Sun, Neil D. Clarke, Paul Robson
[2] `GrandPrix: Scaling up the Bayesian GPLVM for single-cell data`,<br />
Sumon Ahmed, Magnus Rattray, Alexis Boukouvalas
[3] `Bayesian Gaussian Process Latent Variable Model`,<br />
Michalis K. Titsias, Neil D. Lawrence
[4] `A novel approach for resolving differences in single-cell gene expression patterns from zygote to blastocyst`,<br />
Florian Buettner, Fabian J. Theis
# 911 Calls Capstone Project
For this capstone project we will be analyzing some 911 call data from [Kaggle](https://www.kaggle.com/mchirico/montcoalert). The data contains the following fields:
* lat : String variable, Latitude
* lng: String variable, Longitude
* desc: String variable, Description of the Emergency Call
* zip: String variable, Zipcode
* title: String variable, Title
* timeStamp: String variable, YYYY-MM-DD HH:MM:SS
* twp: String variable, Township
* addr: String variable, Address
* e: String variable, Dummy variable (always 1)
Just go along with this notebook and try to complete the instructions or answer the questions in bold using your Python and Data Science skills!
## Data and Setup
____
** Import numpy and pandas **
```
import numpy as np
import pandas as pd
```
** Import visualization libraries and set %matplotlib inline. **
```
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
%matplotlib inline
```
** Read in the csv file as a dataframe called df **
```
df = pd.read_csv('911.csv')
```
** Check the info() of the df **
```
df.info()
```
** Check the head of df **
```
df.head()
```
## Basic Questions
** What are the top 5 zipcodes for 911 calls? **
```
df['zip'].value_counts().head(5)
```
** What are the top 5 townships (twp) for 911 calls? **
```
df['twp'].value_counts().head(5)
```
** Take a look at the 'title' column, how many unique title codes are there? **
```
df['title'].nunique()
```
## Creating new features
** In the titles column there are "Reasons/Departments" specified before the title code. These are EMS, Fire, and Traffic. Use .apply() with a custom lambda expression to create a new column called "Reason" that contains this string value.**
**For example, if the title column value is EMS: BACK PAINS/INJURY , the Reason column value would be EMS. **
```
df['Reason'] = df['title'].apply(lambda title: title.split(':')[0])
```
** What is the most common Reason for a 911 call based off of this new column? **
```
df['Reason'].value_counts()
```
** Now use seaborn to create a countplot of 911 calls by Reason. **
```
sns.countplot(x='Reason', data=df)
```
___
** Now let us begin to focus on time information. What is the data type of the objects in the timeStamp column? **
```
type(df['timeStamp'].iloc[0])
```
** You should have seen that these timestamps are still strings. Use [pd.to_datetime](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html) to convert the column from strings to DateTime objects. **
```
df['timeStamp'] = pd.to_datetime(df['timeStamp'])
df['timeStamp'].head()
```
** You can now grab specific attributes from a Datetime object by calling them. For example:**
time = df['timeStamp'].iloc[0]
time.hour
**You can use Jupyter's tab method to explore the various attributes you can call. Now that the timestamp column are actually DateTime objects, use .apply() to create 3 new columns called Hour, Month, and Day of Week. You will create these columns based off of the timeStamp column, reference the solutions if you get stuck on this step.**
```
df['Hour'] = df['timeStamp'].apply(lambda time: time.hour)
df['Hour'].head()
```
** Notice how the Day of Week is an integer 0-6. Use the .map() with this dictionary to map the actual string names to the day of the week: **
dmap = {0:'Mon',1:'Tue',2:'Wed',3:'Thu',4:'Fri',5:'Sat',6:'Sun'}
```
df['Month'] = df['timeStamp'].apply(lambda time: time.month)
df['Month'].head()
df['Day of Week'] = df['timeStamp'].apply(lambda time: time.dayofweek).map({0:'Mon',1:'Tue',2:'Wed',3:'Thu',4:'Fri',5:'Sat',6:'Sun'})
df['Day of Week'].value_counts()
```
** Now use seaborn to create a countplot of the Day of Week column with the hue based off of the Reason column. **
```
ax = sns.countplot(x='Day of Week', data=df, hue='Reason')
ax.legend(loc=(1,0.76))
```
**Now do the same for Month:**
```
ax = sns.countplot(x='Month', data=df, hue='Reason')
```
**Did you notice something strange about the Plot?**
_____
** You should have noticed it was missing some months. Let's see if we can fill in this information by plotting it in another way, possibly a simple line plot that fills in the missing months. In order to do this, we'll need to do some work with pandas... **
** Now create a groupby object called byMonth, where you group the DataFrame by the month column and use the count() method for aggregation. Use the head() method on this returned DataFrame. **
```
month_counts = pd.DataFrame()
month_counts['count'] = df.groupby('Month').count()['lat']
month_counts
```
** Now create a simple plot off of the dataframe indicating the count of calls per month. **
```
sns.lineplot(x=month_counts.index, y=month_counts['count'])
# Yoni - my 2nd way - simpler based on solutions notebook
month_counts['count'].plot()
```
** Now see if you can use seaborn's lmplot() to create a linear fit on the number of calls per month. Keep in mind you may need to reset the index to a column. **
```
month_counts['Month']=month_counts.index
sns.lmplot(x='Month', y='count', data=month_counts)
```
**Create a new column called 'Date' that contains the date from the timeStamp column. You'll need to use apply along with the .date() method. **
```
df['Date'] = df['timeStamp'].apply(lambda timeStamp: timeStamp.date())
df['Date'].head()
```
** Now groupby this Date column with the count() aggregate and create a plot of counts of 911 calls.**
```
#date_freq = pd.DataFrame({'Freq': df.groupby('Date').count()['lat']})
date_freq = df.groupby('Date').count()['lat']
date_freq.head()
#sns.lineplot(x=date_freq.index, y=date_freq['Freq'])
date_freq.plot()
plt.tight_layout()
```
** Now recreate this plot but create 3 separate plots with each plot representing a Reason for the 911 call**
```
date_freq_fire = df[df['Reason'] == 'Fire'].groupby('Date').count()['lat']
plt.figure()
date_freq_fire.plot()
plt.title('Fire')
plt.tight_layout()
date_freq_ems = df[df['Reason'] == 'EMS'].groupby('Date').count()['lat']
plt.figure()
date_freq_ems.plot()
plt.title('EMS')
plt.tight_layout()
date_freq_traffic = df[df['Reason'] == 'Traffic'].groupby('Date').count()['lat']
plt.figure()
date_freq_traffic.plot()
plt.title('Traffic')
plt.tight_layout()
```
____
** Now let's move on to creating heatmaps with seaborn and our data. We'll first need to restructure the dataframe so that the columns become the Hours and the Index becomes the Day of the Week. There are lots of ways to do this, but I would recommend trying to combine groupby with an [unstack](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html) method. Reference the solutions if you get stuck on this!**
```
#hour_day = pd.DataFrame({'Freq': df.groupby(['Day of Week','Hour']).count()['lat']})
#hour_day = hour_day.unstack()
#hour_day = hour_day.xs(key='Freq', axis=1)
#hour_day
hour_day = df.groupby(['Day of Week', 'Hour']).count()['lat'].unstack()
```
** Now create a HeatMap using this new DataFrame. **
```
sns.heatmap(hour_day)
```
** Now create a clustermap using this DataFrame. **
```
sns.clustermap(hour_day)
```
** Now repeat these same plots and operations, for a DataFrame that shows the Month as the column. **
```
#month_day = pd.DataFrame({'Freq': df.groupby(['Day of Week','Month']).count()['lat']})
#month_day = month_day.unstack()
#month_day = month_day.xs(key='Freq', axis=1)
#month_day
month_day = df.groupby(['Day of Week','Month']).count()['lat'].unstack()
month_day
sns.heatmap(month_day)
sns.clustermap(month_day)
```
**Continue exploring the Data however you see fit!**
# Great Job!
# Basic Linear Algebra for Deep Learning and Machine Learning Python Tutorial
URL: https://towardsai.net/p/machine-learning/basic-linear-algebra-for-deep-learning-and-machine-learning-ml-python-tutorial-444e23db3e9e
## Implementation of PCA from Scratch
* Implement Covariance Matrix
* Derive Eigenvalues and Eigenvectors
* Analysis from Iris Dataset
```
import numpy as np
import pandas as pd
from sklearn import datasets
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
load_iris = datasets.load_iris()
iris_df = pd.DataFrame(load_iris.data, columns=[load_iris.feature_names])
iris_df.head()
```
**Check the Shape of Iris Dataset**
```
load_iris.data.shape
```
**Standardization**
It is always good to standardize the data to keep all features on the same scale.
```
standardized_x = StandardScaler().fit_transform(load_iris.data)
standardized_x[:2]
standardized_x.T
```
**Compute Covariance Matrix**
```
covariance_matrix_x = np.cov(standardized_x.T)
covariance_matrix_x
```
**Compute Eigenvalues and Eigenvectors from Covariance Matrix**
```
eigenvalues, eigenvectors = np.linalg.eig(covariance_matrix_x)
eigenvalues
eigenvectors
```
**Check variance in Eigenvalues**
```
total_of_eigenvalues = sum(eigenvalues)
variance = [(i / total_of_eigenvalues) * 100 for i in sorted(eigenvalues, reverse=True)]
variance
```
**From the above variance results:**
* 1st Component = 72.96%
* 2nd Component = 22.85%
* 3rd Component = 3.5%
* 4th Component = 0.5%
So the 3rd and 4th components have very low variance and can be dropped, because they add little value.
**Taking 1st and 2nd Components only and Reshaping**
```
eigenpairs = [(np.abs(eigenvalues[i]), eigenvectors[:,i]) for i in range(len(eigenvalues))]
# Sorting from Higher values to lower value
eigenpairs.sort(key=lambda x: x[0], reverse=True)
eigenpairs
matrix_weighing = np.hstack((eigenpairs[0][1].reshape(4,1),
eigenpairs[1][1].reshape(4,1)))
matrix_weighing
Y = standardized_x.dot(matrix_weighing)
Y
plt.figure()
target_names = load_iris.target_names
y = load_iris.target
for c, i, target_name in zip("rgb", [0, 1, 2], target_names):
plt.scatter(Y[y==i,0], Y[y==i,1], c=c, label=target_name)
plt.xlabel('PCA 1')
plt.ylabel('PCA 2')
plt.legend()
plt.title('PCA')
plt.show()
```
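As an optional cross-check (not part of the original tutorial), scikit-learn's `PCA` should report essentially the same explained-variance percentages as the manual eigendecomposition above:
```
from sklearn.decomposition import PCA

pca = PCA(n_components=2)
Y_sklearn = pca.fit_transform(standardized_x)
# Should be close to the first two percentages computed above (~72.96% and ~22.85%)
print(pca.explained_variance_ratio_ * 100)
```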
## Prediction of House Price using Linear Regression (Linear Equation)
* To understand the usage of Linear Algebra in Linear Regression
* Linear Equation in Linear Regression
* Load the training data of house prices
* The training data has only two variables: square_feet and price
```
#Download the dataset
!wget https://raw.githubusercontent.com/towardsai/tutorials/master/linear-algebra-for-ml-and-deep-learning/house_price.csv
import pandas as pd
import numpy as np
df = pd.read_csv('house_price.csv')
df.head()
```
**Calculating the Mean**
```
def get_mean(value):
total = sum(value)
length = len(value)
mean = total/length
return mean
```
**Calculating the Variance**
```
def get_variance(value):
mean = get_mean(value)
mean_difference_square = [pow((item - mean), 2) for item in value]
variance = sum(mean_difference_square)/float(len(value)-1)
return variance
```
**Calculating the Covariance**
```
def get_covariance(value1, value2):
value1_mean = get_mean(value1)
value2_mean = get_mean(value2)
values_size = len(value1)
covariance = 0.0
for i in range(0, values_size):
covariance += (value1[i] - value1_mean) * (value2[i] - value2_mean)
return covariance / float(values_size - 1)
```
**Implementing a Linear Regression**
```
def linear_regression(df):
X = df['square_feet']
Y = df['price']
m = len(X)
square_feet_mean = get_mean(X)
price_mean = get_mean(Y)
#variance of X
square_feet_variance = get_variance(X)
price_variance = get_variance(Y)
covariance_of_price_and_square_feet = get_covariance(X, Y)
w1 = covariance_of_price_and_square_feet / float(square_feet_variance)
w0 = price_mean - w1 * square_feet_mean
# prediction --> Linear Equation
prediction = w0 + w1 * X
df['price (prediction)'] = prediction
return df['price (prediction)']
```
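For reference, the coefficients computed inside `linear_regression` are the ordinary least-squares solution for a single feature:

$$
w_1 = \frac{\operatorname{Cov}(x, y)}{\operatorname{Var}(x)}, \qquad
w_0 = \bar{y} - w_1\,\bar{x}, \qquad
\hat{y} = w_0 + w_1 x.
$$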
**Calling the Linear Regression Method**
```
linear_regression(df)
```
# Starbucks Capstone Challenge
### Introduction
This data set contains simulated data that mimics customer behavior on the Starbucks rewards mobile app. Once every few days, Starbucks sends out an offer to users of the mobile app. An offer can be merely an advertisement for a drink or an actual offer such as a discount or BOGO (buy one get one free). Some users might not receive any offer during certain weeks.
Not all users receive the same offer, and that is the challenge to solve with this data set.
Your task is to combine transaction, demographic and offer data to determine which demographic groups respond best to which offer type. This data set is a simplified version of the real Starbucks app because the underlying simulator only has one product whereas Starbucks actually sells dozens of products.
Every offer has a validity period before the offer expires. As an example, a BOGO offer might be valid for only 5 days. You'll see in the data set that informational offers have a validity period even though these ads are merely providing information about a product; for example, if an informational offer has 7 days of validity, you can assume the customer is feeling the influence of the offer for 7 days after receiving the advertisement.
You'll be given transactional data showing user purchases made on the app including the timestamp of purchase and the amount of money spent on a purchase. This transactional data also has a record for each offer that a user receives as well as a record for when a user actually views the offer. There are also records for when a user completes an offer.
Keep in mind as well that someone using the app might make a purchase through the app without having received an offer or seen an offer.
### Example
To give an example, a user could receive a "buy 10 dollars, get 2 dollars off" discount offer on Monday. The offer is valid for 10 days from receipt. If the customer accumulates at least 10 dollars in purchases during the validity period, the customer completes the offer.
However, there are a few things to watch out for in this data set. Customers do not opt into the offers that they receive; in other words, a user can receive an offer, never actually view the offer, and still complete the offer. For example, a user might receive the "buy 10 dollars get 2 dollars off offer", but the user never opens the offer during the 10 day validity period. The customer spends 15 dollars during those ten days. There will be an offer completion record in the data set; however, the customer was not influenced by the offer because the customer never viewed the offer.
### Cleaning
This makes data cleaning especially important and tricky.
You'll also want to take into account that some demographic groups will make purchases even if they don't receive an offer. From a business perspective, if a customer is going to make a 10 dollar purchase without an offer anyway, you wouldn't want to send a buy 10 dollars get 2 dollars off offer. You'll want to try to assess what a certain demographic group will buy when not receiving any offers.
### Final Advice
Because this is a capstone project, you are free to analyze the data any way you see fit. For example, you could build a machine learning model that predicts how much someone will spend based on demographics and offer type. Or you could build a model that predicts whether or not someone will respond to an offer. Or, you don't need to build a machine learning model at all. You could develop a set of heuristics that determine what offer you should send to each customer (i.e., 75 percent of women customers who were 35 years old responded to offer A vs 40 percent from the same demographic to offer B, so send offer A).
# Data Sets
The data is contained in three files:
* portfolio.json - containing offer ids and meta data about each offer (duration, type, etc.)
* profile.json - demographic data for each customer
* transcript.json - records for transactions, offers received, offers viewed, and offers completed
Here is the schema and explanation of each variable in the files:
**portfolio.json**
* id (string) - offer id
* offer_type (string) - type of offer ie BOGO, discount, informational
* difficulty (int) - minimum required spend to complete an offer
* reward (int) - reward given for completing an offer
* duration (int) - time for offer to be open, in days
* channels (list of strings)
**profile.json**
* age (int) - age of the customer
* became_member_on (int) - date when customer created an app account
* gender (str) - gender of the customer (note some entries contain 'O' for other rather than M or F)
* id (str) - customer id
* income (float) - customer's income
**transcript.json**
* event (str) - record description (ie transaction, offer received, offer viewed, etc.)
* person (str) - customer id
* time (int) - time in hours since start of test. The data begins at time t=0
* value - (dict of strings) - either an offer id or transaction amount depending on the record
**Note:** If you are using the workspace, you will need to go to the terminal and run the command `conda update pandas` before reading in the files. This is because the version of pandas in the workspace cannot read in the transcript.json file correctly, but the newest version of pandas can. You can access the terminal from the orange icon in the top left of this notebook.
You can see how to access the terminal and how the install works using the two images below. First you need to access the terminal:
<img src="pic1.png"/>
Then you will want to run the above command:
<img src="pic2.png"/>
Finally, when you enter back into the notebook (use the jupyter icon again), you should be able to run the below cell without any errors.
```
import pandas as pd
import numpy as np
from datetime import datetime
import math
import seaborn as sns
import json
from matplotlib import pyplot as plt
import warnings
warnings.filterwarnings('ignore')
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn import preprocessing
from sklearn.preprocessing import StandardScaler, normalize, MinMaxScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix,accuracy_score, classification_report,f1_score
from sklearn.metrics import roc_auc_score,roc_curve, auc
from sklearn.utils import resample
from sklearn.svm import SVC
from sklearn.grid_search import GridSearchCV
from sklearn.linear_model import LogisticRegression
% matplotlib inline
# read in the json files
portfolio = pd.read_json('data/portfolio.json', orient='records', lines=True)
profile = pd.read_json('data/profile.json', orient='records', lines=True)
transcript = pd.read_json('data/transcript.json', orient='records', lines=True)
#View profile
profile.head()
profile.dtypes
profile['gender'].value_counts()
profile.describe()
sns.distplot(profile['age'],hist=True)
plt.title("Distribution of Age in Portfolio")
plt.xlabel("Age")
plt.ylabel("Count");
```
Here we can see a large spike of records with an age of 118, which looks like a placeholder for missing ages rather than a real value; we will treat these as missing data during cleaning.
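As a quick check (a small sketch, not part of the original analysis), we can count how many profiles carry this placeholder age:
```
#Count profiles with the placeholder age of 118 and the total number of profiles
(profile['age'] == 118).sum(), profile.shape[0]
```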
## Exploratory Analysis
```
sns.countplot(x='gender',data=profile);
sns.distplot(profile[profile['gender']=='M']['age'],hist=False,color="b", kde_kws={"shade": True});
sns.distplot(profile[profile['gender']=='F']['age'],hist=False,color="r", kde_kws={"shade": True});
plt.title('Age distribution by Gender')
plt.gca().get_yaxis().set_visible(False)
plt.legend(['Male','Female'],frameon=False);
sns.distplot(profile[profile['gender']=='M']['income'],hist=False,color="b", kde_kws={"shade": True});
sns.distplot(profile[profile['gender']=='F']['income'],hist=False,color="r", kde_kws={"shade": True});
plt.title('Income distribution by Gender')
plt.gca().get_yaxis().set_visible(False)
plt.legend(['Male','Female'],frameon=False);
```
## Data Processing
```
def clean_profile(profile):
'''
Function to clean profile dataframe.
INPUT - Profile dataframe
OUTPUT - Return cleaned version of profile dataframe
'''
#Convert became_member_on to datetime
profile['became_member_on'] = pd.to_datetime(profile['became_member_on'],format='%Y%m%d')
#Convert users with age 118 to np.nan
profile['age'] = profile['age'].apply(lambda x: np.nan if x ==118 else x)
#Create dummy columns for gender
genders = pd.get_dummies(profile['gender'],prefix = "gender", prefix_sep = "-")
profile = pd.concat([profile,genders],axis=1)
    #Rename the id column to customer_id
profile.rename(columns={'id':'customer_id'},inplace=True)
#Extract the number of days a user has been a member of the rewards app.
today = pd.to_datetime(datetime.today().strftime('%Y%m%d'))
profile['became_member_on'] = (today - profile['became_member_on']) / np.timedelta64(1,'D')
return profile
profile = clean_profile(profile)
profile.head()
plt.hist(profile['became_member_on']);
plt.title('Distribution of The Number of Days a User Has Been a Member');
#Print mean and median for income
profile['income'].mean(), profile['income'].median()
#Explore portfolio
portfolio.head()
def clean_portfolio(portfolio):
'''
    Function to clean the portfolio dataset. Encode the categorical variables.
Input - Portfolio dataframe
Output - Portfolio dataframe with categorical variables handled
'''
#Apply one hot encodings to channels column
#Email
portfolio['email'] = portfolio['channels'].apply(lambda x: 1 if 'email' in x else 0)
#Mobile
portfolio['mobile'] = portfolio['channels'].apply(lambda x: 1 if 'mobile' in x else 0)
#Social
portfolio['social'] = portfolio['channels'].apply(lambda x: 1 if 'social' in x else 0)
#Web
portfolio['web'] = portfolio['channels'].apply(lambda x: 1 if 'web' in x else 0)
#Create dummy columns for offer_type
offer_types = pd.get_dummies(portfolio['offer_type'], prefix ='offer_type', prefix_sep='-')
portfolio = pd.concat([portfolio.drop(['offer_type','channels'],axis=1),offer_types],axis=1)
portfolio.rename(columns={'id':'offer_id'},inplace=True)
return portfolio
portfolio = clean_portfolio(portfolio)
portfolio.head()
#Explore transcript
transcript.head()
transcript.tail()
def clean_transcript(transcript):
#Extract offer_id from value column
transcript['offer_id'] = transcript['value'].apply(lambda x: x['offer_id'] if 'offer_id' in x else (x['offer id'] if 'offer id' in x else None))
    #Create two separate columns for reward and amount
for i in ['reward','amount']:
transcript[i] = transcript['value'].apply(lambda x:x[i] if i in x else None)
transcript.drop('value',axis=1,inplace=True)
transcript.rename(columns={'person':'customer_id'},inplace=True)
#Convert transcript time from hours to days
transcript['time'] = transcript['time'] / 24
return transcript
transcript = clean_transcript(transcript)
transcript.head()
#Explore transcript for one person
transcript[transcript['customer_id']=='78afa995795e4d85b5d9ceeca43f5fef']
def transform_transcript(transcript):
'''
Function to transform transcript dataframe to return a dataframe where it shows each successful and unsuccesful offer.
Input - Transcript dataframe
Output - transformed transcript dataframe
'''
offer_customer = transcript.groupby(['customer_id','offer_id','event'])['time'].count().unstack()
offer_customer.reset_index(level=[0,1],inplace = True)
#Replace nan values with 0.0
offer_customer.fillna(0.0, inplace = True)
    #Determine which offers were successful: both 'offer completed' and 'offer viewed' counts are at least 1.
    #We can multiply the two columns together and replace any values > 0 with 1.
    #This is an important step because some offers are completed without ever being viewed,
    #meaning the offer did not actually cause the transaction.
offer_customer['successful offer'] = offer_customer['offer completed'] * offer_customer['offer viewed']
offer_customer['successful offer'] = offer_customer['successful offer'].apply(lambda x: 1.0 if x > 0 else 0.0)
offer_customer.drop(['offer completed','offer viewed','offer received'],axis=1, inplace = True)
return offer_customer
transcript = transform_transcript(transcript)
def merge_dataframes(profile,portfolio,transcript):
'''
Function to merge all the dataframes together.
Input - profile, portfolio and transcript dataframes
Output - single dataframe
'''
overall = transcript.merge(portfolio,how='left',on='offer_id')
overall = overall.merge(profile,how='left',on='customer_id')
return overall
overall_df = merge_dataframes(profile,portfolio,transcript)
overall_df.head()
```
We now have a single dataframe that combines offer information and customer information for every customer and offer combination in the transcript.
We also added a flag for whether an offer was successful, defined as an offer that was both viewed and completed by the customer.
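As a quick sanity check on the new target column (a small sketch), we can look at the share of successful offers:
```
#Proportion of successful vs unsuccessful offers
overall_df['successful offer'].value_counts(normalize=True)
```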
```
def change_offer_id(overall_df):
'''
    Function to change the offer ids into a more readable form, e.g. Offer 1, Offer 2.
Input - overall_df which is the combined dataframe from all 3 datasets.
Output - overall_df with altered offer ids.
'''
unique_ids = list(overall_df['offer_id'].unique())
for i in range(len(unique_ids)):
overall_df['offer_id'] = overall_df['offer_id'].apply(lambda x: f'Offer {i+1}' if x == unique_ids[i] else x)
return overall_df
overall_df = change_offer_id(overall_df)
overall_df.head()
sns.countplot(x='offer_id',hue='gender',data=overall_df,palette='PuBu');
plt.title('Count of Offer Type by Gender')
plt.xticks(rotation=45);
sns.countplot(x='offer_id',hue='successful offer',data=overall_df,palette='Blues');
plt.legend(['Unsuccessful','Successful'],frameon=False)
plt.title('Count of Offer Type')
plt.xticks(rotation=45);
successful = overall_df.loc[overall_df['successful offer']==1]
sns.countplot(x='offer_id',hue='gender',data=successful,palette='Blues');
plt.legend(['Male','Other','Female'],frameon=False)
plt.title('Successful Offer Type by Gender')
plt.xticks(rotation=45);
#Distribution of income whether offer was successful
sns.distplot(overall_df.loc[overall_df['successful offer'] == 1]['income'],hist=False,color='green',kde_kws={'shade':True})
sns.distplot(overall_df.loc[overall_df['successful offer'] == 0]['income'],hist=False,color='grey',kde_kws={'shade':True})
plt.legend(['Successful Offer', 'Unsuccessful Offer'], frameon=False)
plt.gca().get_yaxis().set_visible(False)
plt.title('Income Distribution');
_ = overall_df.groupby(['gender'])['successful offer'].sum()
plt.pie(_, labels = _.index,shadow=True,explode = (0.05,0.05,0.05),colors=['coral','lightblue','green']);
plt.legend(['Female','Male','Other'],frameon=False)
plt.title("Successful Offer by Gender")
plt.gca().axis('Equal');
sns.distplot(overall_df[overall_df['successful offer']==1]['duration'],hist=False,color='green',kde_kws={'shade':True});
sns.distplot(overall_df[overall_df['successful offer']==0]['duration'],hist=False,color='grey',kde_kws={'shade':True})
plt.legend(['Successful Offers','Unsuccessful Offers'],frameon=False)
plt.title('Distribution of Offer Duration')
plt.gca().get_yaxis().set_visible(False);
sns.distplot(overall_df[overall_df['successful offer']==1]['difficulty'],hist=False,color='green',kde_kws={'shade':True});
sns.distplot(overall_df[overall_df['successful offer']==0]['difficulty'],hist=False,color='grey',kde_kws={'shade':True})
plt.legend(['Successful Offers','Unsuccessful Offers'],frameon=False)
plt.title('Distribution of Offer Difficulty')
plt.gca().get_yaxis().set_visible(False);
```
## Modeling
Now that we have performed some exploratory analysis on the datasets, we can try a few different machine learning models to predict which offer is best suited to each customer.
### 1. Random Forest Classifier
We will use a Random Forest Classifier to predict which offer type would be best received by each customer.
```
def clean_overall_df(overall_df):
'''
Function to clean overall_df to return X variables and the predictor Y
Input - overall_df
output - two dataframes X and Y
X - Will be all the variables we will be using to predict the best offer type.
Y - Will be the offer type.
'''
#We want to look at only successful offers
clean_df_ = overall_df.loc[overall_df['successful offer'] == 1]
clean_df_.drop('gender',axis=1,inplace = True)
#We have missing values in income and age - fill these with the means for that column.
for col in ['age','income']:
clean_df_[col] = clean_df_[col].fillna(clean_df_[col].mean())
X = clean_df_.iloc[:,3:]
Y= clean_df_.iloc[:,1]
return X, Y
X,Y = clean_overall_df(overall_df)
X.shape, Y.shape
```
Before splitting the data into training and test sets, we need to ensure that there are no missing values.
```
overall_df.isnull().sum()
X.isnull().sum()
#train test split
X_train, X_test, y_train, y_test = train_test_split(X,Y,test_size=.25,random_state=21)
X_train.shape,X_test.shape
#Feature scaling
scaler = StandardScaler()
X_train=scaler.fit_transform(X_train)
X_test = scaler.fit_transform(X_test)
clf = RandomForestClassifier(n_estimators=20,criterion='entropy',random_state=42)
clf.fit(X_train,y_train)
y_pred = clf.predict(X_test)
confusion = confusion_matrix(y_test,y_pred)
accuracy_score(y_test,y_pred)
sns.heatmap(confusion)
plt.ylabel('True')
plt.xlabel('Predicted');
```
In the X dataframe used to train this model I included information about the actual offer, e.g. difficulty and duration. This probably explains why the classification accuracy is so high. It is not very useful, though, because we want to predict which offer would be successful using information about the customer alone.
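One way to make the customer-only selection in the next cell more robust than positional slicing is to pick the columns by name (a sketch; the column names below are assumed from the cleaning steps above and may need adjusting):
```
#Select customer-level features by name rather than position (names assumed from the cleaning above)
customer_cols = ['age', 'became_member_on', 'income', 'gender-F', 'gender-M', 'gender-O']
X_customer = X[[c for c in customer_cols if c in X.columns]]
X_customer.head()
```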
```
X,Y = clean_overall_df(overall_df)
#Only keep information in the X dataframe that refers to the user.
X = X.iloc[:,10:]
X.head()
X_train, X_test, y_train, y_test = train_test_split(X,Y,test_size=.2,random_state=21)
#Feature Scaling
scaler = StandardScaler()
X_train=scaler.fit_transform(X_train)
X_test = scaler.fit_transform(X_test)
#Instantiate Classifier
clf = RandomForestClassifier(n_estimators=20,criterion='entropy',random_state=42)
#Train Classifier
clf.fit(X_train,y_train)
y_pred = clf.predict(X_test)
print('Model accuracy: {0:0.4f}'.format(accuracy_score(y_test,y_pred)))
print(classification_report(y_test,y_pred))
sns.heatmap(confusion_matrix(y_test,y_pred),annot=True);
plt.xlabel("Predicted")
plt.ylabel("Actual");
```
## Predict whether a customer-offer combination will be successful
Here we are going to predict whether a user will complete an offer, based on variables describing both the user and the offer.
I will be using Logistic Regression, SVM, LDA and AdaBoost to predict whether an offer sent to a customer will be successful.
To evaluate these models I will use accuracy, the F1 score and the AUC. Since I care equally about how the model classifies both classes, I will place more of a preference on accuracy; if I cared more about misclassified predictions I would focus on the F1 score instead. To visualize the performance of the models I will use confusion matrices and ROC curves.
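Since the same evaluation steps will be repeated for each model, a small helper along these lines could be used (a sketch; it assumes a fitted classifier with the scikit-learn `predict` API and the metrics imported above):
```
def evaluate_classifier(clf, X_test, y_test, name='model'):
    '''Report accuracy, F1 score and AUC for a fitted binary classifier (sketch helper).'''
    preds = clf.predict(X_test)
    fpr, tpr, _ = roc_curve(y_test, preds)
    print('%s accuracy: %.3f' % (name, accuracy_score(y_test, preds)))
    print('%s F1 score: %.3f' % (name, f1_score(y_test, preds)))
    print('%s AUC: %.3f' % (name, auc(fpr, tpr)))
```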
## Logistic Regression
```
sns.countplot(overall_df['successful offer']);
```
As you can see there is a class imbalance, which will affect the accuracy of the classifier. We will balance the classes by random over-sampling: we will randomly sample data points from the successful offers with replacement until we match the number of data points we have for unsuccessful offers.
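The over-sampling in the next cell uses pandas `.sample` with replacement; an equivalent step could also be written with `sklearn.utils.resample`, which is already imported above (a sketch):
```
#Alternative over-sampling using sklearn.utils.resample
minority = overall_df[overall_df['successful offer'] == 1]
majority = overall_df[overall_df['successful offer'] == 0]
minority_upsampled = resample(minority, replace=True, n_samples=len(majority), random_state=42)
balanced_df = pd.concat([minority_upsampled, majority])
```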
```
df_class1 = overall_df.loc[overall_df['successful offer']==1]
count_class0,count_class1 = overall_df['successful offer'].value_counts()
df_class1_over = df_class1.sample(count_class0,replace=True)
df_class_0 = overall_df.loc[overall_df['successful offer'] == 0]
over_df = pd.concat([df_class1_over,df_class_0],axis=0)
# over_df now has balanced classifying classes
over_df.drop('gender',axis=1,inplace=True)
```
### Data Preparation
Now that the classes are balanced, we need to impute the missing values. There are missing values in the age and income columns. From our previous analysis there is a slight right skew in their distributions, so I will impute the missing values with the median of their respective columns.
During the initial cleaning I already encoded the categorical variables such as gender.
A key assumption of Logistic Regression is that there is little or no multicollinearity between the independent variables.
Regarding outliers, from my research Logistic Regression is reported to be fairly robust to them because of the shape of the logistic loss function.
To scale the data we will use MinMaxScaler.
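A quick look at the skew of both columns supports the choice of the median (a small sketch):
```
#Check the skew of age and income before imputing
over_df[['age', 'income']].skew()
```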
```
over_df.isnull().sum()
#Impute missing values with median value for the column. I have chosen the median because both age and income have
#a right skew in their distributions.
for col in ['age','income']:
over_df[col] = over_df[col].fillna(over_df[col].median())
sns.countplot(over_df['successful offer']);
```
We have now rebalanced the classes via over-sampling and can proceed to implement a classifier. But first I need to handle the offer id, as it is still categorical.
```
X = over_df.iloc[:,3:]
y = over_df.iloc[:,2]
X = pd.concat([X, over_df['offer_id']],axis=1)
def encode_offer_id(X):
'''
    Function to encode offer id into dummy columns.
Input - X dataframe with offer_id column present
Output - X dataframe with encoded columns for offer id
'''
dummies = pd.get_dummies(X['offer_id'])
new = pd.concat([X.drop('offer_id',axis=1), dummies],axis=1)
return new
X = encode_offer_id(X)
X.columns
plt.figure(figsize=(10,10))
sns.heatmap(X.corr(),square=True, cmap='cubehelix');
```
As Logistic Regression assumes little or no multicollinearity, I am going to drop the email feature, as it appears to be strongly correlated with every other feature.
```
X = X.drop(['email'],axis=1)
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=.2, random_state = 42)
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.fit_transform(X_test)
model = LogisticRegression(solver='liblinear',random_state=42)
model.fit(X_train,y_train)
log_pred = model.predict(X_test)
log_accuracy = accuracy_score(y_test,log_pred)
print("Logistic Regression Accuracy: %.2f" % accuracy_score(y_test, log_pred))
sns.heatmap(confusion_matrix(y_test,log_pred),annot=True,fmt='d')
plt.title('Logisitic Regression Confusion Matrix')
plt.ylabel("Actual")
plt.xlabel("Predicted");
log_f1_score = f1_score(y_test,log_pred)
print('Logisitic Regression F1 Score: %.3f' % log_f1_score)
```
The logistic regression classifier gave an accuracy of 76% and an F1 score of 0.773. As this is a binary classification, I will place more weight on the F1 score, which is the harmonic mean of precision and recall.
So far I have used the default parameters; I will now look at tuning them with GridSearchCV.
```
parameters = {'penalty': ['l1','l2'], 'C': [1,10,100,1000]}
grid_log = GridSearchCV(LogisticRegression(), parameters, verbose=3, n_jobs=-1,cv=3)
grid_log.fit(X_train,y_train)
grid_log.best_params_
log2_pred = grid_log.predict(X_test)
log2_accuracy = accuracy_score(y_test,log2_pred)
log2_f1 = f1_score(y_test,log2_pred)
print('Tuned Logistic Regression accuracy: %.3f' % log2_accuracy)
print('Tuned Logistic Regression F1 score: %.3f' % log2_f1)
```
Our original Logistic Regression model achieved exactly the same F1 score and a slightly higher accuracy, so GridSearch did not improve the model.
Typically Logistic Regression requires large sample sizes for accurate results.
## Support Vector Machines
For SVM we need to further process the data. It is important that the data is scaled to avoid difficulties in the kernel calculation.
SVMs work well for non-linear classification problems.
Since a full GridSearch would take too long with SVM, I will vary the kernel and the regularization parameter C to try to optimize the classifier.
```
X_train, X_test, y_train, y_test = train_test_split(X,y,random_state=42, test_size=.2)
#Feature scaling
scaler = StandardScaler()
X_train = preprocessing.scale(X_train)
X_test = preprocessing.scale(X_test)
```
To begin with, I will use the linear kernel and the default value of C, which is 1.
```
svc = SVC(kernel='linear')
#Train model
svc.fit(X_train,y_train)
#Predict values from test dataset
svc_y_pred = svc.predict(X_test)
#Evaluate accuracy and f1 score
svc_accuracy = accuracy_score(y_test,svc_y_pred)
svc_f1 = f1_score(y_test,svc_y_pred)
print('SVC Model Accuracy: %.3f' % svc_accuracy)
print('SVC F1 Score: %.3f' % svc_f1)
sns.heatmap(confusion_matrix(y_test,svc_y_pred),annot=True,fmt='d');
plt.ylabel("Actual")
plt.xlabel("Predicted");
print(classification_report(y_test,svc_y_pred))
```
As we have used a linear kernel, we can view the coefficients the model has assigned to each variable.
```
#Create a plot of the coefficients for a given feature.
feature_names = list(X.columns)
coefs = list(svc.coef_[0])
plt.figure(figsize=(15,8))
plt.barh(feature_names,coefs)
plt.title('Feature effects on Offer Success')
plt.xlabel('Coefficients')
plt.ylabel('Feature');
```
Now I will change the kernel function to the Radial Basis Function (RBF).
```
svc_model = SVC(C=1,gamma=1,kernel='rbf')
svc_model.fit(X_train,y_train)
y_pred_svc_2 = svc_model.predict(X_test)
svc2_accuracy = accuracy_score(y_test,y_pred_svc_2)
svc2_f1 = f1_score(y_test,y_pred_svc_2)
print('Accuracy for SVM with RBF Kernel: %.3f' % svc2_accuracy)
print('F1 score for SVM with RBF Kernel: %.3f' % svc2_f1)
print(classification_report(y_test,y_pred_svc_2))
svc_fpr, svc_tpr, svc_thresholds = roc_curve(y_test,y_pred_svc_2)
sns.heatmap(confusion_matrix(y_test,y_pred_svc_2),annot=True,fmt='d')
plt.ylabel("Actual")
plt.xlabel("Predicted")
plt.title("Confusion Matrix for SVM RBF");
#Area under curve
roc_auc = auc(svc_fpr,svc_tpr)
roc_auc
#Plot the auc
plt.figure(figsize=(5,5))
plt.title('Receiver Operating Characteristic')
plt.plot(svc_fpr,svc_tpr, color='red',label = 'AUC = %0.2f' % roc_auc)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],linestyle='--')
plt.axis('tight')
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate');
```
An ideal AUC score is 1, so a score of 0.78 is not too bad; it shows that the SVC classifier is reasonably good at distinguishing successful from unsuccessful offers. After changing a few of the parameters, the model achieved an accuracy of 77.8% with an F1 score of 0.79.
The Radial Basis Function kernel has performed better than the linear kernel. I will now change the C parameter to 100.
```
svc3 = SVC(C=100,gamma=1,kernel='rbf',cache_size=600)
svc3.fit(X_train,y_train)
svc3_y = svc3.predict(X_test)
svc3_accuracy = accuracy_score(y_test,svc3_y)
svc3_f1_score = f1_score(y_test,svc3_y)
print('SVC RBF Model with C = 100 Accuracy: %.3f' % svc3_accuracy)
print('SVC RBF Model with C = 100 F1 Score: %.3f' % svc3_f1_score)
sns.heatmap(confusion_matrix(y_test,svc3_y),annot=True,fmt='d');
plt.title('SVM RBF Kernel');
```
## Linear Discriminant Analysis
We will now try Linear Discriminant Analysis to improve on our model accuracy.
LDA is more sensitive to outliers than the previous models. I have already handled the placeholder ages; I now need to look at income and the number of days a user has been a rewards member. I will use the Tukey rule to remove any outliers.
LDA assumes normally distributed features, so as a preprocessing step I will normalize the data points.
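As a quick illustration of the Tukey rule before it is applied inside the split function below (a sketch), we can count how many rows would be flagged for income:
```
#Count income values outside the Tukey fences (Q1 - 1.5*IQR, Q3 + 1.5*IQR)
q1, q3 = np.percentile(over_df['income'], [25, 75])
iqr = q3 - q1
flagged = over_df[(over_df['income'] < q1 - 1.5 * iqr) | (over_df['income'] > q3 + 1.5 * iqr)]
print('Rows flagged as income outliers:', len(flagged))
```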
```
def split_df(over_df):
'''
Function to split X, Y from dataframe and split into test and train datasets.
Input - over_df - dataframe with classes balanced.
Output - X_train, X_test, y_train, y_test
'''
for col in ['income','became_member_on']:
#Lower quartile
Q1 = np.percentile(over_df[col],25)
#Upper quartile
Q3 = np.percentile(over_df[col],75)
#Calculate interquartile range
IQR = Q3 - Q1
#Outlier step
step = IQR * 1.5
#Remove values that are greater than the upper quartile plus 1.5 times the IQR and lower than the lower quartile
#minus 1.5 times the IQR.
over_df = over_df[(over_df[col] > (Q1 - step)) & (over_df[col] < (Q3 + step))]
X = over_df.iloc[:,3:]
y = over_df.iloc[:,2]
X = pd.concat([X, over_df['offer_id']],axis=1)
dummies = pd.get_dummies(X['offer_id'])
X = pd.concat([X.drop('offer_id',axis=1), dummies],axis=1)
X_train, X_test,y_train,y_test = train_test_split(X,y,test_size=.2,random_state=42)
return X_train, X_test, y_test, y_train
X_train, X_test, y_test, y_train = split_df(over_df)
X_train = normalize(X_train)
X_test = normalize(X_test)
lda = LinearDiscriminantAnalysis(solver='lsqr')
lda.fit(X_train,y_train)
y_pred = lda.predict(X_test)
lda_accuracy = accuracy_score(y_test,y_pred)
lda_f1 = f1_score(y_test,y_pred)
print("LDA Model Accuracy: %.3f" % lda_accuracy)
print("LDA Model F1 Accuracy: %.3f" % lda_f1)
print(classification_report(y_test,y_pred))
sns.heatmap(confusion_matrix(y_test,y_pred),annot=True,fmt='d')
plt.xlabel("Predicted")
plt.ylabel("Actual")
plt.title('LDA Confusion Matrix')
plt.figure(figsize=(15,8))
coefs = lda.coef_[0]
plt.barh(feature_names,coefs)
plt.title("LDA Feature Coefficients");
lda_fpr,lda_tpr,lda_thresholds = roc_curve(y_test,y_pred)
lda_auc = auc(lda_fpr,lda_tpr)
```
The LDA model did not perform as well as the SVC model but performed better than the logistic regression model.
## AdaBoost Classifier
AdaBoost is a boosting ensemble method, typically built on decision trees, that does not require scaled data; however, it is sensitive to outliers, so I will use the training and test datasets from which outliers have been removed.
```
parameters = {'n_estimators':[500, 1000, 1500, 2000],
'learning_rate':[0.05, 0.1, 0.15, 0.2]}
ada = AdaBoostClassifier()
clf = GridSearchCV(ada,parameters,cv=3,verbose=3,n_jobs=-1)
clf.fit(X_train,y_train)
clf.best_params_
ada_pred = clf.predict(X_test)
ada_accuracy = accuracy_score(y_test,ada_pred)
ada_f1 = f1_score(y_test, ada_pred)
print("ADA Model Accuracy: %.3f" % ada_accuracy)
print("ADA Model F1 Accuracy: %.3f" % ada_f1)
sns.heatmap(confusion_matrix(y_test,ada_pred),annot=True,fmt='d')
plt.title("ADA Confusion Matrix")
plt.xlabel('Predicted')
plt.ylabel('True');
print(classification_report(y_test,ada_pred))
ada_fpr,ada_tpr,ada_thresholds = roc_curve(y_test,ada_pred)
ada_auc = auc(ada_fpr,ada_tpr)
```
## Conclusion
```
#Plot the auc
plt.figure(figsize=(5,5))
plt.title('Receiver Operating Characteristic')
plt.plot(svc_fpr,svc_tpr, color='red',label = 'AUC SVC = %0.2f' % roc_auc)
plt.plot(lda_fpr,lda_tpr,color='green',label = 'AUC LDA = %0.2f' % lda_auc)
plt.plot(ada_fpr,ada_tpr,color='blue',label='AUC ADA = %0.2f' % ada_auc)
plt.legend(loc = 'lower right',frameon=False)
plt.plot([0, 1], [0, 1],linestyle='--')
plt.axis('tight')
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate');
```
The SVM model with 'rbf' kernel produced the model with the highest accuracy, F1 score and AUC score.
```
accuracy = np.array([log_accuracy,log2_accuracy,svc_accuracy,svc2_accuracy,svc3_accuracy,lda_accuracy,ada_accuracy]).reshape(-1,1)
f1_scores = np.array([log_f1_score,log2_f1,svc_f1,svc2_f1,svc3_f1_score,lda_f1,ada_f1]).reshape(-1,1)
metrics = pd.DataFrame(np.concatenate((accuracy,f1_scores),axis=1),columns=['Accuracy','F1 Score'])
model_names = np.array(['Logistic Regression 1','Logistic Regression 2','SVC Linear','SVC RBF 1','SVC RBF 2','LDA','ADA']).reshape(-1,1)
metrics = pd.concat([metrics,pd.DataFrame(model_names)],axis=1)
metrics.columns = ['Accuracy','F1 Score','Model Names']
metrics.set_index('Model Names').sort_values(by='Accuracy',ascending=False)
plt.barh(metrics['Model Names'],metrics['Accuracy']);
plt.xlabel('Accuracy')
plt.title('Accuracy by Model')
plt.xlim([0,1])
labels = ['%.2f' % x for x in metrics['Accuracy']]
for i,v in enumerate(metrics['Accuracy']):
plt.gca().text(0.85, i - 0.1, labels[i], color='black', fontweight='bold')
```
# Build a machine learning workflow using Step Functions and SageMaker
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Build a machine learning workflow](#Build-a-machine-learning-workflow)
## Introduction
This notebook describes using the AWS Step Functions Data Science SDK to create and manage workflows. The Step Functions SDK is an open source library that allows data scientists to easily create and execute machine learning workflows using AWS Step Functions and Amazon SageMaker. For more information, see the following.
* [AWS Step Functions](https://aws.amazon.com/step-functions/)
* [AWS Step Functions Developer Guide](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html)
* [AWS Step Functions Data Science SDK](https://aws-step-functions-data-science-sdk.readthedocs.io)
In this notebook we will use the SDK to create steps, link them together to create a workflow, and execute the workflow in AWS Step Functions. The first tutorial shows how to create an ML pipeline workflow, and the second shows how to run multiple experiments in parallel.
```
import sys
!{sys.executable} -m pip install --upgrade stepfunctions
```
## Setup
### Add a policy to your SageMaker role in IAM
**If you are running this notebook on an Amazon SageMaker notebook instance**, the IAM role assumed by your notebook instance needs permission to create and run workflows in AWS Step Functions. To provide this permission to the role, do the following.
1. Open the Amazon [SageMaker console](https://console.aws.amazon.com/sagemaker/).
2. Select **Notebook instances** and choose the name of your notebook instance
3. Under **Permissions and encryption** select the role ARN to view the role on the IAM console
4. Choose **Attach policies** and search for `AWSStepFunctionsFullAccess`.
5. Select the check box next to `AWSStepFunctionsFullAccess` and choose **Attach policy**
If you are running this notebook in a local environment, the SDK will use your configured AWS CLI configuration. For more information, see [Configuring the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).
Next, create an execution role in IAM for Step Functions.
### Create an execution role for Step Functions
You need an execution role so that you can create and execute workflows in Step Functions.
1. Go to the [IAM console](https://console.aws.amazon.com/iam/)
2. Select **Roles** and then **Create role**.
3. Under **Choose the service that will use this role** select **Step Functions**
4. Choose **Next** until you can enter a **Role name**
5. Enter a name such as `StepFunctionsWorkflowExecutionRole` and then select **Create role**
Attach a policy to the role you created. The following steps attach a policy that provides full access to Step Functions, however as a good practice you should only provide access to the resources you need.
1. Under the **Permissions** tab, click **Add inline policy**
2. Enter the following in the **JSON** tab
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"sagemaker:CreateTransformJob",
"sagemaker:DescribeTransformJob",
"sagemaker:StopTransformJob",
"sagemaker:CreateTrainingJob",
"sagemaker:DescribeTrainingJob",
"sagemaker:StopTrainingJob",
"sagemaker:CreateHyperParameterTuningJob",
"sagemaker:DescribeHyperParameterTuningJob",
"sagemaker:StopHyperParameterTuningJob",
"sagemaker:CreateModel",
"sagemaker:CreateEndpointConfig",
"sagemaker:CreateEndpoint",
"sagemaker:DeleteEndpointConfig",
"sagemaker:DeleteEndpoint",
"sagemaker:UpdateEndpoint",
"sagemaker:ListTags",
"lambda:InvokeFunction",
"sqs:SendMessage",
"sns:Publish",
"ecs:RunTask",
"ecs:StopTask",
"ecs:DescribeTasks",
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:UpdateItem",
"dynamodb:DeleteItem",
"batch:SubmitJob",
"batch:DescribeJobs",
"batch:TerminateJob",
"glue:StartJobRun",
"glue:GetJobRun",
"glue:GetJobRuns",
"glue:BatchStopJobRun"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"iam:PassRole"
],
"Resource": "*",
"Condition": {
"StringEquals": {
"iam:PassedToService": "sagemaker.amazonaws.com"
}
}
},
{
"Effect": "Allow",
"Action": [
"events:PutTargets",
"events:PutRule",
"events:DescribeRule"
],
"Resource": [
"arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTrainingJobsRule",
"arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTransformJobsRule",
"arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTuningJobsRule",
"arn:aws:events:*:*:rule/StepFunctionsGetEventsForECSTaskRule",
"arn:aws:events:*:*:rule/StepFunctionsGetEventsForBatchJobsRule"
]
}
]
}
```
3. Choose **Review policy** and give the policy a name such as `StepFunctionsWorkflowExecutionPolicy`
4. Choose **Create policy**. You will be redirected to the details page for the role.
5. Copy the **Role ARN** at the top of the **Summary**
### Configure execution roles
```
import sagemaker
# SageMaker Execution Role
# You can use sagemaker.get_execution_role() if running inside sagemaker's notebook instance
sagemaker_execution_role = sagemaker.get_execution_role() #Replace with ARN if not in an AWS SageMaker notebook
# paste the StepFunctionsWorkflowExecutionRole ARN from above
workflow_execution_role = "<execution-role-arn>"
```
### Import the required modules
```
import boto3
import sagemaker
import time
import random
import uuid
import logging
import stepfunctions
import io
import random
from sagemaker.amazon.amazon_estimator import get_image_uri
from stepfunctions import steps
from stepfunctions.steps import TrainingStep, ModelStep, TransformStep
from stepfunctions.inputs import ExecutionInput
from stepfunctions.workflow import Workflow
from stepfunctions.template import TrainingPipeline
from stepfunctions.template.utils import replace_parameters_with_jsonpath
session = sagemaker.Session()
stepfunctions.set_stream_logger(level=logging.INFO)
region = boto3.Session().region_name
bucket = session.default_bucket()
prefix = 'sagemaker/DEMO-xgboost-regression'
bucket_path = 'https://s3-{}.amazonaws.com/{}'.format(region, bucket)
```
### Prepare the dataset
The following cell defines utility methods to split a dataset into train, validation, and test datasets. It then defines methods to upload them to an Amazon S3 bucket.
```
def data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST):
data = [l for l in open(FILE_DATA, 'r')]
train_file = open(FILE_TRAIN, 'w')
valid_file = open(FILE_VALIDATION, 'w')
tests_file = open(FILE_TEST, 'w')
num_of_data = len(data)
num_train = int((PERCENT_TRAIN/100.0)*num_of_data)
num_valid = int((PERCENT_VALIDATION/100.0)*num_of_data)
num_tests = int((PERCENT_TEST/100.0)*num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[],[],[]]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data)-1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return boto3.Session(region_name=region).resource('s3').Bucket(bucket).Object(key).upload_fileobj(fobj)
def upload_to_s3(bucket, channel, filename):
fobj=open(filename, 'rb')
key = prefix+'/'+channel
url = 's3://{}/{}/{}'.format(bucket, key, filename)
print('Writing to {}'.format(url))
write_to_s3(fobj, bucket, key)
```
This notebook uses the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the [UCI data repository](https://archive.ics.uci.edu/ml/datasets/abalone). More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real valued feature. Age of abalone is to be predicted from eight physical measurements.
```
try: #python3
from urllib.request import urlretrieve
except: #python2
from urllib import urlretrieve
# Load the dataset
FILE_DATA = 'abalone'
urlretrieve("https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone", FILE_DATA)
#split the downloaded data into train/test/validation files
FILE_TRAIN = 'abalone.train'
FILE_VALIDATION = 'abalone.validation'
FILE_TEST = 'abalone.test'
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST)
#upload the files to the S3 bucket
upload_to_s3(bucket, 'train', FILE_TRAIN)
upload_to_s3(bucket, 'validation', FILE_VALIDATION)
upload_to_s3(bucket, 'test', FILE_TEST)
train_s3_file = bucket_path + "/" + prefix + '/train'
validation_s3_file = bucket_path + "/" + prefix + '/validation'
test_s3_file = bucket_path + "/" + prefix + '/test'
```
### Configure the AWS Sagemaker estimator
```
xgb = sagemaker.estimator.Estimator(
get_image_uri(region, 'xgboost'),
sagemaker_execution_role,
train_instance_count = 1,
train_instance_type = 'ml.m4.4xlarge',
train_volume_size = 5,
output_path = bucket_path + "/" + prefix + "/single-xgboost",
sagemaker_session = session
)
xgb.set_hyperparameters(
objective = 'reg:linear',
num_round = 50,
max_depth = 5,
eta = 0.2,
    gamma = 4,
min_child_weight = 6,
subsample = 0.7,
silent = 0
)
```
## Build a machine learning workflow
<img src="img/e2e_pipeline.png">
You can use a workflow to create a machine learning pipeline. The AWS Step Functions Data Science SDK provides several SageMaker workflow steps that you can use to construct an ML pipeline. In this tutorial you will use the Training, Model, Transform, Endpoint Configuration, and Endpoint steps.
* [**TrainingStep**](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.TrainingStep) - Starts a SageMaker training job and outputs the model artifacts to S3.
* [**ModelStep**](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.ModelStep) - Creates a model on SageMaker using the model artifacts from S3.
* [**TransformStep**](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.TransformStep) - Starts a SageMaker transform job.
* [**EndpointConfigStep**](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.EndpointConfigStep) - Defines an endpoint configuration on SageMaker.
* [**EndpointStep**](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.EndpointStep) - Deploys the trained model to the configured endpoint.
### Define the input schema for a workflow execution
The [**ExecutionInput**](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/placeholders.html#stepfunctions.inputs.ExecutionInput) API defines the options to dynamically pass information to a workflow at runtime.
The following cell defines the fields that must be passed to your workflow when starting an execution.
While the workflow is usually static after it is defined, you may want to pass values dynamically that are used by steps in your workflow. To help with this, the SDK provides a way to create placeholders when you define your workflow. These placeholders can be dynamically assigned values when you execute your workflow.
ExecutionInput values are accessible to each step of your workflow. You can define a schema for this placeholder collection, as shown in the cell below. When you execute your workflow, the SDK verifies that the dynamic input conforms to the schema you defined.
```
# SageMaker expects unique names for each job, model and endpoint.
# If these names are not unique the execution will fail. Pass these
# dynamically for each execution using placeholders.
execution_input = ExecutionInput(schema={
'JobName': str,
'ModelName': str,
'EndpointName': str
})
```
### Create the training step
In the following cell we create the training step and pass the estimator we defined above. See [TrainingStep](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.TrainingStep) in the AWS Step Functions Data Science SDK documentation.
```
training_step = steps.TrainingStep(
'Train Step',
estimator=xgb,
data={
'train': sagemaker.s3_input(train_s3_file, content_type='libsvm'),
'validation': sagemaker.s3_input(validation_s3_file, content_type='libsvm')
},
job_name=execution_input['JobName']
)
```
### Create the model step
In the following cell we define a model step that will create a model in SageMaker using the artifacts created during the TrainingStep. See [ModelStep](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.ModelStep) in the AWS Step Functions Data Science SDK documentation.
The model creation step typically follows the training step. The Step Functions SDK provides the [get_expected_model](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.TrainingStep.get_expected_model) method in the TrainingStep class to provide a reference for the trained model artifacts. Please note that this method is only useful when the ModelStep directly follows the TrainingStep.
```
model_step = steps.ModelStep(
'Save model',
model=training_step.get_expected_model(),
model_name=execution_input['ModelName']
)
```
### Create the transform step
In the following cell we create the transform step. See [TransformStep](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.TransformStep) in the AWS Step Functions Data Science SDK documentation.
```
transform_step = steps.TransformStep(
'Transform Input Dataset',
transformer=xgb.transformer(
instance_count=1,
instance_type='ml.m5.large'
),
job_name=execution_input['JobName'],
model_name=execution_input['ModelName'],
data=test_s3_file,
content_type='text/libsvm'
)
```
### Create an endpoint configuration step
In the following cell we create an endpoint configuration step. See [EndpointConfigStep](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.EndpointConfigStep) in the AWS Step Functions Data Science SDK documentation.
```
endpoint_config_step = steps.EndpointConfigStep(
"Create Endpoint Config",
endpoint_config_name=execution_input['ModelName'],
model_name=execution_input['ModelName'],
initial_instance_count=1,
instance_type='ml.m5.large'
)
```
### Create an endpoint
In the following cell we create a step to deploy the trained model to an endpoint in AWS SageMaker. See [EndpointStep](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.EndpointStep) in the AWS Step Functions Data Science SDK documentation.
```
endpoint_step = steps.EndpointStep(
"Create Endpoint",
endpoint_name=execution_input['EndpointName'],
endpoint_config_name=execution_input['ModelName']
)
```
### Chain together steps for your workflow
Create your workflow definition by chaining the steps together. See [Chain](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.states.Chain) in the AWS Step Functions Data Science SDK documentation.
```
workflow_definition = steps.Chain([
training_step,
model_step,
transform_step,
endpoint_config_step,
endpoint_step
])
```
Create your workflow using the workflow definition above, and render the graph with [render_graph](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Workflow.render_graph).
```
workflow = Workflow(
name='MyTrainTransformDeploy_v1',
definition=workflow_definition,
role=workflow_execution_role,
execution_input=execution_input
)
workflow.render_graph()
```
Create the workflow in AWS Step Functions with [create](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Workflow.create).
```
workflow.create()
```
Run the workflow with [execute](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Workflow.execute).
```
execution = workflow.execute(
inputs={
'JobName': 'regression-{}'.format(uuid.uuid1().hex), # Each Sagemaker Job requires a unique name
'ModelName': 'regression-{}'.format(uuid.uuid1().hex), # Each Model requires a unique name,
'EndpointName': 'regression-{}'.format(uuid.uuid1().hex) # Each Endpoint requires a unique name,
}
)
```
Render workflow progress with [render_progress](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Execution.render_progress).
This generates a snapshot of the current state of your workflow as it executes. This is a static image. Run the cell again to check progress.
```
execution.render_progress()
```
Use [list_events](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Execution.list_events) to list all events in the workflow execution.
```
execution.list_events(html=True)
```
Use [list_executions](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Workflow.list_executions) to list all executions for a specific workflow.
```
workflow.list_executions(html=True)
```
Use [list_workflows](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Workflow.list_workflows) to list all workflows in your AWS account.
```
Workflow.list_workflows(html=True)
```
---
|
github_jupyter
|
import sys
!{sys.executable} -m pip install --upgrade stepfunctions
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"sagemaker:CreateTransformJob",
"sagemaker:DescribeTransformJob",
"sagemaker:StopTransformJob",
"sagemaker:CreateTrainingJob",
"sagemaker:DescribeTrainingJob",
"sagemaker:StopTrainingJob",
"sagemaker:CreateHyperParameterTuningJob",
"sagemaker:DescribeHyperParameterTuningJob",
"sagemaker:StopHyperParameterTuningJob",
"sagemaker:CreateModel",
"sagemaker:CreateEndpointConfig",
"sagemaker:CreateEndpoint",
"sagemaker:DeleteEndpointConfig",
"sagemaker:DeleteEndpoint",
"sagemaker:UpdateEndpoint",
"sagemaker:ListTags",
"lambda:InvokeFunction",
"sqs:SendMessage",
"sns:Publish",
"ecs:RunTask",
"ecs:StopTask",
"ecs:DescribeTasks",
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:UpdateItem",
"dynamodb:DeleteItem",
"batch:SubmitJob",
"batch:DescribeJobs",
"batch:TerminateJob",
"glue:StartJobRun",
"glue:GetJobRun",
"glue:GetJobRuns",
"glue:BatchStopJobRun"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"iam:PassRole"
],
"Resource": "*",
"Condition": {
"StringEquals": {
"iam:PassedToService": "sagemaker.amazonaws.com"
}
}
},
{
"Effect": "Allow",
"Action": [
"events:PutTargets",
"events:PutRule",
"events:DescribeRule"
],
"Resource": [
"arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTrainingJobsRule",
"arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTransformJobsRule",
"arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTuningJobsRule",
"arn:aws:events:*:*:rule/StepFunctionsGetEventsForECSTaskRule",
"arn:aws:events:*:*:rule/StepFunctionsGetEventsForBatchJobsRule"
]
}
]
}
import sagemaker
# SageMaker Execution Role
# You can use sagemaker.get_execution_role() if running inside sagemaker's notebook instance
sagemaker_execution_role = sagemaker.get_execution_role() #Replace with ARN if not in an AWS SageMaker notebook
# paste the StepFunctionsWorkflowExecutionRole ARN from above
workflow_execution_role = "<execution-role-arn>"
import boto3
import sagemaker
import time
import random
import uuid
import logging
import stepfunctions
import io
import random
from sagemaker.amazon.amazon_estimator import get_image_uri
from stepfunctions import steps
from stepfunctions.steps import TrainingStep, ModelStep, TransformStep
from stepfunctions.inputs import ExecutionInput
from stepfunctions.workflow import Workflow
from stepfunctions.template import TrainingPipeline
from stepfunctions.template.utils import replace_parameters_with_jsonpath
session = sagemaker.Session()
stepfunctions.set_stream_logger(level=logging.INFO)
region = boto3.Session().region_name
bucket = session.default_bucket()
prefix = 'sagemaker/DEMO-xgboost-regression'
bucket_path = 'https://s3-{}.amazonaws.com/{}'.format(region, bucket)
def data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST):
data = [l for l in open(FILE_DATA, 'r')]
train_file = open(FILE_TRAIN, 'w')
valid_file = open(FILE_VALIDATION, 'w')
tests_file = open(FILE_TEST, 'w')
num_of_data = len(data)
num_train = int((PERCENT_TRAIN/100.0)*num_of_data)
num_valid = int((PERCENT_VALIDATION/100.0)*num_of_data)
num_tests = int((PERCENT_TEST/100.0)*num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[],[],[]]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data)-1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return boto3.Session(region_name=region).resource('s3').Bucket(bucket).Object(key).upload_fileobj(fobj)
def upload_to_s3(bucket, channel, filename):
fobj=open(filename, 'rb')
key = prefix+'/'+channel
url = 's3://{}/{}/{}'.format(bucket, key, filename)
print('Writing to {}'.format(url))
write_to_s3(fobj, bucket, key)
try: #python3
from urllib.request import urlretrieve
except: #python2
from urllib import urlretrieve
# Load the dataset
FILE_DATA = 'abalone'
urlretrieve("https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone", FILE_DATA)
#split the downloaded data into train/test/validation files
FILE_TRAIN = 'abalone.train'
FILE_VALIDATION = 'abalone.validation'
FILE_TEST = 'abalone.test'
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST)
#upload the files to the S3 bucket
upload_to_s3(bucket, 'train', FILE_TRAIN)
upload_to_s3(bucket, 'validation', FILE_VALIDATION)
upload_to_s3(bucket, 'test', FILE_TEST)
train_s3_file = bucket_path + "/" + prefix + '/train'
validation_s3_file = bucket_path + "/" + prefix + '/validation'
test_s3_file = bucket_path + "/" + prefix + '/test'
xgb = sagemaker.estimator.Estimator(
get_image_uri(region, 'xgboost'),
sagemaker_execution_role,
train_instance_count = 1,
train_instance_type = 'ml.m4.4xlarge',
train_volume_size = 5,
output_path = bucket_path + "/" + prefix + "/single-xgboost",
sagemaker_session = session
)
xgb.set_hyperparameters(
objective = 'reg:linear',
num_round = 50,
max_depth = 5,
eta = 0.2,
    gamma = 4,
min_child_weight = 6,
subsample = 0.7,
silent = 0
)
# SageMaker expects unique names for each job, model and endpoint.
# If these names are not unique the execution will fail. Pass these
# dynamically for each execution using placeholders.
execution_input = ExecutionInput(schema={
'JobName': str,
'ModelName': str,
'EndpointName': str
})
training_step = steps.TrainingStep(
'Train Step',
estimator=xgb,
data={
'train': sagemaker.s3_input(train_s3_file, content_type='libsvm'),
'validation': sagemaker.s3_input(validation_s3_file, content_type='libsvm')
},
job_name=execution_input['JobName']
)
model_step = steps.ModelStep(
'Save model',
model=training_step.get_expected_model(),
model_name=execution_input['ModelName']
)
transform_step = steps.TransformStep(
'Transform Input Dataset',
transformer=xgb.transformer(
instance_count=1,
instance_type='ml.m5.large'
),
job_name=execution_input['JobName'],
model_name=execution_input['ModelName'],
data=test_s3_file,
content_type='text/libsvm'
)
endpoint_config_step = steps.EndpointConfigStep(
"Create Endpoint Config",
endpoint_config_name=execution_input['ModelName'],
model_name=execution_input['ModelName'],
initial_instance_count=1,
instance_type='ml.m5.large'
)
endpoint_step = steps.EndpointStep(
"Create Endpoint",
endpoint_name=execution_input['EndpointName'],
endpoint_config_name=execution_input['ModelName']
)
workflow_definition = steps.Chain([
training_step,
model_step,
transform_step,
endpoint_config_step,
endpoint_step
])
workflow = Workflow(
name='MyTrainTransformDeploy_v1',
definition=workflow_definition,
role=workflow_execution_role,
execution_input=execution_input
)
workflow.render_graph()
workflow.create()
execution = workflow.execute(
inputs={
'JobName': 'regression-{}'.format(uuid.uuid1().hex), # Each Sagemaker Job requires a unique name
'ModelName': 'regression-{}'.format(uuid.uuid1().hex), # Each Model requires a unique name,
'EndpointName': 'regression-{}'.format(uuid.uuid1().hex) # Each Endpoint requires a unique name,
}
)
execution.render_progress()
execution.list_events(html=True)
workflow.list_executions(html=True)
Workflow.list_workflows(html=True)
| 0.425605 | 0.970854 |
# Tutorial: Linear Regression
Agenda:
1. Spyder interface
2. Linear regression running example: boston data
3. Vectorize cost function
4. Closed form solution
5. Gradient descent
```
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.datasets import load_boston
boston_data = load_boston()
print(boston_data['DESCR'])
# take the boston data
data = boston_data['data']
# we will only work with two of the features: INDUS and RM
x_input = data[:, [2,5]]
y_target = boston_data['target']
# Individual plots for the two features:
plt.title('Industrialness vs Med House Price')
plt.scatter(x_input[:, 0], y_target)
plt.xlabel('Industrialness')
plt.ylabel('Med House Price')
plt.show()
plt.title('Avg Num Rooms vs Med House Price')
plt.scatter(x_input[:, 1], y_target)
plt.xlabel('Avg Num Rooms')
plt.ylabel('Med House Price')
plt.show()
```
## Define cost function
$$\mathcal{E}(y, t) = \frac{1}{2N} \sum_{i=1}^N (y^{(i)}-t^{(i)})^2 $$
$$\mathcal{E}(y, t) = \frac{1}{2N} \sum_{i=1}^N (w_1 x_1^{(i)} + w_2 x_2^{(i)} + b -t^{(i)})^2 $$
```
def cost(w1, w2, b, X, t):
'''
Evaluate the cost function in a non-vectorized manner for
inputs `X` and targets `t`, at weights `w1`, `w2` and `b`.
'''
costs = 0
for i in range(len(t)):
y_i = w1 * X[i, 0] + w2 * X[i, 1] + b
t_i = t[i]
costs += 0.5 * (y_i - t_i) ** 2
return costs / len(t)
cost(3, 5, 20, x_input, y_target)
cost(3, 5, 0, x_input, y_target)
```
## Vectorizing the cost function:
$$\mathcal{E}(y, t) = \frac{1}{2N} \| \bf{X} \bf{w} + b \bf{1} - \bf{t} \| ^2$$
```
def cost_vectorized(w1, w2, b, X, t):
'''
Evaluate the cost function in a vectorized manner for
inputs `X` and targets `t`, at weights `w1`, `w2` and `b`.
'''
    N = len(t)
w = np.array([w1, w2])
y = np.dot(X, w) + b * np.ones(N)
return np.sum((y - t)**2) / (2.0 * N)
cost_vectorized(3, 5, 20, x_input, y_target)
cost(3, 5, 0, x_input, y_target)
```
## Comparing speed of the vectorized vs unvectorized code
We'll see below that the vectorized code already runs ~2x faster than the non-vectorized code!
Hopefully this will convince you to always vectorize your code whenever possible.
```
import time
t0 = time.time()
print(cost(4, 5, 20, x_input, y_target))
t1 = time.time()
print(t1 - t0)
t0 = time.time()
print(cost_vectorized(4, 5, 20, x_input, y_target))
t1 = time.time()
print(t1 - t0)
```
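As an aside, IPython's `%timeit` magic is a convenient way to run the same comparison while averaging over many runs (a quick sketch using the functions defined above):
```
%timeit cost(4, 5, 20, x_input, y_target)
%timeit cost_vectorized(4, 5, 20, x_input, y_target)
```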
## Plotting cost in weight space
We'll plot the cost for two of our weights, assuming that bias = -22.89831573.
We'll see where that number comes from later.
Notice that the contours are ovals.
```
w1s = np.arange(-1.0, 0.0, 0.01)
w2s = np.arange(6.0, 10.0, 0.1)
z_cost = []
for w2 in w2s:
z_cost.append([cost_vectorized(w1, w2, -22.89831573, x_input, y_target) for w1 in w1s])
z_cost = np.array(z_cost)
np.shape(z_cost)
W1, W2 = np.meshgrid(w1s, w2s)
CS = plt.contour(W1, W2, z_cost, 25)
plt.clabel(CS, inline=1, fontsize=10)
plt.title('Costs for various values of w1 and w2 for b=-22.9')
plt.xlabel("w1")
plt.ylabel("w2")
plt.plot([-0.33471389], [7.82205511], 'o') # this will be the minima that we'll find later
plt.show()
```
# Exact Solution
Work this out on the board:
1. Ignore biases (add an extra all-ones feature and a corresponding weight instead).
2. Get the normal equations from the partial derivatives (shown below).
3. Vectorize.
4. Write code.
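Setting the gradient of the cost to zero yields the closed-form (normal-equation) solution that the code below implements:
$$ \bf{w} = (\bf{X}^T \bf{X})^{-1} \bf{X}^T \bf{t} $$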
```
# add an extra feature (column in the input) that are just all ones
x_in = np.concatenate([x_input, np.ones([np.shape(x_input)[0], 1])], axis=1)
x_in
def solve_exactly(X, t):
'''
Solve linear regression exactly. (fully vectorized)
Given `X` - NxD matrix of inputs
`t` - target outputs
Returns the optimal weights as a D-dimensional vector
'''
N, D = np.shape(X)
A = np.matmul(X.T, X)
c = np.dot(X.T, t)
return np.matmul(np.linalg.inv(A), c)
solve_exactly(x_in, y_target)
# In real life we don't want to code it directly
np.linalg.lstsq(x_in, y_target)
```
## Implement Gradient Function
$$ \frac{\partial \mathcal{E}}{\partial w_j} = \frac{1}{N}\sum_i x_j^{(i)}(y^{(i)}-t^{(i)}) $$
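In vectorized form (with the bias folded into $\bf{X}$ as the all-ones column), this is
$$ \nabla_{\bf{w}} \mathcal{E} = \frac{1}{N} \bf{X}^T (\bf{y} - \bf{t}), $$
which is exactly what the gradient function below computes.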
```
# Vectorized gradient function
def gradfn(weights, X, t):
'''
Given `weights` - a current "Guess" of what our weights should be
`X` - matrix of shape (N,D) of input features
`t` - target y values
Return gradient of each weight evaluated at the current value
'''
N, D = np.shape(X)
y_pred = np.matmul(X, weights)
error = y_pred - t
    return np.matmul(np.transpose(X), error) / float(N)
def solve_via_gradient_descent(X, t, print_every=5000,
niter=100000, alpha=0.005):
'''
Given `X` - matrix of shape (N,D) of input features
`t` - target y values
Solves for linear regression weights.
Return weights after `niter` iterations.
'''
N, D = np.shape(X)
# initialize all the weights to zeros
w = np.zeros([D])
for k in range(niter):
dw = gradfn(w, X, t)
w = w - alpha*dw
if k % print_every == 0:
            print('Weight after %d iterations: %s' % (k, str(w)))
return w
solve_via_gradient_descent( X=x_in, t=y_target)
# For comparison, this was the exact result:
np.linalg.lstsq(x_in, y_target)
```
|
github_jupyter
|
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.datasets import load_boston
boston_data = load_boston()
print(boston_data['DESCR'])
# take the boston data
data = boston_data['data']
# we will only work with two of the features: INDUS and RM
x_input = data[:, [2,5]]
y_target = boston_data['target']
# Individual plots for the two features:
plt.title('Industrialness vs Med House Price')
plt.scatter(x_input[:, 0], y_target)
plt.xlabel('Industrialness')
plt.ylabel('Med House Price')
plt.show()
plt.title('Avg Num Rooms vs Med House Price')
plt.scatter(x_input[:, 1], y_target)
plt.xlabel('Avg Num Rooms')
plt.ylabel('Med House Price')
plt.show()
def cost(w1, w2, b, X, t):
'''
Evaluate the cost function in a non-vectorized manner for
inputs `X` and targets `t`, at weights `w1`, `w2` and `b`.
'''
costs = 0
for i in range(len(t)):
y_i = w1 * X[i, 0] + w2 * X[i, 1] + b
t_i = t[i]
costs += 0.5 * (y_i - t_i) ** 2
return costs / len(t)
cost(3, 5, 20, x_input, y_target)
cost(3, 5, 0, x_input, y_target)
def cost_vectorized(w1, w2, b, X, t):
'''
Evaluate the cost function in a vectorized manner for
inputs `X` and targets `t`, at weights `w1`, `w2` and `b`.
'''
    N = len(t)
w = np.array([w1, w2])
y = np.dot(X, w) + b * np.ones(N)
return np.sum((y - t)**2) / (2.0 * N)
cost_vectorized(3, 5, 20, x_input, y_target)
cost(3, 5, 0, x_input, y_target)
import time
t0 = time.time()
print(cost(4, 5, 20, x_input, y_target))
t1 = time.time()
print(t1 - t0)
t0 = time.time()
print(cost_vectorized(4, 5, 20, x_input, y_target))
t1 = time.time()
print(t1 - t0)
w1s = np.arange(-1.0, 0.0, 0.01)
w2s = np.arange(6.0, 10.0, 0.1)
z_cost = []
for w2 in w2s:
z_cost.append([cost_vectorized(w1, w2, -22.89831573, x_input, y_target) for w1 in w1s])
z_cost = np.array(z_cost)
np.shape(z_cost)
W1, W2 = np.meshgrid(w1s, w2s)
CS = plt.contour(W1, W2, z_cost, 25)
plt.clabel(CS, inline=1, fontsize=10)
plt.title('Costs for various values of w1 and w2 for b=-22.9')
plt.xlabel("w1")
plt.ylabel("w2")
plt.plot([-0.33471389], [7.82205511], 'o') # this will be the minima that we'll find later
plt.show()
# add an extra feature (column in the input) that are just all ones
x_in = np.concatenate([x_input, np.ones([np.shape(x_input)[0], 1])], axis=1)
x_in
def solve_exactly(X, t):
'''
Solve linear regression exactly. (fully vectorized)
Given `X` - NxD matrix of inputs
`t` - target outputs
Returns the optimal weights as a D-dimensional vector
'''
N, D = np.shape(X)
A = np.matmul(X.T, X)
c = np.dot(X.T, t)
return np.matmul(np.linalg.inv(A), c)
solve_exactly(x_in, y_target)
# In real life we don't want to code it directly
np.linalg.lstsq(x_in, y_target)
# Vectorized gradient function
def gradfn(weights, X, t):
'''
Given `weights` - a current "Guess" of what our weights should be
`X` - matrix of shape (N,D) of input features
`t` - target y values
Return gradient of each weight evaluated at the current value
'''
N, D = np.shape(X)
y_pred = np.matmul(X, weights)
error = y_pred - t
    return np.matmul(np.transpose(X), error) / float(N)
def solve_via_gradient_descent(X, t, print_every=5000,
niter=100000, alpha=0.005):
'''
Given `X` - matrix of shape (N,D) of input features
`t` - target y values
Solves for linear regression weights.
Return weights after `niter` iterations.
'''
N, D = np.shape(X)
# initialize all the weights to zeros
w = np.zeros([D])
for k in range(niter):
dw = gradfn(w, X, t)
w = w - alpha*dw
if k % print_every == 0:
            print('Weight after %d iterations: %s' % (k, str(w)))
return w
solve_via_gradient_descent( X=x_in, t=y_target)
# For comparison, this was the exact result:
np.linalg.lstsq(x_in, y_target)
| 0.751922 | 0.990892 |
```
import pandas as pd
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
import seaborn as sns
from numpy import genfromtxt
from numpy import linalg as LA
import scipy as sp
import sympy
input_data = pd.read_csv('a6n.csv', index_col=0)
G = nx.Graph(input_data.values)
```
### How many driver nodes do we have?
```
def number_of_driver_nodes(G):
N = G.number_of_nodes()
A = (nx.adjacency_matrix(G)).todense() # get adjacency matrix A of G
all_eigs = LA.eigvals(A) # get eigenvalues of A
lambda_i = list(set(np.round(all_eigs,8)))
#lambda_i = list(set(all_eigs))
driver_nodes_num = -1
lambda_m = -1
IN = np.eye(N)
n = len(lambda_i)
miu_lambda =np.zeros(n)
for i in range(0,n):
miu_lambda[i] = N - LA.matrix_rank(lambda_i[i] * IN - A, tol=1E-6)
if miu_lambda[i] > driver_nodes_num:
driver_nodes_num = miu_lambda[i]
lambda_m = lambda_i[i]
return (driver_nodes_num, lambda_m)
```
### Which node is a driver node?
```
def get_driver_nodes(G):
N = G.number_of_nodes()
A = (nx.adjacency_matrix(G)).todense() # get adjacency matrix A of G
all_eigs = LA.eigvals(A) # get eigenvalues of A
lambda_i = list(set(np.round(all_eigs,8)))
#lambda_i = list(set(all_eigs))
driver_nodes_num = -1
lambda_m = -1
IN = np.eye(N)
n = len(lambda_i)
miu_lambda =np.zeros(n)
for i in range(0,n):
miu_lambda[i] = N - LA.matrix_rank(lambda_i[i] * IN - A, tol=1E-8)
if miu_lambda[i] > driver_nodes_num:
driver_nodes_num = miu_lambda[i]
lambda_m = lambda_i[i]
middle_matrix = lambda_m * np.eye(N) - A # get the middle matrix A - \lambda * I_N
middle_matrix = np.round(middle_matrix, 8)
reduced_matrix,pivot_array1=sympy.Matrix(middle_matrix).rref()
reduced_matrix_array = np.array(reduced_matrix).astype(np.float64)
reduced_matrix_array_transpose=np.matrix.transpose(reduced_matrix_array)
_, pivot_array2 = sympy.Matrix(reduced_matrix_array_transpose).T.rref()
    all_nodes = list(G.nodes())  # list() so we can index node labels by position below
pivot_nodes = [all_nodes[i] for i in pivot_array2]
t=0
N_d = N-len(pivot_array2)
driver_nodes = np.zeros(N_d)
for i in all_nodes:
if i in pivot_array2:
pass
else:
driver_nodes[t] = i
t = t+1
return (driver_nodes_num, driver_nodes)
```
## Example
```
mat = np.array(
[[-1.0, 1.0, 1.0,1.0,1.0,1.0],
[1.0, 0.0, 0.0,0.0,0.0, 0.0],
[1.0, 0.0, 0.0,0.0,0.0 ,0.0],
[1.0, 0.0, 0.0,-1.0,0.0,0.0],
[1.0, 0.0, 0.0,0.0,-1.0,1.0],
[1.0, 0.0, 0.0,0.0,1.0,-1.0]
])
G2 = nx.Graph(mat)
input_data = pd.read_csv('Book2.csv', index_col=0)
G3 = nx.Graph(input_data.values)
number_of_driver_nodes(G3)
get_driver_nodes(G3)
```
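The small 6-node graph `G2` built above from the explicit adjacency matrix gives a quick sanity check of both functions (a sketch; run it to see the count and driver nodes for that matrix):
```
# Exercise both helpers on the small example graph G2 defined above
n_d, lambda_m = number_of_driver_nodes(G2)
print("number of driver nodes:", n_d, "attained at eigenvalue", lambda_m)

n_d, drivers = get_driver_nodes(G2)
print("driver nodes:", drivers)
```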
|
github_jupyter
|
import pandas as pd
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
import seaborn as sns
from numpy import genfromtxt
from numpy import linalg as LA
import scipy as sp
import sympy
input_data = pd.read_csv('a6n.csv', index_col=0)
G = nx.Graph(input_data.values)
def number_of_driver_nodes(G):
N = G.number_of_nodes()
A = (nx.adjacency_matrix(G)).todense() # get adjacency matrix A of G
all_eigs = LA.eigvals(A) # get eigenvalues of A
lambda_i = list(set(np.round(all_eigs,8)))
#lambda_i = list(set(all_eigs))
driver_nodes_num = -1
lambda_m = -1
IN = np.eye(N)
n = len(lambda_i)
miu_lambda =np.zeros(n)
for i in range(0,n):
miu_lambda[i] = N - LA.matrix_rank(lambda_i[i] * IN - A, tol=1E-6)
if miu_lambda[i] > driver_nodes_num:
driver_nodes_num = miu_lambda[i]
lambda_m = lambda_i[i]
return (driver_nodes_num, lambda_m)
def get_driver_nodes(G):
N = G.number_of_nodes()
A = (nx.adjacency_matrix(G)).todense() # get adjacency matrix A of G
all_eigs = LA.eigvals(A) # get eigenvalues of A
lambda_i = list(set(np.round(all_eigs,8)))
#lambda_i = list(set(all_eigs))
driver_nodes_num = -1
lambda_m = -1
IN = np.eye(N)
n = len(lambda_i)
miu_lambda =np.zeros(n)
for i in range(0,n):
miu_lambda[i] = N - LA.matrix_rank(lambda_i[i] * IN - A, tol=1E-8)
if miu_lambda[i] > driver_nodes_num:
driver_nodes_num = miu_lambda[i]
lambda_m = lambda_i[i]
middle_matrix = lambda_m * np.eye(N) - A # get the middle matrix A - \lambda * I_N
middle_matrix = np.round(middle_matrix, 8)
reduced_matrix,pivot_array1=sympy.Matrix(middle_matrix).rref()
reduced_matrix_array = np.array(reduced_matrix).astype(np.float64)
reduced_matrix_array_transpose=np.matrix.transpose(reduced_matrix_array)
_, pivot_array2 = sympy.Matrix(reduced_matrix_array_transpose).T.rref()
    all_nodes = list(G.nodes())  # list() so we can index node labels by position below
pivot_nodes = [all_nodes[i] for i in pivot_array2]
t=0
N_d = N-len(pivot_array2)
driver_nodes = np.zeros(N_d)
for i in all_nodes:
if i in pivot_array2:
pass
else:
driver_nodes[t] = i
t = t+1
return (driver_nodes_num, driver_nodes)
mat = np.array(
[[-1.0, 1.0, 1.0,1.0,1.0,1.0],
[1.0, 0.0, 0.0,0.0,0.0, 0.0],
[1.0, 0.0, 0.0,0.0,0.0 ,0.0],
[1.0, 0.0, 0.0,-1.0,0.0,0.0],
[1.0, 0.0, 0.0,0.0,-1.0,1.0],
[1.0, 0.0, 0.0,0.0,1.0,-1.0]
])
G2 = nx.Graph(mat)
input_data = pd.read_csv('Book2.csv', index_col=0)
G3 = nx.Graph(input_data.values)
number_of_driver_nodes(G3)
get_driver_nodes(G3)
| 0.33764 | 0.768168 |
```
import torch
import numpy as np
%load_ext autoreload
%autoreload 2
```
# Emulating the DRP Object Catalog
### Global config
```
import json
np.random.seed(0)
torch.manual_seed(0)
torch.cuda.manual_seed(0)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
if device.type=='cuda':
torch.set_default_tensor_type('torch.cuda.FloatTensor')
print("device: ", device)
args = json.load(open("args.txt"))
```
### Data I/O
```
from derp_data import DerpData
import itertools
# X base columns
truth_cols = list('ugriz') + ['y_truth', 'ra_truth', 'dec_truth', 'redshift', 'star',]
truth_cols += ['size_bulge_true', 'size_minor_bulge_true', 'ellipticity_1_bulge_true', 'ellipticity_2_bulge_true', 'bulge_to_total_ratio_i']
truth_cols += ['size_disk_true', 'size_minor_disk_true', 'ellipticity_1_disk_true', 'ellipticity_2_disk_true',]
opsim_cols = ['m5_flux', 'PSF_sigma2', 'filtSkyBrightness_flux', 'airmass', 'n_obs']
# Y base columns
drp_cols = ['x', 'y_obs', 'ra_obs', 'dec_obs', 'Ixx', 'Ixy', 'Iyy', 'IxxPSF', 'IxyPSF', 'IyyPSF',] #'extendedness',]
drp_cols_prefix = ['cModelFlux_', 'psFlux_']
drp_cols_suffix = ['_base_CircularApertureFlux_70_0_instFlux','_ext_photometryKron_KronFlux_instFlux',]
drp_cols += [t[0] + t[1] for t in list(itertools.product(drp_cols_prefix, list('ugrizy')))]
drp_cols += [t[1] + t[0] for t in list(itertools.product(drp_cols_suffix, list('ugrizy')))]
# Define dataset
data = DerpData(data_path='raw_data/obj_master.csv', X_base_cols=truth_cols + opsim_cols, Y_base_cols=drp_cols,
verbose=args['verbose'], ignore_null_rows=True, save_to_disk=False)
X_cols = data.X_cols
Y_cols = data.Y_cols
X_cat_mapping = data.X_cat_mapping
n_trainval = data.n_trainval
n_train = data.n_train
n_val = n_trainval - n_train
X_dim = data.X_dim
Y_dim = data.Y_dim
from torch.utils.data.sampler import SubsetRandomSampler
from torch.utils.data import DataLoader
# Split train vs. val
train_sampler = SubsetRandomSampler(data.train_indices)
val_sampler = SubsetRandomSampler(data.val_indices)
# Define dataloader
kwargs = {'num_workers': 1, 'pin_memory': True} if device.type=='cuda' else {}
train_loader = DataLoader(data, batch_size=args['batch_size'], sampler=train_sampler, **kwargs)
val_loader = DataLoader(data, batch_size=args['batch_size'], sampler=val_sampler, **kwargs)
for batch_idx, (X_batch, Y_batch) in enumerate(val_loader):
print(X_batch.shape)
print(Y_batch.shape)
break
```
### Model
The simplest model with diagonal covariance matrix is this:
```
from models import ConcreteDense
length_scale = args['l']
wr = length_scale**2.0/data.n_train
dr = 2.0/data.n_train
model = ConcreteDense(data.X_dim, data.Y_dim, args['n_features'], wr, dr)
print("Model's state_dict:")
for param_tensor in model.state_dict():
print(param_tensor, "\t", model.state_dict()[param_tensor].size())
```
### Training
```
from optim import fit_model
X_val = data.X[data.val_indices, :]
Y_val = data.Y[data.val_indices, :]
model = fit_model(model, args['n_epochs'], train_loader, val_loader, n_val=data.n_val,
device=device, logging_interval=args['logging_interval'],
X_val=X_val, Y_val=Y_val, n_MC=args['n_MC'], run_id=args['run_id'])
import matplotlib.pyplot as plt
%matplotlib inline
# `rmse` here and `pppp` below are assumed to hold the validation metrics collected while fit_model runs
plt.plot(np.arange(0, args['n_epochs'], args['logging_interval']),
         rmse)
plt.title("RMSE")
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(np.arange(0, args['n_epochs'], args['logging_interval']),
pppp)
plt.title("Per-point predictive probability")
```
### Export post-training metadata
```
data.export_metadata_for_eval(device_type=device.type)
```
|
github_jupyter
|
import torch
import numpy as np
%load_ext autoreload
%autoreload 2
import json
np.random.seed(0)
torch.manual_seed(0)
torch.cuda.manual_seed(0)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
if device.type=='cuda':
torch.set_default_tensor_type('torch.cuda.FloatTensor')
print("device: ", device)
args = json.load(open("args.txt"))
from derp_data import DerpData
import itertools
# X base columns
truth_cols = list('ugriz') + ['y_truth', 'ra_truth', 'dec_truth', 'redshift', 'star',]
truth_cols += ['size_bulge_true', 'size_minor_bulge_true', 'ellipticity_1_bulge_true', 'ellipticity_2_bulge_true', 'bulge_to_total_ratio_i']
truth_cols += ['size_disk_true', 'size_minor_disk_true', 'ellipticity_1_disk_true', 'ellipticity_2_disk_true',]
opsim_cols = ['m5_flux', 'PSF_sigma2', 'filtSkyBrightness_flux', 'airmass', 'n_obs']
# Y base columns
drp_cols = ['x', 'y_obs', 'ra_obs', 'dec_obs', 'Ixx', 'Ixy', 'Iyy', 'IxxPSF', 'IxyPSF', 'IyyPSF',] #'extendedness',]
drp_cols_prefix = ['cModelFlux_', 'psFlux_']
drp_cols_suffix = ['_base_CircularApertureFlux_70_0_instFlux','_ext_photometryKron_KronFlux_instFlux',]
drp_cols += [t[0] + t[1] for t in list(itertools.product(drp_cols_prefix, list('ugrizy')))]
drp_cols += [t[1] + t[0] for t in list(itertools.product(drp_cols_suffix, list('ugrizy')))]
# Define dataset
data = DerpData(data_path='raw_data/obj_master.csv', X_base_cols=truth_cols + opsim_cols, Y_base_cols=drp_cols,
verbose=args['verbose'], ignore_null_rows=True, save_to_disk=False)
X_cols = data.X_cols
Y_cols = data.Y_cols
X_cat_mapping = data.X_cat_mapping
n_trainval = data.n_trainval
n_train = data.n_train
n_val = n_trainval - n_train
X_dim = data.X_dim
Y_dim = data.Y_dim
from torch.utils.data.sampler import SubsetRandomSampler
from torch.utils.data import DataLoader
# Split train vs. val
train_sampler = SubsetRandomSampler(data.train_indices)
val_sampler = SubsetRandomSampler(data.val_indices)
# Define dataloader
kwargs = {'num_workers': 1, 'pin_memory': True} if device.type=='cuda' else {}
train_loader = DataLoader(data, batch_size=args['batch_size'], sampler=train_sampler, **kwargs)
val_loader = DataLoader(data, batch_size=args['batch_size'], sampler=val_sampler, **kwargs)
for batch_idx, (X_batch, Y_batch) in enumerate(val_loader):
print(X_batch.shape)
print(Y_batch.shape)
break
from models import ConcreteDense
length_scale = args['l']
wr = length_scale**2.0/data.n_train
dr = 2.0/data.n_train
model = ConcreteDense(data.X_dim, data.Y_dim, args['n_features'], wr, dr)
print("Model's state_dict:")
for param_tensor in model.state_dict():
print(param_tensor, "\t", model.state_dict()[param_tensor].size())
from optim import fit_model
X_val = data.X[data.val_indices, :]
Y_val = data.Y[data.val_indices, :]
model = fit_model(model, args['n_epochs'], train_loader, val_loader, n_val=data.n_val,
device=device, logging_interval=args['logging_interval'],
X_val=X_val, Y_val=Y_val, n_MC=args['n_MC'], run_id=args['run_id'])
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(np.arange(0, args['n_epochs'], args['logging_interval']),
rmse)
plt.title("RMSE")
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(np.arange(0, args['n_epochs'], args['logging_interval']),
pppp)
plt.title("Per-point predictive probability")
data.export_metadata_for_eval(device_type=device.type)
| 0.440951 | 0.785679 |

# Creating Interactive Tables
In many lab reports it may be useful for your students to have a way to record data in Jupyter. This template will help you create your own lab assignments by simplifying the creation of interactive tables and graphs. We begin by importing our helper functions below.
```
from lab_template_helpers import easy_table, graph_table
```
We now have access to two functions: `easy_table` and `graph_table`. The first function allows us to create or load custom tables, and the second allows us to graph those results interactively.
## Creating A Blank Table For Data Entry.
We will now use the `easy_table` function with its `create_table` method. The only information you need to provide this function is the filename that you want to save your table as. In this case, we'll call it `demonstration.csv`. Upon executing the cell below, you will need to provide some information to create your custom table(s). You'll be asked to provide column names and whether you want custom row names (and what those names are). If you do not want custom row names, you will need to provide the number of rows you need. Please run the cell below and follow the prompts to create your blank table.
```
easy_table.create_table("demonstration.csv")
```
Once we've created a table, we need to load it using the `load_table` method with the file we created using `create_table`. We also need to give our new table a name, in this case called `my_table`.
```
my_table = easy_table.load_table('demonstration.csv')
my_table
```
This created table will only accept numerical entries. To modify the table, click on a cell and enter a new number. Note, however, that with custom tables like this it is not possible to type in your own text-based results; these custom tables are for numbers only.
Note you can sort and filter values in the table by clicking either the column name or the icon beside the column name respectively.
## Loading A Preexisting File (Online or Local)
If you don't want to create a blank table, you can load your own CSV files, either from the internet or from your own Callysto directory. This is also done using the `load_table` function; however, now we specify either a pre-filled local CSV file or one hosted online. The example below shows how to load an online table. Note how we've also specified "`external = True`" to ensure that our loaded table will be properly indexed.
### Note
If this was a local file saved to your personal Callysto hub, we would type the file's path instead of a URL here.
```
online_table = easy_table.load_table("https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv",
external=True)
online_table
```
In contrast to the blank table, in the 'species' column you are free to enter your own text.
## Non-Comma Separated CSV files.
If you want to use a CSV file with an alternative delimiter, you must specify the optional `sep` (for separator) argument. This allows us to properly read CSV files with delimiters such as
1. tabs $\rightarrow$ "`\t`"
2. Semicolons $\rightarrow$ "`;`"
3. Pipes $\rightarrow$ "`|`"
We demonstrate this below, starting with an example of what happens if you do not specify the alternative delimiter.
```
tab_separated = easy_table.load_table("https://raw.githubusercontent.com/goodby/csv/master/src/Goodby/CSV/Import/Tests/Standard/Join/csv_files/tab-separated.csv",
external = True)
tab_separated
```
Note how all our values get parsed line-by-line, rather than into individual cells. If we specify the delimiter we get the following
```
tab_separated = easy_table.load_table("https://raw.githubusercontent.com/goodby/csv/master/src/Goodby/CSV/Import/Tests/Standard/Join/csv_files/tab-separated.csv",
external = True,
sep = '\t')
tab_separated
```
Which is the desired result.
## Saving an Updated Table
If your students have created their own data table/modified an existing one, it is possible to save their changes for later. To do this, use the `save_table` method of our `easy_table` function. In this case, feel free to change the entries of `my_table` to save your modifications.
To use `save_table` we need to tell it which table we'd like to save, as well as provide a filename to save it as. We demonstrate this below.
```
easy_table.save_table(my_table, "updated_demonstration.csv")
```
We can now load our updated table and see the changes that we've made to our table.
```
easy_table.load_table("updated_demonstration.csv")
```
## Special Note
If you're creating a lab report for your students to use, they may never need to see the `create_table` function (unless of course you want them to). If you create the table yourself, you can distribute the created `CSV` file with the lab report and have your students load the file. Alternatively, it may be easier to have students create their own tables individually.
# Interactive Graph Using Created/Loaded Tables
To graph the data in a modified or downloaded table, we can use the `graph_from_table` method from the `graph_table` function. The `graph_from_table` method takes one argument which is the name of the table you created. For demonstration purposes, we will be using the `online_table` with `graph_from_table`.
In the cell below we've provided some more examples of open data sets. In this case we've re-hosted all of these data sets on the Cybera cloud only to make downloading within Alberta slightly faster. All of these data sets are available freely online. Feel free to uncomment any table of interest in order to test out the functionality of both the table and the graph. We do note however that for larger files, it may take a few moments to download and plot.
```
## Data set of nutritional information of 80 cereal brands
# online_table = easy_table.load_table("https://swift-yeg.cloud.cybera.ca:8080/v1/AUTH_233e84cd313945c992b4b585f7b9125d/callysto-open-data/cereal.csv",
# external=True)
## Data set of ground hog day shadow observations and temperatures
# online_table = easy_table.load_table("https://swift-yeg.cloud.cybera.ca:8080/v1/AUTH_233e84cd313945c992b4b585f7b9125d/callysto-open-data/archive.csv",
# external = True)
## Women's shoe data set, NOTE: This is a large data set (>100 mb) and will be a little slow. However, it is
## an excellent data set to play with the filter capabilities of the table in order to make plotting a little
## faster.
# online_table = easy_table.load_table("https://swift-yeg.cloud.cybera.ca:8080/v1/AUTH_233e84cd313945c992b4b585f7b9125d/callysto-open-data/7210_1.csv",
# external = True)
## Data set of world commodity prices. NOTE: This is another large file (~90 mb) and plotting may be slow if
## you don't filter the table first.
# online_table = easy_table.load_table("https://swift-yeg.cloud.cybera.ca:8080/v1/AUTH_233e84cd313945c992b4b585f7b9125d/callysto-open-data/WFPVAM_FoodPrices_05-12-2017.csv",
# external = True)
online_table
graph_table.graph_from_table(online_table)
```
In the widget above, you have three drop down menus with the following functions
1. X axis data: This drop down selects which column of your data table to use as your $x$ points on the graph.
2. Y axis data: This drop down selects which column of your data table to use as your $y$ points on the graph.
3. Plot Type: This drop down allows you to choose from a scatter plot, a line plot, or a bar graph to plot your data
There are also three text entry boxes with the following functions:
1. Plot title: Enter the title of your plot here
2. Y axis label: Enter the title of the $y$ axis here
3. X axis label: Enter the title of the $x$ axis here.
### Note
If you filter or sort your interactive table and then re-run the plot cell, the sorting and filtering will be reflected in the graphed data.
### Other Plot Functionality
If you hover your mouse over the plot you will be able to zoom in on a highlighted area. There are several other functions along the top menu of the plot, such as saving the image.

|
github_jupyter
|
from lab_template_helpers import easy_table, graph_table
easy_table.create_table("demonstration.csv")
my_table = easy_table.load_table('demonstration.csv')
my_table
online_table = easy_table.load_table("https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv",
external=True)
online_table
tab_separated = easy_table.load_table("https://raw.githubusercontent.com/goodby/csv/master/src/Goodby/CSV/Import/Tests/Standard/Join/csv_files/tab-separated.csv",
external = True)
tab_separated
tab_separated = easy_table.load_table("https://raw.githubusercontent.com/goodby/csv/master/src/Goodby/CSV/Import/Tests/Standard/Join/csv_files/tab-separated.csv",
external = True,
sep = '\t')
tab_separated
easy_table.save_table(my_table, "updated_demonstration.csv")
easy_table.load_table("updated_demonstration.csv")
## Data set of nutritional information of 80 cereal brands
# online_table = easy_table.load_table("https://swift-yeg.cloud.cybera.ca:8080/v1/AUTH_233e84cd313945c992b4b585f7b9125d/callysto-open-data/cereal.csv",
# external=True)
## Data set of ground hog day shadow observations and temperatures
# online_table = easy_table.load_table("https://swift-yeg.cloud.cybera.ca:8080/v1/AUTH_233e84cd313945c992b4b585f7b9125d/callysto-open-data/archive.csv",
# external = True)
## Women's shoe data set, NOTE: This is a large data set (>100 mb) and will be a little slow. However, it is
## an excellent data set to play with the filter capabilities of the table in order to make plotting a little
## faster.
# online_table = easy_table.load_table("https://swift-yeg.cloud.cybera.ca:8080/v1/AUTH_233e84cd313945c992b4b585f7b9125d/callysto-open-data/7210_1.csv",
# external = True)
## Data set of world commodity prices. NOTE: This is another large file (~90 mb) and plotting may be slow if
## you don't filter the table first.
# online_table = easy_table.load_table("https://swift-yeg.cloud.cybera.ca:8080/v1/AUTH_233e84cd313945c992b4b585f7b9125d/callysto-open-data/WFPVAM_FoodPrices_05-12-2017.csv",
# external = True)
online_table
graph_table.graph_from_table(online_table)
| 0.582729 | 0.977671 |
# Mark's Problem: Unsupervised Learning
Mark regularly gets handed files full of fashion images, labelled by category. He wants to know how he can use this to help keep up with the latest trends for the magazine.
For now, he's interested in producing a visualization of the various categories so that he can learn more about them. He's hoping these explorations will eventually help him speed up the process of sorting through what he gets sent to review every week.
But first, he has to put this data in a usable format.
```
from src.data import RawDataset, Dataset
from src.utils import list_dir
from src.paths import raw_data_path
```
When you are developing in a module, it's really handy to have these lines:
```
%load_ext autoreload
%autoreload 2
```
We want to see debug-level logging in the notebook. Here's the incantation
```
import logging
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger()
logger.setLevel(logging.INFO)
```
# More Datasets! Practice Makes Perfect.
Actually, practice just makes permanent. **Perfect practice** makes perfect, but we digress.
## Adding and processing the Fashion-MNIST (FMNIST) Dataset
Recall that our approach to building a usable dataset is:
1. Assemble the raw data files. Generate (and record) hashes to ensure the validity of these files.
2. Add LICENSE and DESCR (description) metadata to make the raw data usable for other people, and
3. Write a function to process the raw data into a usable format (for us, a `Dataset` object)
4. Write transformation functions on `Dataset` objects that fit our data munging into an automated reproducible workflow.
In practice, that means:
* Create a `RawDataset`
* `add_url()`: give instructions for how to `fetch` your data and add a `DESCR` and `LICENSE`
* `add_process()`: add a function that knows how to process your specific dataset
* `workflow.add_raw_dataset()`: add the `RawDataset` to your `workflow`
* Transform your `Dataset`
* (Optionally add a `transformer` function to the `workflow`)
* `workflow.add_transformer()`: further transform your data.
* Run `make data`
Looking at the FMNIST GitHub documentation, we see that the raw data is distributed as a set of 4 files.
| Name | Content | Examples | Size | Link | MD5 Checksum|
| --- | --- |--- | --- |--- |--- |
| `train-images-idx3-ubyte.gz` | training set images | 60,000|26 MBytes | [Download](http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz)|`8d4fb7e6c68d591d4c3dfef9ec88bf0d`|
| `train-labels-idx1-ubyte.gz` | training set labels |60,000|29 KBytes | [Download](http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz)|`25c81989df183df01b3e8a0aad5dffbe`|
| `t10k-images-idx3-ubyte.gz` | test set images | 10,000|4.3 MBytes | [Download](http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz)|`bef4ecab320f06d8554ea6380940ec79`|
| `t10k-labels-idx1-ubyte.gz` | test set labels | 10,000| 5.1 KBytes | [Download](http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz)|`bb300cfdad3c16e7a12a480ee83cd310`|
Let's give our dataset a name.
```
dataset_name="f-mnist"
```
### Download and Check Hashes
Because Zalando are excellent data citizens, they have conveniently given us MD5 hashes that we can verify when we download this data.
```
# Set the log level to DEBUG so we can see what's going on
logger.setLevel(logging.DEBUG)
# Specify the raw files and their hashes
data_site = 'http://fashion-mnist.s3-website.eu-central-1.amazonaws.com'
file_list = [
('train-images-idx3-ubyte.gz','8d4fb7e6c68d591d4c3dfef9ec88bf0d'),
('train-labels-idx1-ubyte.gz','25c81989df183df01b3e8a0aad5dffbe'),
('t10k-images-idx3-ubyte.gz', 'bef4ecab320f06d8554ea6380940ec79'),
('t10k-labels-idx1-ubyte.gz', 'bb300cfdad3c16e7a12a480ee83cd310'),
]
fmnist = RawDataset(dataset_name)
for file, hashval in file_list:
url = f"{data_site}/{file}"
fmnist.add_url(url=url, hash_type='md5', hash_value=hashval)
# Download and check the hashes
fmnist.fetch()
list_dir(raw_data_path)
```
### Don't forget the License and Description
```
# Easy case. Zalando are good data citizens, so their data License is directly available from
# their Raw Data Repo on github
# Notice we tag this data with the name `LICENSE`
fmnist.add_url(url='https://raw.githubusercontent.com/zalandoresearch/fashion-mnist/master/LICENSE',
name='LICENSE', file_name=f'{dataset_name}.license')
# What does the raw data look like?
# Where did I get it from?
# What format is it in?
# What should it look like when it's processed?
fmnist_readme = '''
Fashion-MNIST
=============
Notes
-----
Data Set Characteristics:
:Number of Instances: 70000
    :Number of Attributes: 784
:Attribute Information: 28x28 8-bit greyscale image
:Missing Attribute Values: None
:Creator: Zalando
:Date: 2017
This is a copy of Zalando's Fashion-MNIST [F-MNIST] dataset:
https://github.com/zalandoresearch/fashion-mnist
Fashion-MNIST is a dataset of Zalando's article images—consisting of a
training set of 60,000 examples and a test set of 10,000
examples. Each example is a 28x28 grayscale image, associated with a
label from 10 classes. Fashion-MNIST is intended to serve as a direct
drop-in replacement for the original [MNIST] dataset for benchmarking
machine learning algorithms. It shares the same image size and
structure of training and testing splits.
References
----------
- [F-MNIST] Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms.
Han Xiao, Kashif Rasul, Roland Vollgraf. arXiv:1708.07747
- [MNIST] The MNIST Database of handwritten digits. Yann LeCun, Corinna Cortes,
Christopher J.C. Burges. http://yann.lecun.com/exdb/mnist/
'''
fmnist.add_metadata(kind="DESCR", contents=fmnist_readme)
fmnist.fetch()
```
Recall, most unpacking can be handled automagically. Just run it.
```
fmnist.unpack()
```
## Converting a `RawDataset` into a usable `Dataset`
Recall that we need to write a processing function and add it to our `RawDataset`.
### Processing the raw data
Finally, we need to convert the raw data into usable `data` and `target` vectors.
The code at https://github.com/zalandoresearch/fashion-mnist/blob/master/utils/mnist_reader.py tells us how to do that. Having a look at the sample code, we notice that we need numpy. How do we add this to the environment?
* Add it to `environment.yml`
* `make requirements`
Once we have done this, we can do the following processing and setup:
```
import numpy as np
unpack_path = fmnist.unpack()
kind = "train"
label_path = unpack_path / f"{kind}-labels-idx1-ubyte"
with open(label_path, 'rb') as fd:
target = np.frombuffer(fd.read(), dtype=np.uint8, offset=8)
dataset_path = unpack_path / f"{kind}-images-idx3-ubyte"
with open(dataset_path, 'rb') as fd:
data = np.frombuffer(fd.read(), dtype=np.uint8, offset=16).reshape(len(target), 784)
print(f'Data: {data.shape}, Target: {target.shape}')
```
### Building a `Dataset`
Time to build a processing function. Recall that a processing function produces a dictionary of kwargs that can be used as a `Dataset` constructor:
```
from src.data import Dataset
help(Dataset.__init__)
```
Rewriting the sample code into the framework gives us this:
### EXERCISE: Add this into the right place
```
#%%file -a ../src/data/localdata.py
#__all__ += ['process_mnist']
def process_mnist(dataset_name='mnist', kind='train', metadata=None):
'''
Load the MNIST dataset (or a compatible variant; e.g. F-MNIST)
dataset_name: {'mnist', 'f-mnist'}
Which variant to load
kind: {'train', 'test'}
Dataset comes pre-split into training and test data.
Indicates which dataset to load
metadata: dict
Additional metadata fields will be added to this dict.
'kind': value of `kind` used to generate a subset of the data
'''
if metadata is None:
metadata = {}
if kind == 'test':
kind = 't10k'
label_path = interim_data_path / dataset_name / f"{kind}-labels-idx1-ubyte"
with open(label_path, 'rb') as fd:
target = np.frombuffer(fd.read(), dtype=np.uint8, offset=8)
dataset_path = interim_data_path / dataset_name / f"{kind}-images-idx3-ubyte"
with open(dataset_path, 'rb') as fd:
data = np.frombuffer(fd.read(), dtype=np.uint8,
offset=16).reshape(len(target), 784)
metadata['subset'] = kind
dset_opts = {
'dataset_name': dataset_name,
'data': data,
'target': target,
'metadata': metadata,
}
return dset_opts
```
Now add this processing function to the built-in workflow to automate `Dataset` creation.
```
from functools import partial
from src.data.localdata import process_mnist
fmnist.unpack(force=True)
fmnist.load_function = partial(process_mnist, dataset_name='f-mnist')
ds = fmnist.process(force=True)
ds.data.shape, ds.target.shape
```
## Add this Dataset to the master dataset list
```
from src import workflow
# Add the Raw Dataset to the master list of Raw Datasets
workflow.add_raw_dataset(fmnist)
workflow.available_raw_datasets()
# Create a pair of Datasets from this Raw Dataset, by specifying different options for the RawDataset creation
for kind in ['train', 'test']:
workflow.add_transformer(from_raw=fmnist.name, raw_dataset_opts={'kind':kind},
output_dataset=f"{fmnist.name}_{kind}")
workflow.get_transformer_list()
```
Apply the transforms and save the resulting Datasets. This is the same as doing a `make data`.
```
logger.setLevel(logging.INFO)
workflow.make_data()
!cd .. && make data
```
Now we can load these datasets by name:
```
ds = Dataset.load("f-mnist_test")
print(f"Data:{ds.data.shape}, Target:{ds.target.shape}")
ds = Dataset.load("f-mnist_train")
print(f"Data:{ds.data.shape}, Target:{ds.target.shape}")
```
### Don't forget: check in your changes using `git`
* Check the generated `raw_datasets.json` and `transformer_list.json` into source code control (see the sketch below)
* do a `make data`
* add tests if you haven't yet
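A minimal sketch of the first two checklist items as notebook shell commands (the file locations are placeholders; adjust them to wherever your project writes these catalog files):
```
# Paths are assumptions; check where raw_datasets.json and transformer_list.json are generated
!git add raw_datasets.json transformer_list.json
!git commit -m "Check in f-mnist RawDataset and train/test transformer definitions"
!cd .. && make data
```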
## Summary
Mark is well on his way to doing data science on his fashion data. In this example, he:
* Created a `RawDataset` consisting of 4 raw data files
* Checked the hashes of these files against known (published) values
* Added license and description metadata
* Added a processing function to parse the contents of these raw data files into a usable format, and
* Created "test" and "train" variants of a `Dataset` object from this `RawDataset`
```
workflow.available_datasets()
```

# Column Type Transforms
Copyright (c) Microsoft Corporation. All rights reserved.<br>
Licensed under the MIT License.
When consuming a data set, it is highly useful to know as much as possible about the data. Column types can help you understand more about each column, and enable type-specific transformations later. This provides much more insight than treating all data as strings.
In this notebook, you will learn about:
- [Built-in column types](#types)
- How to:
- [Convert to long (integer)](#long)
- [Convert to double (floating point or decimal number)](#double)
- [Convert to boolean](#boolean)
- [Convert to datetime](#datetime)
- [How to use `ColumnTypesBuilder` to get suggested column types and convert them](#builder)
- [How to convert column type for multiple columns if types are known](#multiple-columns)
## Set up
```
import azureml.dataprep as dprep
dflow = dprep.read_csv('../data/crime-winter.csv')
dflow = dflow.keep_columns(['Case Number', 'Date', 'IUCR', 'Arrest', 'Longitude', 'Latitude'])
```
<a id="types"></a>
## Built-in column types
Currently, Data Prep supports the following column types: string, long (integer), double (floating point or decimal number), boolean, and datetime.
In the previous step, a data set was read in as a Dataflow, with only a few interesting columns kept. We will use this Dataflow to explore column types throughout the notebook.
```
dflow.head(5)
```
From the first few rows of the Dataflow, you can see that the columns contain different types of data. However, by looking at `dtypes`, you can see that `read_csv()` treats all columns as string columns.
Note that `auto_read_file()` is a data ingestion function that infers column types. Learn more about it [here](./auto-read-file.ipynb).
```
dflow.dtypes
```
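As a quick aside to the `auto_read_file()` note above, here is a minimal sketch (not part of the original notebook) of the same ingestion with type inference, reusing this notebook's CSV path; the inferred types may differ slightly from the manual conversions below:
```
# auto_read_file infers column types during ingestion
dflow_auto = dprep.auto_read_file('../data/crime-winter.csv')
dflow_auto.dtypes
```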
<a id="long"></a>
### Converting to long (integer)
Suppose the "IUCR" column should only contain integers. You can call `to_long` to convert the column type of "IUCR" to `FieldType.INTEGER`. If you look at the data profile ([learn more about data profiles](./data-profile.ipynb)), you will see numeric metrics populated for that column such as mean, variance, quantiles, etc. This is helpful for understanding the shape and distribution of numeric data.
```
dflow_conversion = dflow.to_long('IUCR')
profile = dflow_conversion.get_profile()
profile
```
<a id="double"></a>
### Converting to double (floating point or decimal number)
Suppose the "Latitude" and "Longitude" columns should only contain decimal numbers. You can call `to_double` to convert the column type of "Latitude" and "Longitude" to `FieldType.DECIMAL`. In the data profile, you will see numeric metrics populated for these columns as well. Note that after converting the column types, you can see that there are missing values in these columns. Metrics like this can be helpful for noticing issues with the data set.
```
dflow_conversion = dflow_conversion.to_number(['Latitude', 'Longitude'])
profile = dflow_conversion.get_profile()
profile
```
<a id="boolean"></a>
### Converting to boolean
Suppose the "Arrest" column should only contain boolean values. You can call `to_bool` to convert the column type of "Arrest" to `FieldType.BOOLEAN`.
The `to_bool` function allows you to specify which values should map to `True` and which values should map to `False`. To do so, you can provide those values in an array as parameters `true_values` and `false_values`. Additionally, you can specify whether all other values should become `True`, `False` or Error by using the `mismatch_as` parameter.
```
dflow_conversion.to_bool('Arrest',
true_values=[1],
false_values=[0],
mismatch_as=dprep.MismatchAsOption.ASERROR).head(5)
```
In the previous conversion, all the values in the "Arrest" column became `DataPrepError`, because 'FALSE' didn't match any of the `false_values` nor any of the `true_values`, and all the unmatched values were set to become errors. Let's try the conversion again with different `false_values`.
```
dflow_conversion = dflow_conversion.to_bool('Arrest',
true_values=['1', 'TRUE'],
false_values=['0', 'FALSE'],
mismatch_as=dprep.MismatchAsOption.ASERROR)
dflow_conversion.head(5)
```
This time, all the string values 'FALSE' have been successfully converted to the boolean value `False`. Take another look at the data profile.
```
profile = dflow_conversion.get_profile()
profile
```
<a id="datetime"></a>
Suppose the "Date" column should only contain datetime values. You can convert its column type to `FieldType.DateTime` using the `to_datetime` function. Typically, datetime formats can be confusing or inconsistent. Next, we will show you all the tools that can help correctly converting the column to `DateTime`.
In the first example, directly call `to_datetime` with only the column name. Data Prep will inspect the data in this column and learn what format should be used for the conversion.
Note that if there is data in the column that cannot be converted to datetime, an Error value will be created in that cell.
```
dflow_conversion_date = dflow_conversion.to_datetime('Date')
dflow_conversion_date.head(5)
```
In this case, we can see that '1/10/2016 11:00' was converted using the format `%m/%d/%Y %H:%M`.
The data in this column is actually somewhat ambiguous. Should the dates be 'October 1' or 'January 10'? The function `to_datetime` determines that both are possible, but defaults to month-first (US format).
If the data was supposed to be day-first, you can customize the conversion.
```
dflow_alternate_conversion = dflow_conversion.to_datetime('Date', date_time_formats=['%d/%m/%Y %H:%M'])
dflow_alternate_conversion.head(5)
```
<a id="builder"></a>
## Using `ColumnTypesBuilder`
Data Prep can help you automatically detect the likely column types.
You can call `dflow.builders.set_column_types()` to get a `ColumnTypesBuilder`. Then, calling `learn()` on it will trigger Data Prep to inspect the data in each column. As a result, you can see the suggested column types for each column (conversion candidates).
```
builder = dflow.builders.set_column_types()
builder.learn()
builder
```
In this case, Data Prep suggested the correct column types for "Arrest", "Case Number", "Latitude", and "Longitude".
However, for "Date", it has suggested two possible date formats: month-first, or day-first. The ambiguity must be resolved before you complete the conversion. To use the month-first format, you can call `builder.ambiguous_date_conversions_keep_month_day()`. Otherwise, call `builder.ambiguous_date_conversions_keep_day_month()`. Note that if there were multiple datetime columns with ambiguous date conversions, calling one of these functions will apply the resolution to all of them.
If you want to skip all the ambiguous date column conversions instead, you can call: `builder.ambiguous_date_conversions_drop()`
```
builder.ambiguous_date_conversions_keep_month_day()
builder.conversion_candidates
```
The conversion candidate for "IUCR" is currently `FieldType.INTEGER`. If you know that "IUCR" should be floating point (called `FieldType.DECIMAL`), you can tweak the builder to change the conversion candidate for that specific column.
```
builder.conversion_candidates['IUCR'] = dprep.FieldType.DECIMAL
builder
```
In this case we are happy with "IUCR" as `FieldType.INTEGER`. So we set it back.
```
builder.conversion_candidates['IUCR'] = dprep.FieldType.INTEGER
builder
```
Once you are happy with the conversion candidates, you can complete the conversion by calling `builder.to_dataflow()`.
```
dflow_conversion_using_builder = builder.to_dataflow()
dflow_conversion_using_builder.head(5)
```
<a id="multiple-columns"></a>
## Convert column types for multiple columns
If you already know the column types, you can simply call `dflow.set_column_types()`. This function allows you to specify multiple columns, and the desired column type for each one. Here's how you can convert all five columns at once.
Note that `set_column_types` only supports a subset of column type conversions. For example, we cannot specify the true/false values for a boolean conversion, so the result of this operation is incorrect for the "Arrest" column.
```
dflow_conversion_using_set = dflow.set_column_types({
'IUCR': dprep.FieldType.INTEGER,
'Latitude': dprep.FieldType.DECIMAL,
'Longitude': dprep.FieldType.DECIMAL,
'Arrest': dprep.FieldType.BOOLEAN,
'Date': (dprep.FieldType.DATE, ['%m/%d/%Y %H:%M']),
})
dflow_conversion_using_set.head(5)
```
# Metadata
```
Course: DS 5001
Module: 04 HW KEY
Author: R.C. Alvarado
```
# Instructions
In this week’s code exercise, you will use NLTK to help tokenize and annotate a small corpus of George Eliot's novels to create an `F3` level digital analytical edition from them.
Using this week's Lab notebook as a guide (`M04_01_Pipeline.ipynb`), which uses the `TextParser` class in the `/lib` directory of the notebook repository, import and combine the novels contained in the directory `/data/gutenberg/eliot-set`.
You should produce the following related dataframes:
* A library `LIB` with the following metadata (and data) about each book:
* The `book_id`, matching the first level of the index in the `CORPUS`.
* The raw book title will be sufficient, i.e. with title and author combined.
* The path of the source file.
* The regex used to parse chapter milestones.
* The length of the book (number of tokens).
* The number of chapters in the book.
* An aggregate of all the novels' tokens, `CORPUS`, with an appropriate `OHCO` index and the following features:
* The token string.
* The term string.
* The part-of-speech tag inferred by NLTK.
* A vocabulary `VOCAB` of terms extracted from `CORPUS`, with the following annotation features derived from either NLTK or by using operations presented in the notebook:
* Stopwords.
* Porter stems.
* Maximum POS; i.e. the most frequently associated POS tag for the term using `.idxmax()`. Note that ties are handled by the method.
* POS ambiguity, expressed as the number of POS tags associated with a term's tokens.
Once you have these, use the dataframes to answer the questions below.
**Hints**:
* You will need to edit the `ohco_pats` config to match the downloaded texts.
* You may also need to edit the code that reads files from disk and parses their names.
* In defining the milestone regexes, be sure to include all chapter-level sections.
# Questions
## Q1
What regular expression did you use to chunk _Middlemarch_ into chapters?
**Answer**: `^(?:PRELUDE|CHAPTER|FINALE)` or something similar.
## Q2
What is the title of the book with the most tokens?
**Answer**: _Middlemarch_.
## Q3
How many chapter level chunks are there in this novel?
**Answer**: 88
## Q4
Among the three stemming algorithms -- Porter, Snowball, and Lancaster -- which is the most aggressive, in terms of the number of words associated with each stem?
**Answer**: Lancaster (about 1.8 terms per stem)
## Q5
Using the most aggressive stemmer from the previous question, what is the stem with the most associated terms?
**Answer**: 'cont'
# Code
## Setup
```
data_home = "../labs-repo/data"
local_lib = "../labs-repo/lib"
source_files = f'{data_home}/gutenberg/eliot-set'
data_prefix = 'eliot'
OHCO = ['book_id', 'chap_num', 'para_num', 'sent_num', 'token_num']
import pandas as pd
import numpy as np
from glob import glob
import re
import nltk
import sys
sys.path.append(local_lib)
from textparser import TextParser
```
## Inspect
Since Project Gutenberg texts vary widely in their markup, we define our chunking patterns by hand.
```
roman = '[IVXLCM]+'
caps = "[A-Z';, -]+"
clip_pats = [
r"\*\*\*\s*START OF",
r"\*\*\*\s*END OF"
]
# All patterns use level 'chap' and mode 'm'
ohco_pat_list = [
(6688, rf"^Chapter\s+{roman}\.\s*$"),
(507, rf"^(?:Chapter\s+{roman}|Epilogue)\s*$"),
(145, rf"^(?:PRELUDE|BOOK|CHAPTER|FINALE)")
]
```
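When tuning these milestone patterns, it can help to count matches directly against a raw file before running the full parser. The sketch below is not part of the original key; the file picked and the candidate regex are illustrative only:
```
import re
from glob import glob
# Pick one downloaded book and count candidate chapter-milestone matches
sample_path = sorted(glob(f"{source_files}/*.*"))[0]
with open(sample_path, encoding='utf-8', errors='ignore') as f:
    raw = f.read()
candidate = rf"^Chapter\s+{roman}\.\s*$"
print(sample_path, len(re.findall(candidate, raw, flags=re.MULTILINE)))
```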
## Register
We get each file and add to a library `LIB`.
```
source_file_list = sorted(glob(f"{source_files}/*.*"))
source_file_list
book_data = []
for source_file_path in source_file_list:
book_id = int(source_file_path.split('-')[-1].split('.')[0].replace('pg',''))
book_title = source_file_path.split('/')[-1].split('-')[0].replace('_', ' ')
book_data.append((book_id, source_file_path, book_title))
LIB = pd.DataFrame(book_data, columns=['book_id','source_file_path','raw_title'])\
.set_index('book_id').sort_index()
LIB
```
## Tokenize
We tokenize each book and add each `TOKENS` table to a list to be concatenated into a single `CORPUS`.
```
books = []
for pat in ohco_pat_list:
book_id, chap_regex = pat
print("Tokenizing", book_id, LIB.loc[book_id].raw_title)
ohco_pats = [('chap', chap_regex, 'm')]
src_file_path = LIB.loc[book_id].source_file_path
text = TextParser(src_file_path, ohco_pats=ohco_pats, clip_pats=clip_pats, use_nltk=True)
text.verbose = False
text.strip_hyphens = True
text.strip_whitespace = True
text.import_source().parse_tokens();
text.TOKENS['book_id'] = book_id
text.TOKENS = text.TOKENS.reset_index().set_index(['book_id'] + text.OHCO)
books.append(text.TOKENS)
```
## Create Corpus
```
CORPUS = pd.concat(books).sort_index()
CORPUS.loc[145]
```
## Extract some features for `LIB`
```
LIB['book_len'] = CORPUS.groupby('book_id').term_str.count()
LIB['n_chaps'] = CORPUS.reset_index()[['book_id','chap_id']]\
.drop_duplicates()\
.groupby('book_id').chap_id.count()
LIB['chap_regex'] = LIB.index.map(pd.Series({x[0]:x[1] for x in ohco_pat_list}))
LIB.sort_values('book_len')
```
## Extract VOCAB
Extract a vocabulary from the `CORPUS` as a whole.
```
# CORPUS[CORPUS.term_str == '']
CORPUS[CORPUS.term_str == ''].token_str.value_counts()
CORPUS = CORPUS[CORPUS.term_str != '']
VOCAB = CORPUS.term_str.value_counts().to_frame('n').sort_index()
VOCAB.index.name = 'term_str'
VOCAB['n_chars'] = VOCAB.index.str.len()
VOCAB['p'] = VOCAB.n / VOCAB.n.sum()
VOCAB['i'] = -np.log2(VOCAB.p)
```
## Annotate VOCAB
```
VOCAB['max_pos'] = CORPUS[['term_str','pos']].value_counts().unstack(fill_value=0).idxmax(1)
TPM = CORPUS[['term_str','pos']].value_counts().unstack()
VOCAB['n_pos'] = TPM.count(1)
VOCAB['cat_pos'] = CORPUS[['term_str','pos']].value_counts().to_frame('n').reset_index()\
.groupby('term_str').pos.apply(lambda x: set(x))
VOCAB
```
## Add Stopwords
We use NLTK's built-in stopword list for English. Note that we can add to or subtract from this list, or just create our own list and keep it in our data model.
```
sw = pd.DataFrame(nltk.corpus.stopwords.words('english'), columns=['term_str'])
sw = sw.reset_index().set_index('term_str')
sw.columns = ['dummy']
sw.dummy = 1
VOCAB['stop'] = VOCAB.index.map(sw.dummy)
VOCAB['stop'] = VOCAB['stop'].fillna(0).astype('int')
VOCAB
```
## Add Stems
```
from nltk.stem.porter import PorterStemmer
stemmer1 = PorterStemmer()
VOCAB['stem_porter'] = VOCAB.apply(lambda x: stemmer1.stem(x.name), 1)
from nltk.stem.snowball import SnowballStemmer
stemmer2 = SnowballStemmer("english")
VOCAB['stem_snowball'] = VOCAB.apply(lambda x: stemmer2.stem(x.name), 1)
from nltk.stem.lancaster import LancasterStemmer
stemmer3 = LancasterStemmer()
VOCAB['stem_lancaster'] = VOCAB.apply(lambda x: stemmer3.stem(x.name), 1)
VOCAB.sample(10)
VOCAB[VOCAB.stem_porter != VOCAB.stem_snowball]
```
# Answers
## Q1
```
ohco_pats[0][1]
```
## Q2
```
LIB.loc[LIB.book_len.idxmax()].raw_title
```
## Q3
How many chapter level chunks are there in this novel?
```
LIB.loc[145].n_chaps
```
## Q4
Among the three stemming algorithms -- Porter, Snowball, and Lancaster -- which is the most aggressive, defined as the average number of terms associated with each stem?
```
for stem_type in ['porter', 'snowball', 'lancaster']:
x = VOCAB[f"stem_{stem_type}"].value_counts().mean()
print(stem_type, round(x,2))
```
lancaster
## Q5
Using the most aggressive stemmer from the previous question, what is the stem with the most associated terms?
```
most_aggressive_stem = VOCAB.stem_lancaster.value_counts().head(1).index.values[0]
most_aggressive_stem
VOCAB.query(f"stem_lancaster == '{most_aggressive_stem}'")
```
### Simulation
```
from IPython.display import Image
Image(filename="figure.png",width=600)
from IPython.display import display, HTML
display(HTML(data="""
<style>
div#notebook-container { width: 95%; }
div#menubar-container { width: 65%; }
div#maintoolbar-container { width: 99%; }
</style>
"""))
```

### Equation generation
```
import sympy as sp
import numpy as np
from IPython.display import display
sp.init_printing(use_latex='mathjax')
# parameters
# Angular moment of inertia
J_B = 1e-2 * np.diag([1., 1., 1.])
# Gravity
g_I = np.array((-1, 0., 0.))
# Fuel consumption
alpha_m = 0.01
# Vector from thrust point to CoM
r_T_B = np.array([-1e-2, 0., 0.])
def dir_cosine(q):
return np.matrix([
[1 - 2 * (q[2] ** 2 + q[3] ** 2), 2 * (q[1] * q[2] +
q[0] * q[3]), 2 * (q[1] * q[3] - q[0] * q[2])],
[2 * (q[1] * q[2] - q[0] * q[3]), 1 - 2 *
(q[1] ** 2 + q[3] ** 2), 2 * (q[2] * q[3] + q[0] * q[1])],
[2 * (q[1] * q[3] + q[0] * q[2]), 2 * (q[2] * q[3] -
q[0] * q[1]), 1 - 2 * (q[1] ** 2 + q[2] ** 2)]
])
def omega(w):
return np.matrix([
[0, -w[0], -w[1], -w[2]],
[w[0], 0, w[2], -w[1]],
[w[1], -w[2], 0, w[0]],
[w[2], w[1], -w[0], 0],
])
def skew(v):
return np.matrix([
[0, -v[2], v[1]],
[v[2], 0, -v[0]],
[-v[1], v[0], 0]
])
f = sp.zeros(14, 1)
x = sp.Matrix(sp.symbols(
'm rx ry rz vx vy vz q0 q1 q2 q3 wx wy wz', real=True))
u = sp.Matrix(sp.symbols('ux uy uz', real=True))
g_I = sp.Matrix(g_I)
r_T_B = sp.Matrix(r_T_B)
J_B = sp.Matrix(J_B)
C_B_I = dir_cosine(x[7:11, 0])
C_I_B = C_B_I.transpose()
f[0, 0] = - alpha_m * u.norm()
f[1:4, 0] = x[4:7, 0]
f[4:7, 0] = 1 / x[0, 0] * C_I_B * u + g_I
f[7:11, 0] = 1 / 2 * omega(x[11:14, 0]) * x[7: 11, 0]
f[11:14, 0] = J_B ** -1 * \
(skew(r_T_B) * u - skew(x[11:14, 0]) * J_B * x[11:14, 0])
display(sp.simplify(f)) # f
display(sp.simplify(f.jacobian(x)))# A
sp.simplify(f.jacobian(u)) # B
```
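To use these symbolic Jacobians in a successive-convexification loop they eventually need numeric values. The cell below is a minimal sketch (not from the original notebook) that lambdifies them and evaluates them at an illustrative point; the numbers in `x0` and `u0` are assumptions, not values from the paper, and the `f`, `x`, `u` objects come from the cell above:
```
# Turn the symbolic Jacobians into numeric functions of (state, input)
A_func = sp.lambdify(list(x) + list(u), f.jacobian(x), 'numpy')
B_func = sp.lambdify(list(x) + list(u), f.jacobian(u), 'numpy')
# Illustrative linearization point: mass 2, at rest, identity quaternion, thrust along body x-axis
x0 = [2., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.]
u0 = [3., 0., 0.]
A0 = np.array(A_func(*(x0 + u0)), dtype=float)
B0 = np.array(B_func(*(x0 + u0)), dtype=float)
A0.shape, B0.shape
```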
### Ref
- Python implementation of the paper 'Successive Convexification for 6-DoF Mars Rocket Powered Landing with Free-Final-Time' by Michael Szmuk and Behçet Açıkmeşe.
- inspired by EmbersArc/SuccessiveConvexificationFreeFinalTime: Implementation of "Successive Convexification for 6-DoF Mars Rocket Powered Landing with Free-Final-Time" https://github.com/EmbersArc/SuccessiveConvexificationFreeFinalTime
```
from pylab import *
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from mpl_toolkits.basemap import cm
from numpy import*
import numpy as np
from scipy import stats
%matplotlib inline
import warnings
warnings.simplefilter('ignore')
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster
np.set_printoptions(precision=5, suppress=True) # suppress scientific float notation
dset=np.fromfile('gpcc_mon_jawabali_1901_2010.dat',dtype=np.float32)
# Discard missing values: flag negative entries as NaN
idNaN=find(dset<0)
dummy=np.empty(idNaN.size)
dummy[:]=np.nan
dset[idNaN]=dummy
nt=1320
ny=9
nx=23
# reshape the data into a 3-D array (time, lat, lon)
#data=np.reshape(dset,(ny*nx,nt))
data=np.reshape(dset,(nt,ny,nx))
data.shape
# composite the data into a 12-month climatology
data1 = np.reshape(data,(110,12,ny,nx))
#data1 = np.reshape(data,(12,110,ny,nx))
new_data = np.empty((12,ny,nx))
for i in range (0,ny,1):
for j in range (0,nx,1):
for k in range (0,12,1):
#new_data[k,i,j] = np.nanmean(data1[k,:,i,j])
new_data[k,i,j] = np.nanmean(data1[:,k,i,j])
new_data1 = np.reshape(new_data,(12,ny*nx))
#new_data1 = np.reshape(new_data,(12*ny*nx))
#data_cluster = np.reshape(new_data1,(ny*nx,12))
data_cluster = np.transpose(new_data1)
data_cluster.shape
fig1=plt.figure()
ax = fig1.add_axes([0, 0, 1, 1])
kode = np.array([0,1,2,3,4,5,6,7,8,9,10,11])
bulan = ['Jan','Feb','Mar','Apr','Mei','Jun','Jul','Aug','Sep','Oct','Nov','Dec']
for i in range(207):
ax.plot(data_cluster[i,:])
ax.set_title('Pola Curah Hujan Pulau Jawa & Bali Tahun 1901-2010',fontsize=15)
plt.xticks(kode, bulan)
ax.set_xlabel('bulan')
ax.set_ylabel('mm / bulan')
data_cluster_fix = np.reshape(data_cluster[~isnan(data_cluster)],(85,12))
data_cluster_fix.size
# generate the linkage matrix
Z = linkage(data_cluster_fix, 'ward')
# first 20 iteration
Z[:20] #[idx1, idx2, dist, sample_count]
# calculate full dendrogram
fig3=plt.figure(figsize=(25, 10))
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('sample index')
plt.ylabel('distance')
dendrogram(
Z,
leaf_rotation=90., # rotates the x axis labels
leaf_font_size=8., # font size for the x axis labels
)
fig3
#dendrogram truncation
plt.title('Hierarchical Clustering Dendrogram (truncated)')
plt.xlabel('sample index or (cluster size)')
plt.ylabel('distance')
dendrogram(
Z,
truncate_mode='lastp', # show only the last p merged clusters
p=12 # show only the last p merged clusters
)
plt.show()
#more fancy dendrogram
def fancy_dendrogram(*args, **kwargs):
max_d = kwargs.pop('max_d', None)
if max_d and 'color_threshold' not in kwargs:
kwargs['color_threshold'] = max_d
annotate_above = kwargs.pop('annotate_above', 0)
ddata = dendrogram(*args, **kwargs)
if not kwargs.get('no_plot', False):
plt.title('Hierarchical Clustering Dendrogram (truncated)')
plt.xlabel('sample index or (cluster size)')
plt.ylabel('distance')
for i, d, c in zip(ddata['icoord'], ddata['dcoord'], ddata['color_list']):
x = 0.5 * sum(i[1:3])
y = d[1]
if y > annotate_above:
plt.plot(x, y, 'o', c=c)
plt.annotate("%.3g" % y, (x, y), xytext=(0, -5),
textcoords='offset points',
va='top', ha='center')
if max_d:
plt.axhline(y=max_d, c='k')
return ddata
# set cut-off to 50
max_d = 1250 # max_d as in max_distance
fancy_dendrogram(
Z,
truncate_mode='lastp',
p=12,
leaf_rotation=90.,
leaf_font_size=12.,
show_contracted=True,
annotate_above=10,
max_d=max_d, # plot a horizontal cut-off line
)
plt.show()
#Retrieve the Clusters
max_d = 1250
clusters=fcluster(Z, max_d, criterion='distance')
clusters
clusters.size
cluster_1 = data_cluster_fix[clusters==1,:]
cluster_1.shape
cluster_2 = data_cluster_fix[clusters==2,:]
cluster_2.shape
C = data_cluster_fix[clusters,:]
C.shape
data_clusterx=data_cluster[:,2] # index 2 corresponds to March
data_clusterx=np.reshape(data_clusterx,(ny,nx))
data_clusterx[~isnan(data_clusterx)]=clusters
data_clusterx.shape
map = Basemap(projection='cyl',llcrnrlon=104.985,llcrnrlat=-9.0126,urcrnrlon=115.978,urcrnrlat=-5.015,resolution='f') # projection, lat/lon extents and resolution of polygons to draw
# resolutions: c - crude, l - low, i - intermediate, h - high, f - full
lon,lat = map.makegrid(nx,ny)
fig=plt.figure(figsize=(15,15))
map.drawcoastlines()
map.drawstates()
map.drawcountries()
map.drawcounties() # you can even add counties (and other shapefiles!)
# draw parallels and meridians.
# label parallels on right and top
# meridians on bottom and left
parallels = np.arange(-80,80,1)
# labels = [left,right,top,bottom]
map.drawparallels(parallels,labels=[False,True,True,False])
meridians = np.arange(10.,351.,1)
map.drawmeridians(meridians,labels=[True,False,False,True])
plt.title('Plot Spasial Hasil Hierarchical Clustering Curah Hujan Bulan Maret 1901-2010 di Pulau Jawa & Bali',fontsize=15)
gpcc = map.contourf(lon,lat,data_clusterx,cmap='GnBu')
cb = map.colorbar(gpcc,"bottom", size="5%", pad="10%")
cb.set_ticks([1,2])
cb.set_ticklabels([1,2])
cb.set_label('cluster 1 cluster 2')
cluster_1[:,2] # index 2 corresponds to March
cluster_2[:,2] # index 2 corresponds to March
fig2=plt.figure()
ax = fig2.add_axes([0, 0, 1, 1])
kode = np.array([0,1,2,3,4,5,6,7,8,9,10,11])
bulan = ['Jan','Feb','Mar','Apr','Mei','Jun','Jul','Aug','Sep','Oct','Nov','Dec']
for i in range(14):
ax.plot(cluster_1[i,:])
ax.set_title('Pola Curah Hujan Pulau Jawa & Bali Tahun 1901-2010 (Cluster Pertama)',fontsize=15)
plt.xticks(kode, bulan)
ax.set_yticks([0,100,200,300,400,500,600,700,800])
ax.set_xlabel('bulan')
ax.set_ylabel('mm / bulan')
fig5=plt.figure()
ax = fig5.add_axes([0, 0, 1, 1])
kode = np.array([0,1,2,3,4,5,6,7,8,9,10,11])
bulan = ['Jan','Feb','Mar','Apr','Mei','Jun','Jul','Aug','Sep','Oct','Nov','Dec']
for i in range(71):
ax.plot(cluster_2[i,:])
ax.set_title('Pola Curah Hujan Pulau Jawa & Bali Tahun 1901-2010 (Cluster Kedua)',fontsize=15)
plt.xticks(kode, bulan)
ax.set_yticks([0,100,200,300,400,500,600,700,800])
ax.set_xlabel('bulan')
ax.set_ylabel('mm / bulan')
cluster1 = np.empty((12))
for i in range(0,12,1):
cluster1[i] = np.mean(cluster_1[:,i])
cluster2 = np.empty((12))
for i in range(0,12,1):
cluster2[i] = np.mean(cluster_2[:,i])
fig5=plt.figure()
ax = fig5.add_axes([0, 0, 1, 1])
kode = np.array([0,1,2,3,4,5,6,7,8,9,10,11])
bulan = ['Jan','Feb','Mar','Apr','Mei','Jun','Jul','Aug','Sep','Oct','Nov','Dec']
ax.plot(cluster1,'r')
ax.plot(cluster2,'b')
ax.set_title('Pola Curah Hujan Pulau Jawa & Bali Tahun 1901-2010 (Cluster Pertama & Kedua)',fontsize=15)
plt.xticks(kode, bulan)
ax.set_yticks([0,100,200,300,400,500,600,700,800])
ax.legend(["Cluster 1", "Cluster 2"],loc=0)
ax.set_xlabel('bulan')
ax.set_ylabel('mm / bulan')
```
# Visualization with Matplotlib
```
from IPython.display import Pretty as disp
hint = 'https://raw.githubusercontent.com/soltaniehha/Business-Analytics/master/docs/hints/' # path to hints on GitHub
import pandas as pd
import matplotlib.pyplot as plt
# will lead to static images of your plot embedded in the notebook
%matplotlib inline
```
### Setting Styles
We will use the ``plt.style`` directive to choose appropriate aesthetic styles for our figures.
Here we will set the ``classic`` style, which ensures that the plots we create use the classic Matplotlib style:
```
plt.style.use('classic')
```
Available styles:
```
plt.style.available
x = [1,2,3,4]
y = [10, 20, 15, 30]
plt.plot(x, y)
fig = plt.figure() # so we can use it to save the image
plt.plot(x, y, marker='o', markersize=20, color='red')
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Interesting Plot')
```
### Saving Figures to File
One nice feature of Matplotlib is the ability to save figures in a wide variety of formats.
Saving a figure can be done using the ``savefig()`` command.
For example, to save the previous figure as a PNG file, you can run this:
```
fig.savefig('my_figure.png')
```
We now have a file called ``my_figure.png`` in the current working directory:
```
!ls -lh my_figure.png
```
To confirm that it contains what we think it contains, let's use the IPython ``Image`` object to display the contents of this file:
```
from IPython.display import Image
Image('my_figure.png')
```
In ``savefig()``, the file format is inferred from the extension of the given filename.
Depending on what backends you have installed, many different file formats are available.
The list of supported file types can be found for your system by using the following method of the figure canvas object:
```
fig.canvas.get_supported_filetypes()
```
#### MATLAB-style Interface
Matplotlib was originally written as a Python alternative for MATLAB users, and much of its syntax reflects that fact.
The MATLAB-style tools are contained in the pyplot (``plt``) interface.
For example, the following code will probably look quite familiar to MATLAB users:
```
googl = pd.read_csv('https://raw.githubusercontent.com/soltaniehha/Business-Analytics/master/data/GOOGL.csv', parse_dates=True, index_col= 'Date')
googl_close = googl['Close']
googl_vol = googl['Volume']
plt.figure() # create a plot figure
# create the first of two panels and set current axis
plt.subplot(2, 1, 1) # (rows, columns, panel number)
plt.plot(googl_close)
# create the second panel and set current axis
plt.subplot(2, 1, 2)
plt.plot(googl_vol)
```
It is important to note that this interface is *stateful*: it keeps track of the "current" figure and axes, which are where all ``plt`` commands are applied.
While this stateful interface is fast and convenient for simple plots, it is easy to run into problems. For example, once the second panel is created, how can we go back and add something to the first?
This is possible within the MATLAB-style interface, but a bit clunky.
Fortunately, there is a better way.
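For completeness, here is one way to do it in the stateful interface: a small sketch (not in the original notebook) that reuses the `googl_close`/`googl_vol` series from above and re-activates the first panel with `plt.sca()`:
```
plt.figure()
ax1 = plt.subplot(2, 1, 1)   # keep a handle to the first panel
plt.plot(googl_close)
plt.subplot(2, 1, 2)
plt.plot(googl_vol)
plt.sca(ax1)                 # make the first panel "current" again
plt.title('GOOGL closing price')
```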
#### Object-oriented interface
The object-oriented interface is available for these more complicated situations, and for when you want more control over your figure.
Rather than depending on some notion of an "active" figure or axes, **in the object-oriented interface the plotting functions are *methods* of explicit ``Figure`` and ``Axes`` objects**.
To re-create the previous plot using this style of plotting, you might do the following:
```
# First create a grid of plots
# ax will be an array of two Axes objects
fig, ax = plt.subplots(2)
# Call plot() method on the appropriate object
ax[0].plot(googl_close)
ax[1].plot(googl_vol)
```
For simpler plots, the choice of which style to use is largely a matter of preference, but the object-oriented approach can become a necessity as plots become more complicated.
We will switch between the MATLAB-style and object-oriented interfaces, depending on what is most convenient.
In most cases, the difference is as small as switching ``plt.plot()`` to ``ax.plot()``, but there are a few gotchas that we will highlight below:
## Aside: Matplotlib Gotchas
While most ``plt`` functions translate directly to ``ax`` methods (such as ``plt.plot()`` → ``ax.plot()``, ``plt.legend()`` → ``ax.legend()``, etc.), this is not the case for all commands.
In particular, functions to set limits, labels, and titles are slightly modified.
For transitioning between MATLAB-style functions and object-oriented methods, make the following changes:
- ``plt.xlabel()`` → ``ax.set_xlabel()``
- ``plt.ylabel()`` → ``ax.set_ylabel()``
- ``plt.xlim()`` → ``ax.set_xlim()``
- ``plt.ylim()`` → ``ax.set_ylim()``
- ``plt.title()`` → ``ax.set_title()``
In the object-oriented interface to plotting, rather than calling these functions individually, it is often more convenient to use the ``ax.set()`` method to set all these properties at once:
```
ax = plt.axes()
ax.plot(googl_close)
ax.set(xlim=(pd.to_datetime('2017-04-05'), pd.to_datetime('2019-04-05')), ylim=(900, 1300),
xlabel='Date', ylabel='Closing Price',
title='GOOGL Daily')
plt.xticks(rotation=45);
```
# Your turn
Using the `battles` dataset from Game of Thrones plot a scatter plot that shows the relationship between attacker_size and defender_size in these 38 battles. Use the appropriate `xlim` and `ylim` to make the chart more readable.
Dataset source: [Kaggle: Game of Thrones](https://www.kaggle.com/mylesoneill/game-of-thrones)
SPOILER ALERT: if you haven't seen the series and are planning to watch it, do not dig into this dataset too much, as it contains spoilers. However, since this was extracted based on the books there shouldn't be any final-season spoilers.
```
battles = pd.read_csv('https://raw.githubusercontent.com/soltaniehha/Business-Analytics/master/data/GOT-battles.csv')
battles.head()
# Your answer goes here
# Don't run this cell to keep the outcome as your frame of reference
# HINT: Uncomment and execute the line below to get help
#disp(hint + '09-01-plot-hint')
# SOLUTION: Uncomment and execute the cell below to get help
#disp(hint + '09-01-plot')
```
Using the `deaths` dataset from Game of Thrones plot a histogram that shows the distribution of character deaths throughout 'Book Intro Chapter'.
Dataset source: [Kaggle: Game of Thrones](https://www.kaggle.com/mylesoneill/game-of-thrones)
SPOILER ALERT: if you haven't seen the series and are planning to watch it, do not dig into this dataset too much, as it contains spoilers. However, since this was extracted based on the books there shouldn't be any final-season spoilers.
```
deaths = pd.read_csv('https://raw.githubusercontent.com/soltaniehha/Business-Analytics/master/data/GOT-character-deaths.csv')
deaths.head()
# Your answer goes here
# Don't run this cell to keep the outcome as your frame of reference
# SOLUTION: Uncomment and execute the cell below to get help
#disp(hint + '09-01-hist')
```
## Matplotlib Resources
* [Online Documentation](http://matplotlib.org/)
* [Matplotlib Tutorials](https://matplotlib.org/tutorials/index.html)
* [Matplotlib Gallery](http://matplotlib.org/gallery.html)
Notebooks from [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do):
* [Simple Line Plots](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/04.01-Simple-Line-Plots.ipynb)
* [Simple Scatter Plots](https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/04.02-Simple-Scatter-Plots.ipynb)
* [Visualizing Errors](https://github.com/jakevdp/PythonDataScienceHandbook/blob/8a34a4f653bdbdc01415a94dc20d4e9b97438965/notebooks/04.03-Errorbars.ipynb)
* [Density and Contour Plots](https://github.com/jakevdp/PythonDataScienceHandbook/blob/8a34a4f653bdbdc01415a94dc20d4e9b97438965/notebooks/04.04-Density-and-Contour-Plots.ipynb)
* [Histograms, Binnings, and Density](https://github.com/jakevdp/PythonDataScienceHandbook/blob/8a34a4f653bdbdc01415a94dc20d4e9b97438965/notebooks/04.05-Histograms-and-Binnings.ipynb)
* [Customizing Plot Legends](https://github.com/jakevdp/PythonDataScienceHandbook/blob/8a34a4f653bdbdc01415a94dc20d4e9b97438965/notebooks/04.06-Customizing-Legends.ipynb)
* [Customizing Colorbars](https://github.com/jakevdp/PythonDataScienceHandbook/blob/8a34a4f653bdbdc01415a94dc20d4e9b97438965/notebooks/04.07-Customizing-Colorbars.ipynb)
* [Multiple Subplots](https://github.com/jakevdp/PythonDataScienceHandbook/blob/8a34a4f653bdbdc01415a94dc20d4e9b97438965/notebooks/04.08-Multiple-Subplots.ipynb)
* [Text and Annotation](https://github.com/jakevdp/PythonDataScienceHandbook/blob/8a34a4f653bdbdc01415a94dc20d4e9b97438965/notebooks/04.09-Text-and-Annotation.ipynb)
* [Customizing Ticks](https://github.com/jakevdp/PythonDataScienceHandbook/blob/8a34a4f653bdbdc01415a94dc20d4e9b97438965/notebooks/04.10-Customizing-Ticks.ipynb)
* [Customizing Matplotlib: Configurations and Stylesheets](https://github.com/jakevdp/PythonDataScienceHandbook/blob/8a34a4f653bdbdc01415a94dc20d4e9b97438965/notebooks/04.11-Settings-and-Stylesheets.ipynb)
* [Three-Dimensional Plotting in Matplotlib](https://github.com/jakevdp/PythonDataScienceHandbook/blob/8a34a4f653bdbdc01415a94dc20d4e9b97438965/notebooks/04.12-Three-Dimensional-Plotting.ipynb)
* [Geographic Data with Basemap](https://github.com/jakevdp/PythonDataScienceHandbook/blob/8a34a4f653bdbdc01415a94dc20d4e9b97438965/notebooks/04.13-Geographic-Data-With-Basemap.ipynb)
# San Diego Burrito Analytics: Bootcamp 2016
Scott Cole
15 Sept 2016
This notebook characterizes the data collected from consuming burritos from Don Carlos during Neuro bootcamp.
# Outline
1. Load data into python
* Use a Pandas dataframe
* View data
* Print some metadata
2. Hypothesis tests
* California burritos vs. Carnitas burritos
* Don Carlos 1 vs. Don Carlos 2
* Bonferroni correction
3. Distributions
* Distributions of each burrito quality
* Tests for normal distribution
4. Correlations
* Hunger vs. Overall rating
* Correlation matrix
5. Assumptions discussion
# 0. Import libraries into Python
```
# These commands control inline plotting
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
import numpy as np # Useful numeric package
import scipy as sp # Useful statistics package
import matplotlib.pyplot as plt # Plotting package
```
# 1. Load data into a Pandas dataframe
```
import pandas as pd # Dataframe package
filename = './burrito_bootcamp.csv'
df = pd.read_csv(filename)
```
### View raw data
```
df
```
### Brief metadata
```
print('Number of burritos:', df.shape[0])
print('Average burrito rating:', df['overall'].mean())
print('Reviewers: ')
print(np.array(df['Reviewer']))
```
### What types of burritos have been rated?
```
def burritotypes(x, types = {'California':'cali', 'Carnitas':'carnita', 'Carne asada':'carne asada',
'Soyrizo':'soyrizo', 'Shredded chicken':'chicken'}):
import re
T = len(types)
Nmatches = {}
for b in x:
matched = False
for t in types.keys():
re4str = re.compile('.*'+types[t]+'.*', re.IGNORECASE)
if np.logical_and(re4str.match(b) is not None, matched is False):
try:
Nmatches[t] +=1
except KeyError:
Nmatches[t] = 1
matched = True
if matched is False:
try:
Nmatches['other'] +=1
except KeyError:
Nmatches['other'] = 1
return Nmatches
typecounts = burritotypes(df.Burrito)
plt.figure(figsize=(6,6))
ax = plt.axes([0.1, 0.1, 0.65, 0.65])
# The slices will be ordered and plotted counter-clockwise.
labels = list(typecounts.keys())
fracs = list(typecounts.values())
explode=[.1]*len(typecounts)
patches, texts, autotexts = plt.pie(fracs, explode=explode, labels=labels,
                                    autopct=lambda p: '{:.0f}'.format(p * np.sum(fracs) / 100), shadow=False, startangle=0)
# With startangle=0 (the default) the first slice starts on the positive x-axis.
plt.title('Types of burritos',size=30)
for t in texts:
t.set_size(20)
for t in autotexts:
t.set_size(20)
autotexts[0].set_color('w')
```
# 2. Hypothesis tests
```
# California burritos vs. Carnitas burritos
# TODO -- see the sketch below for one possible implementation
# Don Carlos 1 vs. Don Carlos 2
# TODO
# Bonferroni correction
# TODO
```
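The cells above are placeholders; here is a minimal sketch of how the first comparison and the Bonferroni correction could be run with `scipy.stats`. It assumes the overall rating lives in the `overall` column (as used later in this notebook); the Don Carlos 1 vs. 2 comparison is left commented out because it would need a location column (here hypothetically called `Location`) whose exact name I have not verified.

```
from scipy import stats

# Overall ratings grouped by burrito type, reusing the regex idea from burritotypes()
cali = df[df.Burrito.str.contains('cali', case=False, na=False)]['overall'].dropna()
carnitas = df[df.Burrito.str.contains('carnita', case=False, na=False)]['overall'].dropna()

# Welch's two-sample t-test: California vs. Carnitas
t_cc, p_cc = stats.ttest_ind(cali, carnitas, equal_var=False)
print('California vs. Carnitas: t = {:.2f}, p = {:.3f}'.format(t_cc, p_cc))

# Hypothetical: Don Carlos 1 vs. Don Carlos 2 (assumes a 'Location' column)
# dc1 = df[df.Location == 'Don Carlos 1']['overall'].dropna()
# dc2 = df[df.Location == 'Don Carlos 2']['overall'].dropna()
# t_dc, p_dc = stats.ttest_ind(dc1, dc2, equal_var=False)

# Bonferroni correction: with m tests, compare each p-value to alpha / m
n_tests = 2
alpha = 0.05
print('Significant after Bonferroni correction?', p_cc < alpha / n_tests)
```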
# 3. Burrito dimension distributions
### Distribution of each burrito quality
```
import math
def metrichist(metricname):
if metricname == 'Volume':
bins = np.arange(.375,1.225,.05)
xticks = np.arange(.4,1.2,.1)
xlim = (.4,1.2)
else:
bins = np.arange(-.25,5.5,.5)
xticks = np.arange(0,5.5,.5)
xlim = (-.25,5.25)
plt.figure(figsize=(5,5))
n, _, _ = plt.hist(df[metricname].dropna(),bins,color='k')
plt.xlabel(metricname + ' rating',size=20)
plt.xticks(xticks,size=15)
plt.xlim(xlim)
plt.ylabel('Count',size=20)
plt.yticks((0,int(math.ceil(np.max(n) / 5.)) * 5),size=15)
plt.tight_layout()
m_Hist = ['Hunger','Volume','Tortilla','Temp','Meat','Fillings',
'Meat:filling','Uniformity','Salsa','Synergy','Wrap','overall']
for m in m_Hist:
metrichist(m)
```
### Test for normal distribution
```
# TODO -- see the sketch below for one possible test
```
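One way to fill in this placeholder is a Shapiro–Wilk test on each rating dimension; a short sketch (my choice of test, not necessarily what was planned):

```
from scipy import stats

# Shapiro-Wilk test of normality for each burrito dimension
for metric in m_Hist:
    vals = df[metric].dropna()
    W, p = stats.shapiro(vals)
    flag = ' <- reject normality at alpha = 0.05' if p < 0.05 else ''
    print('{}: W = {:.3f}, p = {:.3f}{}'.format(metric, W, p, flag))
```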
# Assignment 3 - Building a Custom Visualization
---
In this assignment you must choose one of the options presented below and submit a visual as well as your source code for peer grading. The details of how you solve the assignment are up to you, although your assignment must use matplotlib so that your peers can evaluate your work. The options differ in challenge level, but there are no grades associated with the challenge level you chose. However, your peers will be asked to ensure you at least met a minimum quality for a given technique in order to pass. Implement the technique fully (or exceed it!) and you should be able to earn full grades for the assignment.
Ferreira, N., Fisher, D., & Konig, A. C. (2014, April). [Sample-oriented task-driven visualizations: allowing users to make better, more confident decisions.](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/Ferreira_Fisher_Sample_Oriented_Tasks.pdf)
In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 571-580). ACM. ([video](https://www.youtube.com/watch?v=BI7GAs-va-Q))
In this [paper](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/Ferreira_Fisher_Sample_Oriented_Tasks.pdf) the authors describe the challenges users face when trying to make judgements about probabilistic data generated through samples. As an example, they look at a bar chart of four years of data (replicated below in Figure 1). Each year has a y-axis value, which is derived from a sample of a larger dataset. For instance, the first value might be the number of votes in a given district or riding for 1992, with the average being around 33,000. On top of this is plotted the 95% confidence interval for the mean (see the boxplot lectures for more information, and the yerr parameter of barcharts).
<br>
<img src="readonly/Assignment3Fig1.png" alt="Figure 1" style="width: 400px;"/>
<h4 style="text-align: center;" markdown="1"> Figure 1 from (Ferreira et al, 2014).</h4>
<br>
A challenge that users face is that, for a given y-axis value (e.g. 42,000), it is difficult to know which x-axis values are most likely to be representative, because the confidence levels overlap and their distributions are different (the lengths of the confidence interval bars are unequal). One of the solutions the authors propose for this problem (Figure 2c) is to allow users to indicate the y-axis value of interest (e.g. 42,000) and then draw a horizontal line and color bars based on this value. So bars might be colored red if they are definitely above this value (given the confidence interval), blue if they are definitely below this value, or white if they contain this value.
<br>
<img src="readonly/Assignment3Fig2c.png" alt="Figure 1" style="width: 400px;"/>
<h4 style="text-align: center;" markdown="1"> Figure 2c from (Ferreira et al. 2014). Note that the colorbar legend at the bottom as well as the arrows are not required in the assignment descriptions below.</h4>
<br>
<br>
**Easiest option:** Implement the bar coloring as described above - a color scale with only three colors, (e.g. blue, white, and red). Assume the user provides the y axis value of interest as a parameter or variable.
**Harder option:** Implement the bar coloring as described in the paper, where the color of the bar is actually based on the amount of data covered (e.g. a gradient ranging from dark blue for the distribution being certainly below this y-axis, to white if the value is certainly contained, to dark red if the value is certainly not contained as the distribution is above the axis).
**Even Harder option:** Add interactivity to the above, which allows the user to click on the y axis to set the value of interest. The bar colors should change with respect to what value the user has selected.
**Hardest option:** Allow the user to interactively set a range of y values they are interested in, and recolor based on this (e.g. a y-axis band, see the paper for more details).
---
*Note: The data given for this assignment is not the same as the data used in the article and as a result the visualizations may look a little different.*
```
# Use the following data for this assignment:
import pandas as pd
import numpy as np
np.random.seed(12345)
df = pd.DataFrame([np.random.normal(32000,200000,3650),
np.random.normal(43000,100000,3650),
np.random.normal(43500,140000,3650),
np.random.normal(48000,70000,3650)],
index=[1992,1993,1994,1995])
df
```
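A minimal sketch of the easiest option follows, under two assumptions of mine: the value of interest is fixed at `y_interest = 42000`, and the 95% interval is approximated as $1.96\,s/\sqrt{n}$.

```
import numpy as np
import matplotlib.pyplot as plt

y_interest = 42000  # assumed y-axis value of interest

means = df.mean(axis=1)
# 95% confidence half-width via the normal approximation: 1.96 * s / sqrt(n)
ci = 1.96 * df.std(axis=1) / np.sqrt(df.shape[1])

# blue = bar definitely below y_interest, red = definitely above, white = interval contains it
colors = []
for m, c in zip(means, ci):
    if m + c < y_interest:
        colors.append('blue')
    elif m - c > y_interest:
        colors.append('red')
    else:
        colors.append('white')

fig, ax = plt.subplots()
ax.bar(df.index, means, yerr=ci, color=colors, edgecolor='black', capsize=10)
ax.axhline(y=y_interest, color='gray')
ax.set_xticks(df.index)
ax.set_title('Bars colored relative to y = {}'.format(y_interest))
plt.show()
```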
```
import os
import tensorflow as tf
import numpy as np
import itertools
import matplotlib.pyplot as plt
import gc
from datetime import datetime
from sklearn.utils import shuffle
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.decomposition import PCA
# Manually parse the CSV: column 37 holds the class label ("Benign" vs. attack),
# the remaining columns are the numeric features; a/b count attack/benign rows
input_label = []
output_label = []
a,b = 0,0
ficheiro = open("..\\..\\DatasetTratado\\21-02-2018.csv", "r")
# skip the first three lines of the file
ficheiro.readline()
ficheiro.readline()
ficheiro.readline()
linha = ficheiro.readline()
while(linha != ""):
linha = linha.split(",")
out = linha.pop(37)
if(out == "Benign"):
out = 0
b += 1
else:
out = 1
a += 1
output_label.append(out)
input_label.append(linha)
linha = ficheiro.readline()
ficheiro.close()
print(str(a) + " " + str(b))
scaler = MinMaxScaler(feature_range=(0,1))
scaler.fit(input_label)
input_label = scaler.transform(input_label)
```
<h2>PCA</h2>
```
pca = PCA(n_components=18)
pca.fit(input_label)
x_pca = pca.transform(input_label)
input_label.shape
x_pca.shape
x_pca = x_pca.reshape(len(x_pca), 18, 1)
y_pca = np.array(output_label)
x_pca, y_pca = shuffle(x_pca, y_pca)
```
<h2>Cross Validation</h2>
```
confusion_matrixs = []
roc_curvs = []
for i in range(10):
mini = int(len(x_pca) * 0.10) * i
maxi = int((len(x_pca) * 0.10) * (i + 1))
inp_train = np.array([*x_pca[0: mini],*x_pca[maxi:len(x_pca)]])
inp_test = np.array(x_pca[mini: maxi])
out_train = np.array([*y_pca[0: mini],*y_pca[maxi:len(y_pca)]])
out_test = np.array(y_pca[mini:maxi])
model = keras.Sequential([
layers.Input(shape = (18,1)),
layers.Conv1D(filters = 16, kernel_size = 3, padding = "same", activation = "relu", use_bias = True),
layers.MaxPool1D(pool_size = 3),
layers.Conv1D(filters = 8, kernel_size = 3, padding = "same", activation = "relu", use_bias = True),
layers.MaxPool1D(pool_size = 3),
layers.Flatten(),
layers.Dense(units = 2, activation = "softmax")
])
model.compile(optimizer= keras.optimizers.Adam(learning_rate= 0.00025), loss="sparse_categorical_crossentropy", metrics=['accuracy'])
treino = model.fit(x = inp_train, y = out_train, validation_split= 0.1, epochs = 10, shuffle = True,verbose = 0)
res = np.array([np.argmax(resu) for resu in model.predict(inp_test)])
confusion_matrixs.append(confusion_matrix(out_test, res))
fpr, tpr, _ = roc_curve(out_test, res)
auc = roc_auc_score(out_test, res)
roc_curvs.append([fpr, tpr, auc])
print(i)
```
<h2>Roc Curves</h2>
```
cores = ["blue", "orange", "green", "red", "purple", "brown", "pink", "gray", "olive", "cyan"]
for i in range(10):
plt.plot(roc_curvs[i][0],roc_curvs[i][1],label="curva " + str(i) + ", auc=" + str(roc_curvs[i][2]), c = cores[i])
plt.legend(loc=4)
plt.show()
total_conv_matrix = [[0,0],[0,0]]
for cov in confusion_matrixs:
total_conv_matrix[0][0] += cov[0][0]
total_conv_matrix[0][1] += cov[0][1]
total_conv_matrix[1][0] += cov[1][0]
total_conv_matrix[1][1] += cov[1][1]
def plot_confusion_matrix(cm, classes, normaliza = False, title = "Confusion matrix", cmap = plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normaliza:
cm = cm.astype('float') / cm.sum(axis = 1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print("Confusion matrix, without normalization")
print(cm)
thresh = cm.max() / 2
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i,j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
labels = ["Benign", "DDos"]
plot_confusion_matrix(cm = np.array(total_conv_matrix), classes = labels, title = "DDos IDS")
```
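The summed confusion matrix can also be reduced to a few headline metrics. A small sketch, assuming the usual scikit-learn layout (rows = true labels, columns = predictions) and treating class 1 (DDoS) as the positive class:

```
# Aggregate metrics over the 10 folds from the summed confusion matrix
tn_count, fp_count = total_conv_matrix[0]
fn_count, tp_count = total_conv_matrix[1]

accuracy = (tp_count + tn_count) / (tp_count + tn_count + fp_count + fn_count)
precision = tp_count / (tp_count + fp_count)
recall = tp_count / (tp_count + fn_count)
f1 = 2 * precision * recall / (precision + recall)

print("Accuracy:  %.4f" % accuracy)
print("Precision: %.4f" % precision)
print("Recall:    %.4f" % recall)
print("F1 score:  %.4f" % f1)
```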
# Comparison with pyrb
The ``pyrb`` package uses different formulations for the risk parity problem with general linear constraints and with the addition of objective terms such as the mean return and the volatility. Nonetheless, we can fairly compare with their code for the case where only the risk parity term is included and the constraints are $\texttt{sum}(\mathbf{w}) = 1$ and $\mathbf{w} \geq 0$.
Following the example shown at https://github.com/jcrichard/pyrb/blob/master/notebooks/RiskBudgeting.ipynb, we have:
```
from pyrb import EqualRiskContribution
import pandas as pd
import numpy as np
import riskparityportfolio as rpp
covariance_matrix = pd.read_csv("https://raw.githubusercontent.com/jcrichard/pyrb/master/notebooks/data.csv",sep=";",index_col=0).pct_change().cov() * 260
covariance_matrix
cov = np.asarray(covariance_matrix)
ERC = EqualRiskContribution(cov)
%timeit ERC.solve()
optimal_weights = ERC.x
optimal_weights
risk_contributions_scaled = ERC.get_risk_contributions()
risk_contributions_scaled
b = np.ones(len(cov)) / len(cov)
%timeit rpp.vanilla.design(cov, b)
rpp.vanilla.design(cov, b)
```
For this example, ``riskparityportfolio`` appears to be around $40\times$ faster than ``pyrb``. However,
more benchmarks are needed before drawing firm conclusions, especially on more interesting scenarios involving general linear constraints and additional objective terms such as the mean return.
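Speed aside, it is worth checking that the two solvers agree on the answer. A quick sketch using the objects defined above (it assumes `rpp.vanilla.design` returns the weight vector, as its use above suggests):

```
# Compare the pyrb weights with the riskparityportfolio weights
w_rpp = rpp.vanilla.design(cov, b)
print('max |w_pyrb - w_rpp|:', np.max(np.abs(optimal_weights - w_rpp)))

# Relative risk contributions of the riskparityportfolio solution should be ~1/N each
rc = w_rpp * (cov @ w_rpp)
rc = rc / rc.sum()
print('max deviation from equal risk contribution:', np.max(np.abs(rc - b)))
```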
## Constrained Risk Parity
For constrained risk parity, see the pyrb notebook: https://github.com/jcrichard/pyrb/blob/master/notebooks/ConstrainedRiskBudgeting.ipynb
```
vol = [0.05,0.05,0.07,0.1,0.15,0.15,0.15,0.18]
cor = np.array([[100, 80, 60, -20, -10, -20, -20, -20],
[ 80, 100, 40, -20, -20, -10, -20, -20],
[ 60, 40, 100, 50, 30, 20, 20, 30],
[-20, -20, 50, 100, 60, 60, 50, 60],
[-10, -20, 30, 60, 100, 90, 70, 70],
[-20, -10, 20, 60, 90, 100, 60, 70],
[-20, -20, 20, 50, 70, 60, 100, 70],
[-20, -20, 30, 60, 70, 70, 70, 100]])/100
cov = np.outer(vol,vol)*cor
my_rpp = rpp.RiskParityPortfolio(covariance=cov)
my_rpp.design()
my_rpp.weights
my_rpp.risk_contributions
# inequality constraints matrix and vector
Dmat = np.array([[0,0,0,0,-1,-1,-1,-1],
[1,-1,0,0,1,-1,0,0]])
dvec = np.array([-0.3,-0.05])
my_rpp.design(Dmat=Dmat, dvec=dvec)
my_rpp.weights
Dmat @ my_rpp.weights
my_rpp.risk_contributions
```
# Cylinder with material jump
**This needs the fenics module**
```
import torch as tn
import torchtt as tntt
import matplotlib.pyplot as plt
import tt_iga
import numpy as np
import datetime
import matplotlib.colors
import scipy.sparse.linalg
import pandas as pd
import os
import datetime
import multiprocessing as mp
import fenics as fn
import pickle
tn.set_default_dtype(tn.float64)
```
Define functions and classes for the FEniCS solver
```
def create_file_and_mesh(theta,meshsize = 0.5, verb = False):
with open('fem_mesh/cylinder_material1_proto.geo', 'r') as file:
data = file.read()
s = "theta2 = %.18f; \ntheta3 = %.18f; \ntheta4 = %.18f;\nmeshsize=%.18f;"%(theta[1],theta[2],theta[3],meshsize)
s = s + data
with open("fem_mesh/tmp.geo", "w") as file:
file.write(s)
file.close()
if verb: print('geo file created',flush = True)
if verb:
os.system('gmsh fem_mesh/tmp.geo -nt 20 -3 -o fem_mesh/tmp.msh -format msh2 ')
else:
os.system('gmsh fem_mesh/tmp.geo -nt 20 -3 -o fem_mesh/tmp.msh -format msh2 >/dev/null 2>&1')
if verb: print('mesh file created',flush=True)
if verb:
os.system('dolfin-convert fem_mesh/tmp.msh fem_mesh/tmp.xml')
else:
os.system('dolfin-convert fem_mesh/tmp.msh fem_mesh/tmp.xml >/dev/null 2>&1')
if verb: print('mesh file converted in fenics format',flush=True)
mesh = fn.Mesh('fem_mesh/tmp.xml')
markers = fn.MeshFunction("size_t", mesh, 'fem_mesh/tmp_physical_region.xml')
boundaries = fn.MeshFunction('size_t', mesh, 'fem_mesh/tmp_facet_region.xml')
if verb: print('mesh imported')
return mesh, markers, boundaries
class Solver():
def __init__(self):
pass
def set_params(self, theta, meshsize=0.4):
'''
Set the parameters.
Parameters
----------
theta : list of floats or numpy array
The parameters. Belong to [-0.05,0.05].
meshsize : float, optional
The meshgrid size. The default is 0.4.
Returns
-------
None.
'''
self.theta = theta
self.meshsize = meshsize
def create_mesh(self, verb = False):
'''
Create the mesh and save it
Returns
-------
tme : datetime object
Duration of simulation.
'''
if verb: print('meshsize ',self.meshsize,' thetas ',self.theta,flush=True)
tme = datetime.datetime.now()
mesh, subdomains, boundaries = create_file_and_mesh(self.theta, self.meshsize, verb)
self.mesh = mesh
self.subdomains = subdomains
self.boundaries = boundaries
tme = datetime.datetime.now() - tme
if verb : print('Time needed for meshing and importing ',tme,flush=True)
return tme
def solve(self, verb = False):
'''
Solve the problem
Returns
-------
tme : datetime object
Duration of simulation.
'''
tme = datetime.datetime.now()
class permittivity(fn.UserExpression):
def __init__(self, markers, val, **kwargs):
self.markers = markers
self.val = val
super().__init__(**kwargs)
def eval_cell(self, values, x, cell):
if self.markers[cell.index] == 44:
values[0] = self.val
else:
values[0] = 1
kappa = permittivity(self.subdomains, 6.0+self.theta[0]*5.0, degree=2)
dx = fn.Measure('dx', domain=self.mesh, subdomain_data=self.subdomains)
V = fn.FunctionSpace(self.mesh, 'CG', 2)
top_boundary = fn.DirichletBC(V, fn.Constant(0.0), self.boundaries, 41)
bottom_boundary = fn.DirichletBC(V, fn.Constant(10.0), self.boundaries, 42)
# mantle_boundary = fn.DirichletBC(V, fn.Constant(1), boundaries, 43)
bcs =[top_boundary, bottom_boundary]
# Solve the Poisson equation with the source set to 0
u = fn.TrialFunction(V)
v = fn.TestFunction(V)
a = fn.dot(fn.grad(u), fn.grad(v)) * (kappa) * fn.dx
L = fn.Constant('0') * v * fn.dx
u = fn.Function(V)
if verb: print('solving...',flush=True)
# fn.solve(a == L, u, bcs, solver_parameters={str('linear_solver'): str('gmres'), 'relative_tolerance' : 1e-8})
problem = fn.LinearVariationalProblem(a, L, u, bcs)
solver = fn.LinearVariationalSolver(problem)
solver.parameters['linear_solver'] = 'gmres'
solver.parameters['preconditioner'] = 'ilu'
prm = solver.parameters['krylov_solver']
prm['absolute_tolerance'] = 1E-10
prm['relative_tolerance'] = 1E-6
prm['maximum_iterations'] = 1000
solver.solve()
if verb: print('system solved',flush=True)
#problem = fn.LinearVariationalProblem(a, L, u, bcs)
#solver = fn.LinearVariationalSolver(problem)
# fn.solve(a == L, u, bcs)
self.u = u
tme = datetime.datetime.now() - tme
return tme
def get_dof_vector(self):
'''
Returns the DoF vector of the solution.
Returns
-------
numpy array
the DoF vector.
'''
return self.u.vector()
def get_dof_size(self):
'''
Returns the size of the DoF vector.
Returns
-------
int
the size of the DoF vector.
'''
return self.u.vector()[:].size
def __call__(self, x1s, x2s, x3s):
'''
Evaluates the solution.
Parameters
----------
x1s : numpy array
first coordinates.
x2s : numpy array
second coordinates.
x3s : numpy array
third coordinates.
Returns
-------
numpy array
the solution evaluated on the given points.
'''
shape = x1s.shape
x1s = x1s.flatten()
x2s = x2s.flatten()
x3s = x3s.flatten()
ucalc = 0*x1s
for i in range(x1s.size):
try:
ucalc[i] = self.u((x1s[i],x2s[i],x3s[i]))
except:
ucalc[i] = np.nan
return ucalc.reshape(shape)
def aux_fun(dct_results_iga,i,ms,queue):
degs = dct_results_iga['degs']
ns = dct_results_iga['ns']
nls = dct_results_iga['nls']
nl = nls[1]
print()
print(' i = %d/%d, ms = %f'%(i,dct_results_iga['params'].shape[0],ms))
print()
solver = Solver()
solver.set_params(dct_results_iga['params'][i,:], ms)
solver.create_mesh(False)
solver.solve(False)
errz_tmp = []
for n in ns:
x = dct_results_iga['results'][(degs[0],n,nl)]['computations'][i]['xs']
y = dct_results_iga['results'][(degs[0],n,nl)]['computations'][i]['ys']
z = dct_results_iga['results'][(degs[0],n,nl)]['computations'][i]['zs']
fval = dct_results_iga['results'][(degs[0],n,nl)]['computations'][i]['us']
ws = dct_results_iga['results'][(degs[0],n,nl)]['computations'][i]['ws']
femval = solver(x,y,z)
err = np.sqrt(np.nansum((fval-femval)**2*ws))
print(err)
errz_tmp.append(err)
del solver.u
del solver
queue.put(errz_tmp)
deg = 2
Ns = np.array([80,80,80])-deg+1
baza1 = tt_iga.BSplineBasis(np.concatenate((np.linspace(0,0.5,Ns[0]//2),np.linspace(0.5,1,Ns[0]//2))),deg)
baza2 = tt_iga.BSplineBasis(np.linspace(0,1,Ns[1]),deg)
baza3 = tt_iga.BSplineBasis(np.concatenate((np.linspace(0,0.3,Ns[2]//3),np.linspace(0.3,0.7,Ns[2]//3),np.linspace(0.7,1,Ns[2]//3))),deg)
Basis = [baza1,baza2,baza3]
N = [baza1.N,baza2.N,baza3.N]
print(N)
nl = 12
Basis_param = [tt_iga.LagrangeLeg(nl,[-0.05,0.05])]*4
# square to circle transformation
xc = lambda u,v: u*tn.sqrt(1-v**2/2)
yc = lambda u,v: v*tn.sqrt(1-u**2/2)
# scale [0,1] to an interval [a,b]
line = lambda t,a,b: t*(b-a)+a
# aux function needed for mapping along the length of the cylinder
def scaling(z,theta1,theta2):
a = 0.3
b = 0.7
s = (z<a)*line(z/a,0,a+theta1)
s+= tn.logical_and(z>=a,z<=b)*line((z-a)/(b-a),a+theta1,b+theta2)
s+= tn.logical_and(z>b,z<=1)*line((z-b)/(1-b),b+theta2,1)
return s
# create the components of the parametrization
angle_mult = 1.0
xparam = lambda t : xc(t[:,0]*2-1,t[:,1]*2-1)
yparam = lambda t : yc(t[:,0]*2-1,t[:,1]*2-1)
zparam = lambda t : scaling(t[:,2],t[:,6],t[:,5]+xparam(t)*angle_mult*t[:,4]+yparam(t)*0*t[:,4])
# create the material coefficient (defined on the reference domain)
sigma_ref = lambda x: 0.0*x[:,2]+(5.0+x[:,3]*5.0)*tn.logical_and(x[:,0]>=0.0,x[:,0]<0.5)*tn.logical_and(x[:,2]>0.3,x[:,2]<0.7)+1
#%% Instantiate the Geometry object and do some plots
geom = tt_iga.Geometry(Basis+Basis_param)
geom.interpolate([xparam, yparam, zparam])
tme = datetime.datetime.now()
Mass_tt = geom.mass_interp(eps=1e-11)
tme = datetime.datetime.now() -tme
print('Time mass matrix ',tme.total_seconds())
tme = datetime.datetime.now()
Stt = geom.stiffness_interp( func=None, func_reference = sigma_ref, qtt = False, verb=False)
tme = datetime.datetime.now() -tme
print('Time stiffness matrix ',tme.total_seconds())
f_tt = tntt.zeros(Stt.N)
# incorporate the boundary conditions and construct the system tensor operator
Pin_tt,Pbd_tt = tt_iga.get_projectors(N,[[1,1],[1,1],[0,0]])
# Pbd_tt = (1/N[0]) * Pbd_tt
U0 = 10
Pin_tt = Pin_tt ** tntt.eye([nl]*4)
Pbd_tt = Pbd_tt ** tntt.eye([nl]*4)
tmp = tn.zeros(N, dtype = tn.float64)
tmp[:,:,0] = U0
g_tt = Pbd_tt @ (tntt.TT(tmp) ** tntt.ones([nl]*4))
M_tt = Pin_tt@Stt@Pin_tt + Pbd_tt
rhs_tt = Pin_tt @ (Mass_tt @ f_tt - Stt@Pbd_tt@g_tt).round(1e-12) + g_tt
M_tt = M_tt.round(1e-9)
eps_solver = 1e-6
print('Solving in TT...')
tme_amen = datetime.datetime.now()
dofs_tt = tntt.solvers.amen_solve(M_tt.cuda(), rhs_tt.cuda(), x0 = tntt.ones(rhs_tt.N).cuda(), eps = eps_solver, nswp=40, kickrank=4).cpu()
tme_amen = (datetime.datetime.now() -tme_amen).total_seconds()
print('Time system solve in TT ',tme_amen)
```
Reference FEM solution of the same problem, used to check the IGA/TT result
```
params = [0.05, 0.05, 0.05, 0.05]
solver_fine = Solver()
solver_fine.set_params(params, 0.08)
tme_mesh_fem = solver_fine.create_mesh(False)
tme_solve_fem = solver_fine.solve(False)
fspace = tt_iga.Function(Basis+Basis_param)
fspace.dofs = dofs_tt
fval = fspace([tn.linspace(0,1,128),tn.tensor([0.5]),tn.linspace(0,1,128),tn.tensor([0.05]),tn.tensor([0.05]),tn.tensor([0.05]),tn.tensor([0.05])])
x,y,z = geom([tn.linspace(0,1,128),tn.tensor([0.5]),tn.linspace(0,1,128),tn.tensor([0.05]),tn.tensor([0.05]),tn.tensor([0.05]),tn.tensor([0.05])])
plt.figure()
plt.contour(x.full().numpy().squeeze(),z.full().numpy().squeeze(),fval.full().numpy().squeeze(), levels = 128)
plt.xlabel(r'$x_2$', fontsize=14)
plt.ylabel(r'$x_3$', fontsize=14)
plt.gca().tick_params(axis='both', labelsize=14)
cbar = plt.colorbar()
cbar.ax.tick_params(labelsize=14)
plt.savefig('./data/jump_solution.pdf')
plt.figure()
plt.contourf(x.full().numpy().squeeze(),z.full().numpy().squeeze(),fval.full().numpy().squeeze(), levels = 128)
plt.colorbar()
ufem = solver_fine(x.full().numpy().squeeze(),y.full().numpy().squeeze(),z.full().numpy().squeeze())
plt.figure()
plt.contourf(x.full().numpy().squeeze(),z.full().numpy().squeeze(),np.abs(fval.full().numpy().squeeze()-ufem), levels = 128)
plt.xlabel(r'$x_2$', fontsize=14)
plt.ylabel(r'$x_3$', fontsize=14)
plt.gca().tick_params(axis='both', labelsize=14)
cbar = plt.colorbar()
cbar.ax.tick_params(labelsize=14)
plt.savefig('./data/jump_error.pdf')
from matplotlib import cm
fig = geom.plot_domain([tn.tensor([0.05])]*4,[(0,1),(0,1),(0.0,1)],surface_color=None, wireframe = False,frame_color='k')
geom.plot_domain([tn.tensor([0.05])]*4,[(0.0,0.5),(0.0,1),(0.3,0.7)],fig = fig,surface_color=None,wireframe = False,frame_color='k')
ax = fig.gca()
C = fval.full().numpy().squeeze()
norm = matplotlib.colors.Normalize(vmin=C.min(),vmax=C.max())
C = plt.cm.jet(norm(C))
C[:,:,-1] = 1
ax.plot_surface(x.full().numpy().squeeze(),y.full().numpy().squeeze(),z.full().numpy().squeeze(),facecolors = C, antialiased=True,rcount=256,ccount=256,alpha=0.1)
fig.gca().set_xlabel(r'$x_1$')
fig.gca().set_ylabel(r'$x_2$')
fig.gca().set_zlabel(r'$x_3$')
# fig = plt.figure(figsize = (14, 9))
# ax = plt.axes(projection = '3d')
# ax.plot_surface(x.full().squeeze(), z.full().squeeze(), fval.full().squeeze(), facecolors = C)
fig = geom.plot_domain([tn.tensor([0.05]),tn.tensor([-0.05]),tn.tensor([0.05]),tn.tensor([0.05])],[(0,1),(0,1),(0.0,1)],surface_color='blue', wireframe = False,alpha=0.1)
geom.plot_domain([tn.tensor([0.05]),tn.tensor([-0.05]),tn.tensor([0.05]),tn.tensor([0.05])],[(0.0,0.5),(0.0,1),(0.3,0.7)],fig = fig,surface_color='green',wireframe = False)
fig.gca().zaxis.set_rotate_label(False)
fig.gca().set_xlabel(r'$x_1$', fontsize=14)
fig.gca().set_ylabel(r'$x_2$', fontsize=14)
fig.gca().set_zlabel(r'$x_3$', fontsize=14)
fig.gca().set_xticks([-1, 0, 1])
fig.gca().set_yticks([-1, 0, 1])
fig.gca().set_zticks([0, 0.5, 1])
fig.gca().view_init(15, -60)
fig.gca().tick_params(axis='both', labelsize=14)
plt.savefig('./data/cylinder_material1.pdf')
fig = geom.plot_domain([tn.tensor([0.05]),tn.tensor([0.05]),tn.tensor([0.05]),tn.tensor([0.05])],[(0,1),(0,1),(0.0,1)],surface_color='blue', wireframe = False,alpha=0.1)
geom.plot_domain([tn.tensor([0.05]),tn.tensor([0.05]),tn.tensor([0.05]),tn.tensor([0.05])],[(0.0,0.5),(0.0,1),(0.3,0.7)],fig = fig,surface_color='green',wireframe = False)
fig.gca().zaxis.set_rotate_label(False)
fig.gca().set_xlabel(r'$x_1$', fontsize=14)
fig.gca().set_ylabel(r'$x_2$', fontsize=14)
fig.gca().set_zlabel(r'$x_3$', fontsize=14)
fig.gca().set_xticks([-1, 0, 1])
fig.gca().set_yticks([-1, 0, 1])
fig.gca().set_zticks([0, 0.5, 1])
fig.gca().view_init(15, -60)
fig.gca().tick_params(axis='both', labelsize=14)
plt.savefig('./data/cylinder_material2.pdf')
```
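Since the solution `dofs_tt` is parametric, it can be evaluated at any parameter combination in $[-0.05, 0.05]$ without re-solving. A small sketch, mirroring the evaluation calls above at the opposite corner of the parameter box (the choice of $-0.05$ is arbitrary):

```
# Evaluate the parametric TT solution at theta = (-0.05, -0.05, -0.05, -0.05)
params_lo = [tn.tensor([-0.05])]*4
fval_lo = fspace([tn.linspace(0,1,128), tn.tensor([0.5]), tn.linspace(0,1,128)] + params_lo)
x_lo, y_lo, z_lo = geom([tn.linspace(0,1,128), tn.tensor([0.5]), tn.linspace(0,1,128)] + params_lo)

plt.figure()
plt.contourf(x_lo.full().numpy().squeeze(), z_lo.full().numpy().squeeze(),
             fval_lo.full().numpy().squeeze(), levels=128)
plt.xlabel(r'$x_2$')
plt.ylabel(r'$x_3$')
plt.colorbar()
```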
# Classifying Fashion-MNIST
Now it's your turn to build and train a neural network. You'll be using the [Fashion-MNIST dataset](https://github.com/zalandoresearch/fashion-mnist), a drop-in replacement for the MNIST dataset. MNIST is actually quite trivial with neural networks where you can easily achieve better than 97% accuracy. Fashion-MNIST is a set of 28x28 greyscale images of clothes. It's more complex than MNIST, so it's a better representation of the actual performance of your network, and a better representation of datasets you'll use in the real world.
<img src='assets/fashion-mnist-sprite.png' width=500px>
In this notebook, you'll build your own neural network. For the most part, you could just copy and paste the code from Part 3, but you wouldn't be learning. It's important for you to write the code yourself and get it to work. Feel free to consult the previous notebooks though as you work through this.
First off, let's load the dataset through torchvision.
```
import torch
from torchvision import datasets, transforms
import helper
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))])
# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
```
Here we can see one of the images.
```
image, label = next(iter(trainloader))
helper.imshow(image[0,:]);
```
## Building the network
Here you should define your network. As with MNIST, each image is 28x28 which is a total of 784 pixels, and there are 10 classes. You should include at least one hidden layer. We suggest you use ReLU activations for the layers and to return the logits or log-softmax from the forward pass. It's up to you how many layers you add and the size of those layers.
```
# TODO: Define your network architecture here
from collections import OrderedDict
from torch import nn
input_size = 784
hidden_sizes = [256, 128, 64]
output_size = 10
model = nn.Sequential(OrderedDict([('fc1', nn.Linear(input_size, hidden_sizes[0])),
('relu1', nn.ReLU()),
('fc2', nn.Linear(hidden_sizes[0], hidden_sizes[1])),
('relu2', nn.ReLU()),
('fc3', nn.Linear(hidden_sizes[1], hidden_sizes[2])),
('relu3', nn.ReLU()),
('output', nn.Linear(hidden_sizes[2], output_size)),
('logSoftmax', nn.LogSoftmax(dim = 1))]))
model
```
# Train the network
Now you should create your network and train it. First you'll want to define [the criterion](http://pytorch.org/docs/master/nn.html#loss-functions) ( something like `nn.CrossEntropyLoss`) and [the optimizer](http://pytorch.org/docs/master/optim.html) (typically `optim.SGD` or `optim.Adam`).
Then write the training code. Remember the training pass is a fairly straightforward process:
* Make a forward pass through the network to get the logits
* Use the logits to calculate the loss
* Perform a backward pass through the network with `loss.backward()` to calculate the gradients
* Take a step with the optimizer to update the weights
By adjusting the hyperparameters (hidden units, learning rate, etc), you should be able to get the training loss below 0.4.
```
from torch import optim
# TODO: Create the network, define the criterion and optimizer
# We defined the network above
# Criterion: We use the negative log likelihood as our output is logSoftMax
criterion = nn.NLLLoss()
# We just pick an optimizer - Adam optimizer is widely used
# We give it a learning rate as well as the parameters of the model
optimizer = optim.Adam(model.parameters(), lr=0.01)
# TODO: Train the network here
# Now that the model is defined, we can finally start training
epochs = 10
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
# As we did for the numbers, we flatten the images
images = images.view(images.shape[0], -1)
# We reset the gradients every time
optimizer.zero_grad()
        # 1. Make a forward pass through the network
output = model(images)
# 2. Use the logits to calculate the loss
# We use the computed logits from our output
loss = criterion(output, labels)
# 3. Perform a backward pass through the network with loss.backward() to calculate the gradients
loss.backward()
# 4. Take a step with the optimizer to update the weights
optimizer.step()
running_loss += loss.item()
else:
print(f"Training loss: {running_loss/len(trainloader)}")
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
# Test out your network!
dataiter = iter(testloader)
images, labels = next(dataiter)
img = images[0]
# Convert 2D image to 1D vector
img = img.resize_(1, 784)
with torch.no_grad():
logps = model(img)
# TODO: Calculate the class probabilities (softmax) for img
ps = torch.exp(logps)
# Plot the image and probabilities
helper.view_classify(img.resize_(1, 28, 28), ps, version='Fashion')
```
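The cell above only looks at a single test image. A short sketch of a full test-set accuracy check (my addition, not part of the original exercise):

```
# Compute classification accuracy over the entire Fashion-MNIST test set
correct, total = 0, 0
with torch.no_grad():
    for images, labels in testloader:
        images = images.view(images.shape[0], -1)
        log_ps = model(images)
        preds = torch.argmax(log_ps, dim=1)
        correct += (preds == labels).sum().item()
        total += labels.shape[0]
print(f"Test accuracy: {correct / total:.3f}")
```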
```
import pandas as pd
import numpy as np
weather= pd.read_csv('weather.csv')
weather['Station'].nunique()
weather.columns
weather.head(2)
# Checking for missing data
# This doesn't work as intended: the loop iterates over the column *names*,
# so `'M' in x` checks whether the letter M appears in a column name,
# not whether the column's values contain the 'M' missing-data marker
for x in weather.columns:
    if 'M' in x:
        print('yes')
# Kihoon suggested this method, which is definitely more useful
(weather[weather.columns] == 'M').sum().sort_values(ascending=False)
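# Added illustration: express the 'M' counts as a share of rows, which makes it clearer
# that Water1 is essentially always missing
((weather == 'M').sum() / len(weather)).sort_values(ascending=False).head()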
weather['Depth'].value_counts()
weather['SnowFall'].value_counts()
# Given that Water1 contains only missing values, and Depth and SnowFall also carry no useful information,
# I'm going to go ahead and drop these columns
weather = weather.drop(['Depth', 'SnowFall', 'Water1'], axis=1)
def date_separate(weather):
    weather = weather.copy()
    weather['Year'] = pd.DatetimeIndex(weather['Date']).year
    weather['Month'] = pd.DatetimeIndex(weather['Date']).month
    weather['Day'] = pd.DatetimeIndex(weather['Date']).day
    return weather
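# Added illustration (equivalent alternative, does not modify `weather`): parse the Date
# column once and use the .dt accessor instead of building three DatetimeIndex objects
_dates = pd.to_datetime(weather['Date'])
pd.DataFrame({'Year': _dates.dt.year, 'Month': _dates.dt.month, 'Day': _dates.dt.day}).head(2)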
weather['CodeSum'].isnull().sum()
weather['CodeSum'].head(5)
# CodeSum looks empty for many rows, yet isnull() reports 0 missing values
# That's because the 'missing' entries are stored as a single space character rather than as NaN
(weather[weather.columns] == ' ').sum().sort_values(ascending=False)
weather['CodeSum'].value_counts()
# I am going to drop any type of CodeSum that doesn't appear >20 times.
weather['Station'].value_counts()
# First, I am going to drop all the rows that have a space
weather = weather[~weather['CodeSum'].isin([' '])]
# Next, keep only the rows whose CodeSum value occurs more than 20 times
low = weather['CodeSum'].value_counts()
weather = weather[weather['CodeSum'].isin(low.index[low > 20])]
# Please let me know if you think this is not a smart idea, because maybe the largest numbers of trapped
# mosquitoes occur under rare weather conditions that we would then be throwing away. I just figured that
# since those conditions are infrequent, we probably shouldn't give them much weight.
weather['CodeSum'].value_counts()
# We see that the most common weather conditions are Rain, Mist, Haze, Drizzle, and Thunderstorm,
# along with combinations of those codes
weather=date_separate(weather)
weather.head()
weather['Station'].nunique()
weather.describe()
# Here I am dropping all the rows in Sunset that contain '-'
weather = weather[~weather['Sunset'].isin(['-'])]
weather['Sunset'].dtypes
weather['Station'].nunique()
# After keeping only the rows whose Sunset value is not '-', the number of stations
# dropped to one, so apparently only one of the two stations reports sunset times
# Dropping those rows does not change the dtype: the remaining values are still strings,
# so the column stays object until it is explicitly cast to integer
weather['Sunset'] = weather.Sunset.astype(int)
weather['Sunset'].head(2)
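# Added illustration (assumption: Sunset is encoded as HHMM, e.g. 1752 = 17:52): if the
# column is used as a numeric feature it may be more meaningful as minutes after midnight;
# shown here without modifying the DataFrame
(weather['Sunset'] // 100 * 60 + weather['Sunset'] % 100).head(2)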
weather.dtypes
# Inspecting further into why Tavg has dtype object
weather['Tavg'].dtypes
# Note: the elements of an object column are plain Python strings, so `type(x) == object`
# is never True, and list.append() returns None, which is why the original loop only printed None
objects = []
for index, x in enumerate(weather['Tavg']):
    if isinstance(x, str):
        objects.append(index)
len(objects)
# The values are just numbers stored as strings, so convert the column to integer
weather['Tavg'] = weather.Tavg.astype(int)
weather.dtypes
weather['AvgSpeed']=weather.AvgSpeed.astype(float)
weather['Heat']=weather.Heat.astype(int)
weather['Cool']=weather.Cool.astype(int)
weather.describe()
(weather[weather.columns] == 'M').sum().sort_values(ascending=False)
weather.replace(to_replace='M', value=np.nan, inplace=True)
# 'T' in PrecipTotal stands for a trace amount of precipitation; blank those values out
# (note: only the PrecipTotal column is modified, not the whole row)
weather.loc[weather['PrecipTotal'].str.contains('T'), 'PrecipTotal'] = ''
# Convert empty to null values
weather.replace(to_replace='', value=np.nan, inplace=True)
# Drop all NaN values
weather= weather.dropna()
```
kind: github_jupyter | quality_prob: 0.261897 | learning_prob: 0.396769